id (stringlengths 40) | pid (stringlengths 42) | input (stringlengths 8.37k–169k) | output (stringlengths 1–1.63k)
---|---|---|---|
205163715f345af1b5523da6f808e6dbf5f5dd47 | 205163715f345af1b5523da6f808e6dbf5f5dd47_0 | Q: How many papers are used in experiment?
Text: Introduction
The ACL Anthology (AA) is a digital repository of tens of thousands of articles on Natural Language Processing (NLP) / Computational Linguistics (CL). It includes papers published in the family of ACL conferences as well as in other NLP conferences such as LREC and RANLP. AA is the largest single source of scientific literature on NLP.
This project, which we call NLP Scholar, examines the literature as a whole to identify broad trends in productivity, focus, and impact. We will present the analyses in a sequence of questions and answers. The questions range from fairly mundane to oh-that-will-be-good-to-know. Our broader goal here is simply to record the state of the AA literature: who and how many of us are publishing? what are we publishing on? where and in what form are we publishing? and what is the impact of our publications? The answers are usually in the form of numbers, graphs, and inter-connected visualizations.
We focus on the following aspects of NLP research: size, demographics, areas of research, impact, and correlation of citations with demographic attributes (age and gender).
Target Audience: The analyses presented here are likely to be of interest to any NLP researcher. This might be particularly the case for those that are new to the field and wish to get a broad overview of the NLP publishing landscape. On the other hand, even seasoned NLP'ers have likely wondered about the questions raised here and might be interested in the empirical evidence.
Data: The analyses presented below are based on information about the papers taken directly from AA (as of June 2019) and citation information extracted from Google Scholar (as of June 2019). Thus, all subsequent papers and citations are not included in the analysis. A fresh data collection is planned for January 2020.
Interactive Visualizations: The visualizations we are developing for this work (using Tableau) are interactive—so one can hover, click to select and filter, move sliders, etc. Since this work is high in the number of visualizations, the main visualizations are presented as figures in the paper and some sets of visualizations are pointed to online. The interactive visualizations and data will be made available through the first author's website after peer review.
Related Work: This work builds on past research, including that on Google Scholar BIBREF0, BIBREF1, BIBREF2, BIBREF3, on the analysis of NLP papers BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, on citation intent BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, and on measuring scholarly impact BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21.
Caveats and Ethical Considerations: We list several caveats and limitations throughout the paper. A compilation of these is also available online in the About NLP Scholar page.
The analyses presented here are also available as a series of blog posts.
Size
Q. How big is the ACL Anthology (AA)? How is it changing with time?
A. As of June 2019, AA had $\sim $50K entries, however, this includes some number of entries that are not truly research publications (for example, forewords, prefaces, table of contents, programs, schedules, indexes, calls for papers/participation, lists of reviewers, lists of tutorial abstracts, invited talks, appendices, session information, obituaries, book reviews, newsletters, lists of proceedings, lifetime achievement awards, erratum, and notes). We discard them for the analyses here. (Note: CL journal includes position papers like squibs, letter to editor, opinion, etc. We do not discard them.) We are then left with 44,896 articles. Figure FIGREF6 shows a graph of the number of papers published in each of the years from 1965 to 2018.
Discussion: Observe that there was a spurt in the 1990s, but things really took off after the year 2000, and the growth continues. Also, note that the number of publications is considerably higher in alternate years. This is due to biennial conferences. Since 1998 the largest of such conferences has been LREC (in 2018 alone, LREC had over 700 main conference papers and additional papers from its 29 workshops). COLING, another biennial conference (also occurring in the even years), has about 45% as many main conference papers as LREC.
Q. How many people publish in the ACL Anthology (NLP conferences)?
A. Figure FIGREF7 shows a graph of the number of authors (of AA papers) over the years:
Discussion: It is a good sign for the field to have a growing number of people join its ranks as researchers. A further interesting question would be:
Q. How many people are actively publishing in NLP?
A. It is hard to know the exact number, but we can determine the number of people who have published in AA in the last N years.
#people who published at least one paper in 2017 and 2018 (2 years): $\sim $12k (11,957 to be precise)
#people who published at least one paper from 2015 through 2018 (4 years): $\sim $17.5k (17,457 to be precise)
Of course, some number of researchers published NLP papers in non-AA venues, and some number are active NLP researchers who may not have published papers in the last few years.
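For concreteness, here is a minimal sketch of the distinct-author count described above. It assumes a list of AA paper records with `year` and `authors` fields; these field names are illustrative, not the actual AA metadata schema.

```python
# A minimal sketch of the "active authors" count, assuming a list of AA paper
# records with `year` and `authors` fields (illustrative names, not the
# actual AA metadata schema).
def active_authors(papers, start_year, end_year):
    """Distinct authors with at least one AA paper in [start_year, end_year]."""
    return {author
            for paper in papers
            if start_year <= paper["year"] <= end_year
            for author in paper["authors"]}

# For example (numbers from the analysis above):
#   len(active_authors(papers, 2017, 2018))  ->  ~12k
#   len(active_authors(papers, 2015, 2018))  ->  ~17.5k
```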
Q. How many journal papers exist in the AA? How many main conference papers? How many workshop papers?
A. See Figure FIGREF8.
Discussion: The number of journal papers is dwarfed by the number of conference and workshop papers. (This is common in computer science. Even though NLP is a broad interdisciplinary field, the influence of computer science practices on NLP is particularly strong.) Shared task and system demo papers are relatively new (introduced in the 2000s), but their numbers are already significant and growing.
Creating a separate class for “Top-tier Conference” is somewhat arbitrary, but it helps make certain comparisons more meaningful (for example, when comparing the average number of citations, etc.). For this work, we consider ACL, EMNLP, NAACL, COLING, and EACL as top-tier conferences, but certainly other groupings are also reasonable.
Q. How many papers have been published at ACL (main conference papers)? What are the other NLP venues and what is the distribution of the number of papers across various CL/NLP venues?
A. # ACL (main conference papers) as of June 2018: 4,839
The same workshop can co-occur with different conferences in different years, so we grouped all workshop papers in their own class. We did the same for tutorials, system demonstration papers (demos), and student research papers. Figure FIGREF9 shows the number of main conference papers for various venues and paper types (workshop papers, demos, etc.).
Discussion: Even though LREC is a relatively new conference that occurs only once in two years, it tends to have a high acceptance rate ($\sim $60%), and enjoys substantial participation. Thus, LREC is already the largest single source of NLP conference papers. SemEval, which started as SenseEval in 1998 and occurred once in two or three years, has now morphed into an annual two-day workshop—SemEval. It is the largest single source of NLP shared task papers.
Demographics (focus of analysis: gender, age, and geographic diversity)
NLP, like most other areas of research, suffers from poor demographic diversity. Representation of certain nationalities, races, genders, languages, income levels, ages, physical abilities, etc., is very low. This impacts the breadth of technologies we create, how useful they are, and whether they reach those that need them most. In this section, we analyze three specific attributes among many that deserve attention: gender (specifically, the number of women researchers in NLP), age (more precisely, the number of years of NLP paper publishing experience), and the amount of research in various languages (which loosely correlates with geographic diversity).
Demographics (focus of analysis: gender, age, and geographic diversity) ::: Gender
The ACL Anthology does not record demographic information about the paper authors. (Until recently, ACL and other NLP conferences did not record demographic information of the authors.) However, many first names have strong associations with a male or female gender. We will use these names to estimate the percentage of female first authors in NLP.
The US Social Security Administration publishes a database of names and genders of newborns. We use the dataset to identify 55,133 first names that are strongly associated with females (probability $\ge $99%) and 29,873 first names that are strongly associated with males (probability $\ge $99%). (As a side, it is interesting to note that there is markedly greater diversity in female names than in male names.) We identified 26,637 of the 44,896 AA papers ($\sim $60%) where the first authors have one of these names and determine the percentage of female first author papers across the years. We will refer to this subset of AA papers as AA*.
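A minimal sketch of this name-based gender association is shown below. It assumes a local copy of the US Social Security Administration baby-names files; the file path, field handling, and exact thresholding are illustrative assumptions rather than the exact pipeline used here.

```python
# A minimal sketch of the name-based gender association described above,
# assuming a local copy of the US SSA baby-names files (one CSV row per name:
# name,sex,count). File path and aggregation details are assumptions.
import csv
import glob
from collections import defaultdict

name_counts = defaultdict(lambda: {"F": 0, "M": 0})
for path in glob.glob("names/yob*.txt"):          # hypothetical local path
    with open(path, newline="", encoding="utf-8") as f:
        for name, sex, count in csv.reader(f):
            name_counts[name.lower()][sex] += int(count)

def gender_association(first_name, threshold=0.99):
    """Return 'female', 'male', or 'unknown' for a given first name."""
    c = name_counts.get(first_name.lower())
    if not c or c["F"] + c["M"] == 0:
        return "unknown"
    p_female = c["F"] / (c["F"] + c["M"])
    if p_female >= threshold:
        return "female"
    if p_female <= 1 - threshold:
        return "male"
    return "unknown"
```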
Note the following caveats associated with this analysis:
The names dataset used has a lower representation of names from nationalities other than the US. However, there is a large expatriate population living in the US.
Chinese names (especially in the romanized form) are not good indicators of gender. Thus the method presented here disregards most Chinese names, and the results of the analysis apply to the group of researchers excluding those with Chinese names.
The dataset only records names associated with two genders.
The approach presented here is meant to be an approximation in the absence of true gender information.
Q. What percent of the AA* papers have female first authors (FFA)? How has this percentage changed with time?
A. Overall FFA%: 30.3%. Figure FIGREF16 shows how FFA% has changed with time. Common paper title words and FFA% of papers that have those words are shown in the bottom half of the image. Note that the slider at the bottom has been set to 400, i.e., only those title words that occur in 400 or more papers are shown. The legend on the bottom right shows that low FFA scores are shown in shades of blue, whereas relatively higher FFA scores are shown in shades of green.
Discussion: Observe that as a community, we are far from obtaining male-female parity in terms of first authors. A further striking (and concerning) observation is that the female first author percentage has not improved since the years 1999 and 2000, when the FFA percentages were highest (32.9% and 32.8%, respectively). In fact, there even seems to be a slight downward trend in recent years. The calculations shown above are for the percentage of papers that have female first authors. The percentage of female first authors is about the same ($\sim $31%). On average, male authors had a slightly higher number of publications than female authors.
To put these numbers in context, the percentage of female scientists world wide (considering all areas of research) has been estimated to be around 30%. The reported percentages for many computer science sub-fields are much lower. (See Women in Science (2015).) The percentages are much higher for certain other fields such as psychology and linguistics. (See this study for psychology and this study for linguistics.) If we can identify ways to move the needle on the FFA percentage and get it closer to 50% (or more), NLP can be a beacon to many other fields, especially in the sciences.
FFA percentages are particularly low for papers that have parsing, neural, and unsupervised in the title. There are some areas within NLP that enjoy a healthier female-male parity in terms of first authors of papers. Figure FIGREF20 shows FFA percentages for papers that have the word discourse in the title. There is burgeoning research on neural NLP in the last few years. Figure FIGREF21 shows FFA percentages for papers that have the word neural in the title.
Figure FIGREF22 shows lists of terms with the highest and lowest FFA percentages, respectively, when considering terms that occur in at least 50 paper titles (instead of 400 in the analysis above). Observe that FFA percentages are relatively higher in non-English European language research such as papers on Russian, Portuguese, French, and Italian. FFA percentages are also relatively higher for certain areas of NLP such as work on prosody, readability, discourse, dialogue, paraphrasing, and individual parts of speech such as adjectives and verbs. FFA percentages are particularly low for papers on theoretical aspects of statistical modelling, and areas such as machine translation, parsing, and logic. The full lists of terms and FFA percentages will be made available with the rest of the data.
Demographics (focus of analysis: gender, age, and geographic diversity) ::: Academic Age
While the actual age of NLP researchers might be an interesting aspect to explore, we do not have that information. Thus, instead, we can explore a slightly different (and perhaps more useful) attribute: NLP academic age. We can define NLP academic age as the number of years one has been publishing in AA. So if this is the first year one has published in AA, then their NLP academic age is 1. If one published their first AA paper in 2001 and their latest AA paper in 2018, then their academic age is 18.
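A minimal sketch of this computation follows, under the assumption that we already have each author's list of AA publication years (author-name disambiguation, which the real pipeline needs, is glossed over here).

```python
# A minimal sketch of the NLP academic age computation, assuming we already
# have each author's list of AA publication years (author-name disambiguation
# is glossed over here).
def nlp_academic_age(pub_years, as_of_year):
    """Number of years one has been publishing in AA, counting the first year as 1."""
    return as_of_year - min(pub_years) + 1

# First AA paper in 2001, latest in 2018:
#   nlp_academic_age([2001, 2007, 2018], 2018)  ->  18
# First AA paper this year:
#   nlp_academic_age([2019], 2019)  ->  1
```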
Q. How old are we? That is, what is the average NLP academic age of those who published papers in 2018? How has the average changed over the years? That is, have we been getting older or younger? What percentage of authors that published in 2018 were publishing their first AA paper?
A. Average NLP Academic Age of people that published in 2018: 5.41 years
Median NLP Academic Age of people that published in 2018: 2 years
Percentage of 2018 authors that published their first AA paper in 2018: 44.9%
Figure FIGREF24 shows how these numbers have changed over the years.
Discussion: Observe that the average academic age increased steadily over the years until 2016 and 2017, when the trend shifted and the average academic age started to decrease. The median age was 1 year for most of the 1965 to 1990 period, 2 years for most of the 1991 to 2006 period, 3 years for most of the 2007 to 2015 period, and back to 2 years since then. The first-time AA author percentage decreased until about 1988, after which it stayed roughly steady at around 48% until 2004, with occasional bursts to $\sim $56%. Since 2005, the first-time author percentage has gone up and down every other year. It seems that the even years (which are also LREC years) have a higher first-time author percentage. Perhaps this oscillation in the first-time author percentage is related to LREC’s high acceptance rate.
Q. What is the distribution of authors in various academic age bins? For example, what percentage of authors that published in 2018 had an academic age of 2, 3, or 4? What percentage had an age between 5 and 9? And so on?
A. See Figure FIGREF25.
Discussion: Observe that about 65% of the authors that published in 2018 had an academic age of less than 5. This number declined steadily since 1965, was in the 60 to 70% range in the 1990s, rose to the 70 to 72% range in the early 2000s, then declined again until it reached its lowest value ($\sim $60%) in 2010, and has since risen steadily until 2018 (65%). Thus, even though it may sometimes seem at recent conferences that there is a large influx of new people into NLP (and that is true), proportionally speaking, the average NLP academic age is higher (more experienced) than it has been for much of the field's history.
Demographics (focus of analysis: gender, age, and geographic diversity) ::: Location (Languages)
Automatic systems with natural language abilities are becoming increasingly pervasive in our lives. Not only are they sources of convenience, but they are also crucial in making sure that large sections of society and the world are not left behind by the information divide. Thus, the limits of what automatic systems can do in a language limit the world for the speakers of that language.
We know that much of the research in NLP is on English or uses English datasets. Many reasons have been proffered, and we will not go into that here. Instead, we will focus on estimating how much research pertains to non-English languages.
We will make use of the idea that often when work is done focusing on a non-English language, then the language is mentioned in the title. We collected a list of 122 languages indexed by Wiktionary and looked for the presence of these words in the titles of AA papers. (Of course there are hundreds of other lesser known languages as well, but here we wanted to see the representation of these more prominent languages in NLP literature.)
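A minimal sketch of this language-mention count is given below. Whole-word, case-insensitive matching is an assumption; multi-word language names and spelling variants may be handled differently in the actual analysis.

```python
# A minimal sketch of the language-mention count, assuming `languages` holds
# the 122 language names from Wiktionary and `titles` the AA paper titles.
# Whole-word, case-insensitive matching is an assumption.
import re
from collections import Counter

def language_mentions(titles, languages):
    counts = Counter()
    patterns = {lang: re.compile(r"\b" + re.escape(lang) + r"\b", re.IGNORECASE)
                for lang in languages}
    for title in titles:
        for lang, pattern in patterns.items():
            if pattern.search(title):
                counts[lang] += 1
    return counts
```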
Figure FIGREF27 is a treemap of the 122 languages arranged alphabetically and shaded such that languages that appear more often in AA paper titles have a darker shade of green.
Discussion: Even though the amount of work done on English is much larger than that on any other language, often the word English does not appear in the title, and this explains why English is not the first (but the second-most) common language name to appear in the titles. This is likely due to the fact that many papers fail to mention the language of study or the language of the datasets used if it is English. There is growing realization in the community that this is not quite right. However, the language of study can be named in other less prominent places than the title, for example the abstract, introduction, or when the datasets are introduced, depending on how central it is to the paper.
We can see from the treemap that the most widely spoken Asian and Western European languages enjoy good representation in AA. These include: Chinese, Arabic, Korean, Japanese, and Hindi (Asian) as well as French, German, Swedish, Spanish, Portuguese, and Italian (European). This is followed by the relatively less widely spoken European languages (such as Russian, Polish, Norwegian, Romanian, Dutch, and Czech) and Asian languages (such as Turkish, Thai, and Urdu). Most of the well-represented languages are from the Indo-European language family. Yet, even in the limited landscape of the most common 122 languages, vast swathes are barren with inattention. Notable among these is the extremely low representation of languages from Africa, languages from non-Indo-European language families, and Indigenous languages from around the world.
Areas of Research
Natural Language Processing addresses a wide range of research questions and tasks pertaining to language and computing. It encompasses many areas of research that have seen an ebb and flow of interest over the years. In this section, we examine the terms that have been used in the titles of ACL Anthology (AA) papers. The terms in a title are particularly informative because they are used to clearly and precisely convey what the paper is about. Some journals ask authors to separately include keywords in the paper or in the meta-information, but AA papers are largely devoid of this information. Thus titles are an especially useful source of keywords for papers—keywords that are often indicative of the area of research.
Keywords could also be extracted from abstracts and papers; we leave that for future work. Further work is also planned on inferring areas of research using word embeddings, techniques from topic modelling, and clustering. There are clear benefits to performing analyses using that information. However, those approaches can be sensitive to the parameters used. Here, we keep things simple and explore counts of terms in paper titles. Thus the results are easily reproducible and verifiable.
Caveat: Even though there is an association between title terms and areas of research, the association can be less strong for some terms and areas. We use the association as one (imperfect) source of information about areas of research. This information may be combined with other sources of information to draw more robust conclusions.
Title Terms: The title has a privileged position in a paper. It serves many functions, and here are three key ones (from an article by Sneha Kulkarni): "A good research paper title: 1. Condenses the paper's content in a few words 2. Captures the readers' attention 3. Differentiates the paper from other papers of the same subject area".
If we examine the titles of papers in the ACL Anthology, we would expect that because of Function 1 many of the most common terms will be associated with the dominant areas of research. Function 2 (or attempting to have a catchy title) on the other hand, arguably leads to more unique and less frequent title terms. Function 3 seems crucial to the effectiveness of a title; and while at first glance it may seem like this will lead to unique title terms, often one needs to establish a connection with something familiar in order to convey how the work being presented is new or different.
It is also worth noting that a catchy term today will likely not be catchy tomorrow. Similarly, a distinctive term today may not be distinctive tomorrow. For example, early papers used neural in the title to distinguish themselves from non-neural approaches, but these days neural is not particularly discriminative as far as NLP papers go.
Thus, competing and complex interactions are involved in the making of titles. Nonetheless, an arguable hypothesis is that: broad trends in interest towards an area of research will be reflected, to some degree, in the frequencies of title terms associated with that area over time. However, even if one does not believe in that hypothesis, it is worth examining the terms in the titles of tens of thousands of papers in the ACL Anthology—spread across many decades.
Q. What terms are used most commonly in the titles of the AA papers? How has that changed with time?
A. Figure FIGREF28 shows the most common unigrams (single word) and bigrams (two-word sequences) in the titles of papers published from 1980 to 2019. (Ignoring function words.) The timeline graph at the bottom shows the percentage of occurrences of the unigrams over the years (the colors of the unigrams in the Timeline match those in the Title Unigram list). Note: For a given year, the timeline graph includes a point for a unigram if the sum of the frequency of the unigram in that year and the two years before it is at least ten. The period before 1980 is not included because of the small number of papers.
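A minimal sketch of the counting behind such a figure is shown below. It assumes paper records with `title` and `year` fields and a function-word list `stopwords` (illustrative names); the 3-year threshold of ten follows the note above, while tokenization details are assumptions.

```python
# A minimal sketch of the title-term counts, assuming paper records with
# `title` and `year` fields and a function-word list; tokenization details
# are assumptions.
from collections import Counter, defaultdict

def title_ngrams_by_year(papers, n=1, stopwords=frozenset()):
    by_year = defaultdict(Counter)
    for paper in papers:
        tokens = [t for t in paper["title"].lower().split() if t not in stopwords]
        ngrams = (" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
        by_year[paper["year"]].update(ngrams)
    return by_year

def include_point(by_year, term, year, min_count=10):
    """Plot a point only if the term's frequency over the year and the two
    years before it sums to at least ten (the rule described above)."""
    return sum(by_year[y][term] for y in (year - 2, year - 1, year)) >= min_count
```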
Discussion: Appropriately enough, the most common term in the titles of NLP papers is language. The presence of high-ranking terms pertaining to machine translation suggests that it is an area of research that has received considerable attention. Other areas associated with the high-frequency title terms include lexical semantics, named entity recognition, question answering, word sense disambiguation, and sentiment analysis. In fact, the common bigrams in the titles often correspond to names of NLP research areas. Some of the bigrams, like shared task and large scale, are not areas of research, but rather mechanisms or trends of research that apply broadly to many areas. The unigrams also provide additional insights, such as the interest of the community in the Chinese language, and in areas such as speech and parsing.
The Timeline graph is crowded in this view, but clicking on a term from the unigram list will filter out all other lines from the timeline. This is especially useful for determining whether the popularity of a term is growing or declining. (One can already see from above that neural has broken away from the pack in recent years.) Since there are many lines in the Timeline graph, Tableau labels only some (you can see neural and machine). However, hovering over a line, in the eventual interactive visualization, will display the corresponding term—as shown in the figure.
Despite being busy, the graph sheds light on the relative dominance of the most frequent terms and how that has changed with time. The vocabulary of title words is smaller when considering papers from the 1980s than in recent years. (As would be expected, since the number of papers then was also much smaller.) Further, dominant terms such as language and translation accounted for a higher percentage than in recent years, where there is a much larger diversity of topics and the dominant research areas are not as dominant as they once were.
Q. What are the most frequent unigrams and bigrams in the titles of recent papers?
A. Figure FIGREF29 shows the most frequent unigrams and bigrams in the titles of papers published 2016 Jan to 2019 June (time of data collection).
Discussion: Some of the terms that have made notable gains in the top 20 unigrams and bigrams lists in recent years include: neural machine (presumably largely due to the phrase neural machine translation), neural network(s), word embeddings, recurrent neural, deep learning and the corresponding unigrams (neural, networks, etc.). We also see gains for terms related to shared tasks such as SemEval and task.
The sets of most frequent unigrams and bigrams in the titles of AA papers from various time spans are available online. Apart from clicking on terms, one can also enter the query (say parsing) in the search box at the bottom. Apart from filtering the timeline graph (bottom), this action also filters the unigram list (top left) to provide information only about the search term. This is useful because the query term may not be one of the visible top unigrams.
Figure FIGREF31 shows the timeline graph for parsing.
Discussion: Parsing seems to have enjoyed considerable attention in the 1980s, began a period of steep decline in the early 1990s, and has seen a gradual decline ever since. One can enter multiple terms in the search box or shift/command click multiple terms to show graphs for more than one term.
Figure FIGREF32 shows the timelines for the three bigrams statistical machine, neural machine, and machine translation:
Discussion: The graph indicates that there was a spike in machine translation papers in 1996, but the number of papers dropped substantially after that. Yet, its numbers have been comparatively much higher than other terms. One can also see the rise of statistical machine translation in the early 2000s followed by its decline with the rise of neural machine translation.
Impact
Research articles can have impact in a number of ways—pushing the state of the art, answering crucial questions, finding practical solutions that directly help people, making a new generation of potential scientists excited about a field of study, and more. As scientists, it seems attractive to quantitatively measure scientific impact, and this is particularly appealing to governments and funding agencies; however, it should be noted that individual measures of research impact are limited in scope—they measure only some kinds of contributions.
Impact ::: Citations
The most commonly used metrics of research impact are derived from citations. A citation of a scholarly article is the explicit reference to that article. Citations serve many functions. However, a simplifying assumption is that regardless of the reason for citation, every citation counts as credit to the influence or impact of the cited work. Thus several citation-based metrics have emerged over the years including: number of citations, average citations, h-index, relative citation ratio, and impact factor.
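As a quick illustration of one of the metrics listed above, the h-index is the largest h such that the author (or collection of papers) has h papers with at least h citations each; a minimal sketch follows.

```python
# A quick sketch of the h-index: the largest h such that there are h papers
# with at least h citations each.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h

# h_index([10, 8, 5, 4, 3]) -> 4    h_index([25, 8, 5, 3, 3]) -> 3
```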
It is not always clear why some papers get lots of citations and others do not. One can argue that highly cited papers have captured the imagination of the field: perhaps because they were particularly creative, opened up a new area of research, pushed the state of the art by a substantial degree, tested compelling hypotheses, or produced useful datasets, among other things.
Note, however, that the number of citations is not always a reflection of the quality or importance of a piece of work. Note also that there are systematic biases that prevent certain kinds of papers from accruing citations, especially when the contributions of a piece of work are atypical, not easily quantified, or in an area where the number of scientific publications is low. Further, the citation process can be abused, for example, by egregious self-citations.
Nonetheless, given the immense volume of scientific literature, the relative ease with which one can track citations using services such as Google Scholar and Semantic Scholar, and given the lack of other easily applicable and effective metrics, citation analysis is an imperfect but useful window into research impact.
In this section, we examine citations of AA papers. We focus on two aspects:
Most cited papers: We begin by looking at the most cited papers overall and in various time spans. We will then look at most cited papers by paper-type (long, short, demo, etc) and venue (ACL, LREC, etc.). Perhaps these make interesting reading lists. Perhaps they also lead to a qualitative understanding of the kinds of AA papers that have received lots of citations.
Aggregate citation metrics by time span, paper type, and venue: Access to citation information allows us to calculate aggregate citation metrics such as average and median citations of papers published in different time periods, published in different venues, etc. These can help answer questions such as: on average, how well cited are papers published in the 1990s? on average, how many citations does a short paper get? how many citations does a long paper get? how many citations for a workshop paper? etc.
Data: The analyses presented below are based on information about the papers taken directly from AA (as of June 2019) and citation information extracted from Google Scholar (as of June 2019). We extracted citation information from Google Scholar profiles of authors who had a Google Scholar Profile page and had published at least three papers in the ACL Anthology. This yielded citation information for about 75% of the papers (33,051 out of the 44,896 papers). We will refer to this subset of the ACL Anthology papers as AA’. All citation analysis below is on AA’.
Impact ::: #Citations and Most Cited Papers
Q. How many citations have the AA’ papers received? How is that distributed among the papers published in various decades?
A. $\sim $1.2 million citations (as of June 2019). Figure FIGREF36 shows a timeline graph where each year has a bar with height corresponding to the number of citations received by papers published in that year. Further, the bar has colored fragments corresponding to each of the papers and the height of a fragment (paper) is proportional to the number of citations it has received. Thus it is easy to spot the papers that received a large number of citations, and the years when the published papers received a large number of citations. Hovering over individual papers reveals an information box showing the paper title, authors, year of publication, publication venue, and #citations.
Discussion: With time, not only has the number of papers grown, but so has the number of high-citation papers. We see a marked jump in the 1990s over the previous decades, but the 2000s are the most notable in terms of the high number of citations. The 2010s papers will likely surpass the 2000s papers in the years to come.
Q. What are the most cited papers in AA'?
A. Figure FIGREF37 shows the most cited papers in AA'.
Discussion: We see that the top-tier conference papers (green) are some of the most cited papers in AA’. There are a notable number of journal papers (dark green) in the most cited list as well, but very few demo (purple) and workshop (orange) papers.
In the interactive visualizations (to be released later), one can click on the url to be taken directly to the paper’s landing page on the ACL Anthology website. That page includes links to meta information, the pdf, and associated files such as videos and appendices. There will also be functionality to download the lists. Alas, copying the lists from the screenshots shown here is not easy.
Q. What are the most cited AA' journal papers? What are the most cited AA' workshop papers? What are the most cited AA' shared task papers? What are the most cited AA' demo papers? What are the most cited tutorials?
A. The most cited AA’ journal papers, conference papers, workshop papers, system demo papers, shared task papers, and tutorials can be viewed online. The most cited papers from individual venues (ACL, CL journal, TACL, EMNLP, LREC, etc.) can also be viewed there.
Discussion: Machine translation papers are well-represented in many of these lists, but especially in the system demo papers list. Toolkits such as MT evaluation ones, NLTK, Stanford Core NLP, WordNet Similarity, and OpenNMT have highly cited demo or workshop papers.
The shared task papers list is dominated by task description papers (papers by task organizers describing the data and task), especially for sentiment analysis tasks. However, the list also includes papers by top-performing systems in these shared tasks, such as the NRC-Canada, HeidelTime, and UKP papers.
Q. What are the most cited AA' papers in the last decade?
A. Figure FIGREF39 shows the most cited AA' papers in the 2010s. The most cited AA' papers from the earlier periods are available online.
Discussion: The early period (1965–1989) list includes papers focused on grammar and linguistic structure. The 1990s list has papers addressing many different NLP problems with statistical approaches. Papers on MT and sentiment analysis are frequent in the 2000s list. The 2010s are dominated by papers on word embeddings and neural representations.
Impact ::: Average Citations by Time Span
Q. How many citations did the papers published between 1990 and 1994 receive? What is the average number of citations that a paper published between 1990 and 1994 has received? What are the numbers for other time spans?
A. Total citations for papers published between 1990 and 1994: $\sim $92k
Average citations for papers published between 1990 and 1994: 94.3
Figure FIGREF41 shows the numbers for various time spans.
Discussion: The early 1990s were an interesting period for NLP with the use of data from the World Wide Web and technologies from speech processing. This was the period with the highest average citations per paper, closely followed by the 1965–1969 and 1995–1999 periods. The 2000–2004 period is notable for: (1) a markedly larger number of citations than the previous decades; (2) third highest average number of citations. The drop off in the average citations for recent 5-year spans is largely because they have not had as much time to collect citations.
Impact ::: Aggregate Citation Statistics, by Paper Type and Venue
Q. What are the average number of citations received by different types of papers: main conference papers, workshop papers, student research papers, shared task papers, and system demonstration papers?
A. In this analysis, we include only those AA’ papers that were published in 2016 or earlier (to allow for at least 2.5 years to collect citations). There are 26,949 such papers. Figures FIGREF42 and FIGREF43 show the average citations by paper type when considering papers published 1965–2016 and 2010–2016, respectively. Figures FIGREF45 and FIGREF46 show the medians.
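A minimal sketch of these aggregate statistics is shown below, assuming each paper record carries `year`, `paper_type`, and `citations` fields (illustrative names); the 2016 cutoff mirrors the choice above of allowing at least 2.5 years for citations to accumulate.

```python
# A minimal sketch of average/median citations grouped by paper type,
# assuming `year`, `paper_type`, and `citations` fields (illustrative names).
from collections import defaultdict
from statistics import mean, median

def citation_stats_by_type(papers, first_year=1965, last_year=2016):
    groups = defaultdict(list)
    for paper in papers:
        if first_year <= paper["year"] <= last_year:
            groups[paper["paper_type"]].append(paper["citations"])
    return {ptype: {"papers": len(cites),
                    "average": mean(cites),
                    "median": median(cites)}
            for ptype, cites in groups.items()}
```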
Discussion: Journal papers have much higher average and median citations than other papers, but the gap between them and top-tier conferences is markedly reduced when considering papers published since 2010.
System demo papers have the third highest average citations; however, shared task papers have the third highest median citations. The popularity of shared tasks and the general importance given to beating the state of the art (SOTA) seem to have grown in recent years—something that has come under criticism.
It is interesting to note that in terms of citations, workshop papers are doing somewhat better than the conferences that are not top tier. Finally, the citation numbers for tutorials show that even though a small number of tutorials are well cited, a majority receive 1 or no citations. This is in contrast to system demo papers that have average and median citations that are higher or comparable to workshop papers.
Throughout the analyses in this article, we see that median citation numbers are markedly lower than average citation numbers. This is particularly telling. It shows that while there are some very highly cited papers, a majority of the papers obtain much lower number of citations—and when considering papers other than journals and top-tier conferences, the number of citations is frequently lower than ten.
Q. What are the average number of citations received by the long and short ACL main conference papers, respectively?
A. Short papers were introduced at ACL in 2003. Since then, ACL has been by far the venue with the largest number of short papers (compared to other venues). So we compare long and short papers published at ACL since 2003 to determine their average citations. Once again, we limit the papers to those published until 2016 to allow the papers time to collect citations. Figure FIGREF47 shows the average and median citations for long and short papers.
Discussion: On average, long papers get almost three times as many citations as short papers. However, the median for long papers is two-and-a-half times that of short papers. This difference might be because some very heavily cited long papers push the average up for long papers.
Q. Which venue has publications with the highest average number of citations? What is the average number of citations for ACL and EMNLP papers? What is this average for other venues? What are the average citations for workshop papers, system demonstration papers, and shared task papers?
A. CL journal has the highest average citations per paper. Figure FIGREF49 shows the average citations for AA’ papers published 1965–2016 and 2010–2016, respectively, grouped by venue and paper type. (Figure with median citations is available online.)
Discussion: In terms of citations, TACL papers have not been as successful as EMNLP and ACL; however, CL journal (the more traditional journal paper venue) has the highest average and median paper citations (by a large margin). This gap has reduced in papers published since 2010.
When considering papers published between 2010 and 2016, the system demonstration papers, the SemEval shared task papers, and non-SemEval shared task papers have notably high average citations (surpassing those of EACL and COLING); however, their median citations are lower. This is likely because some heavily cited papers have pushed the average up. Nonetheless, it is interesting to note how, in terms of citations, demo and shared task papers have surpassed many conferences and even become competitive with some top-tier conferences such as EACL and COLING.
Q. What percent of the AA’ papers that were published in 2016 or earlier are cited more than 1000 times? How many more than 10 times? How many papers are cited 0 times?
A. Google Scholar invented the i-10 index as another measure of author research impact. It stands for the number of papers by an author that received ten or more citations. (Ten here is somewhat arbitrary, but reasonable.) Similar to that, one can look at the impact of AA’ as a whole and the impact of various subsets of AA’ through the number of papers in various citation bins. Figure FIGREF50 shows the percentage of AA’ papers in various citation bins. (The percentages of papers when considering papers from specific time spans are available online.)
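A minimal sketch of the citation-bin percentages follows; the exact bin edges (0, 1–9, 10–99, 100–999, 1000 or more) are inferred from the discussion here and may differ slightly from the published figure.

```python
# A minimal sketch of citation-bin percentages; bin edges are inferred from
# the discussion (0, 1-9, 10-99, 100-999, 1000+) and are an assumption.
from collections import Counter

def citation_bin(citations):
    if citations == 0:
        return "0"
    if citations < 10:
        return "1-9"
    if citations < 100:
        return "10-99"
    if citations < 1000:
        return "100-999"
    return "1000+"

def citation_bin_percentages(papers):
    bins = Counter(citation_bin(p["citations"]) for p in papers)
    total = sum(bins.values())
    return {b: 100.0 * count / total for b, count in bins.items()}
```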
Discussion: About 56% of the papers are cited ten or more times. 6.4% of the papers are never cited. Note also that some portion of the 1–9 bin likely includes papers that only received self-citations. It is interesting that the percentage of papers with 0 citations is rather steady (between 7.4% and 8.7%) for the 1965–1989, 1990–1999, and 2010–2016 periods. The majority of the papers lie in the 10 to 99 citations bin, for all except the recent periods (2010–2016 and 2016Jan–2016Dec). With time, the recent period should also have the majority of the papers in the 10 to 99 citations bin.
The numbers for the 2016Jan–2016Dec papers show that after 2.5 years, about 89% of the papers have at least one citation and about 33% of the papers have ten or more citations.
Q. What are the citation bin percentages for individual venues and paper types?
A. See Figure FIGREF51.
Discussion: Observe that 70 to 80% of the papers in journals and top-tier conferences have ten or more citations. The percentages are markedly lower (between 30 and 70%) for the other conferences shown above, and even lower for some other conferences (not shown above).
CL Journal is particularly notable for having the largest percentage of papers with 100 or more citations. The somewhat high percentage of papers that are never cited (4.3%) is likely because some of the book reviews from earlier years are not explicitly marked in the CL journal, and thus they were not removed from the analysis. Also, letters to editors, which are more common in the CL journal, often obtain 0 citations.
CL, EMNLP, and ACL have the best track record for accepting papers that have gone on to receive 1000 or more citations. *Sem, the semantics conference, seems to have a notably lower percentage of high-citation papers, even though it has fairly competitive acceptance rates.
Instead of percentage, if one considers raw numbers of papers that have at least ten citations (i-10 index), then LREC is particularly notable in terms of the large number of papers it accepts that have gone on to obtain ten or more citations ($\sim $1600). Thus, by producing a large number of moderate-to-high citation papers, and introducing many first-time authors, LREC is one of the notable (yet perhaps undervalued) engines of impact on NLP.
About 50% of the SemEval shared task papers received 10 or more citations, and about 46% of the non-SemEval Shared Task Papers received 10 or more citations. About 47% of the workshop papers received ten or more citations. About 43% of the demo papers received 10 or more citations.
Impact ::: Citations to Papers by Areas of Research
Q. What is the average number of citations of AA' papers that have machine translation in the title? What about papers that have the term sentiment analysis or word representations?
A. Different areas of research within NLP enjoy varying amounts of attention. In Part II, we looked at the relative popularity of various areas over time—estimated through the number of paper titles that had corresponding terms. (You may also want to see the discussion on the use of paper title terms to sample papers from various, possibly overlapping, areas.) Figure FIGREF53 shows the top 50 title bigrams ordered by decreasing number of total citations. Only those bigrams that occur in at least 30 AA' papers (published between 1965 and 2016) are considered. (The papers from 2017 and later are not included, to allow for at least 2.5 years for the papers to accumulate citations.)
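A minimal sketch of the per-bigram citation aggregates behind the figure is shown below, assuming paper records with `title`, `year`, and `citations` fields (illustrative names); only bigrams occurring in at least 30 papers published up to 2016 are kept, as described above, and tokenization details are assumptions.

```python
# A minimal sketch of per-bigram citation aggregates; field names and
# tokenization details are assumptions.
from collections import defaultdict
from statistics import mean, median

def bigram_citation_stats(papers, min_papers=30, last_year=2016):
    cites_by_bigram = defaultdict(list)
    for paper in papers:
        if paper["year"] > last_year:
            continue
        tokens = paper["title"].lower().split()
        bigrams = {" ".join(pair) for pair in zip(tokens, tokens[1:])}
        for bigram in bigrams:                  # count each paper once per bigram
            cites_by_bigram[bigram].append(paper["citations"])
    return {b: {"papers": len(c), "total": sum(c),
                "average": mean(c), "median": median(c)}
            for b, c in cites_by_bigram.items() if len(c) >= min_papers}
```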
Discussion: The graph shows that the bigram machine translation occurred in 1,659 papers that together accrued more than 93k citations. These papers have on average 68.8 citations and the median citations is 14. Not all machine translation (MT) papers have machine translation in the title. However, arguably, this set of 1,659 papers is a representative enough sample of machine translation papers; and thus, the average and median are estimates of MT in general. Second in the list are papers with statistical machine in the title—most commonly from the phrase statistical machine translation. One expects considerable overlap in the papers across the sets of papers with machine translation and statistical machine, but machine translation likely covers a broader range of research including work before statistical MT was introduced, neural MT, and MT evaluation.
There are fewer papers with sentiment analysis in the title (356), but these have acquired citations at a higher average (104) than both machine translation and statistical machine. The bigram automatic evaluation jumps out because of its high average citations (337). Some of the neural-related bigrams have high median citations, for example, neural machine (49) and convolutional neural (40.5).
Figure FIGREF54 shows the lists of top 25 bigrams ordered by average citations.
Discussion: Observe the wide variety of topics covered by this list. In some ways that is reassuring for the health of the field as a whole; however, this list does not show which areas are not receiving sufficient attention. It is less clear to me how to highlight those, as simply showing the bottom 50 bigrams by average citations is not meaningful. Also note that this is not in any way an endorsement to write papers with these high-citation bigrams in the title. Doing so is of course no guarantee of receiving a large number of citations.
Correlation of Age and Gender with Citations
In this section, we examine citations across two demographic dimensions: Academic age (number of years one has been publishing) and Gender. There are good reasons to study citations across each of these dimensions including, but not limited to, the following:
Areas of research: To better understand research contributions in the context of the area where the contribution is made.
Academic age: To better understand how the challenges faced by researchers at various stages of their career may impact the citations of their papers. For example, how well-cited are first-time NLP authors? On average, at what academic age do citations peak? etc.
Gender: To better understand the extent to which systematic biases (explicit and implicit) pervasive in society and scientific publishing impact author citations.
Some of these aspects of study may seem controversial. So it is worth addressing that first. The goal here is not to perpetuate stereotypes about age, gender, or even areas of research. The history of scientific discovery is awash with plenty of examples of bad science that has tried to erroneously show that one group of people is “better” than another, with devastating consequences.
People are far more alike than different. However, different demographic groups have faced (and continue to face) various socio-cultural inequities and biases. Gender and race studies look at how demographic differences shape our experiences. They examine the roles of social institutions in maintaining the inequities and biases.
This work is in support of those studies. Unless we measure differences in outcomes such as scientific productivity and impact across demographic groups, we will not fully know the extent to which these inequities and biases impact our scientific community; and we cannot track the effectiveness of measures to make our universities, research labs, and conferences more inclusive, equitable, and fair.
Correlation of Age and Gender with Citations ::: Correlation of Academic Age with Citations
We introduced NLP academic age earlier in the paper, where we defined NLP academic age as the number of years one has been publishing in AA. Here we examine whether NLP academic age impacts citations. The analyses are done in terms of the academic age of the first author; however, similar analyses can be done for the last author and all authors. (There are limitations to each of these analyses though as discussed further below.)
First author is a privileged position in the author list as it is usually reserved for the researcher that has done the most work and writing. The first author is also usually the main driver of the project; although, their mentor or advisor may also be a significant driver of the project. Sometimes multiple authors may be marked as first authors in the paper, but the current analysis simply takes the first author from the author list. In many academic communities, the last author position is reserved for the most senior or mentoring researcher. However, in non-university research labs and in large collaboration projects, the meaning of the last author position is less clear. (Personally, I prefer author names ordered by the amount of work done.)
Examining all authors is slightly more tricky as one has to decide how to credit the citations to the possibly multiple authors. It might also not be a clear indicator of differences across gender as a large number of the papers in AA have both male and female authors.
Q. How does the NLP academic age of the first author correlate with the amount of citations? Are first-year authors less cited than those with more experience?
A. Figure FIGREF59 shows various aggregate citation statistics corresponding to academic age. To produce the graph, we put each paper in a bin corresponding to the academic age of the first author when the paper was published. For example, if the first author of a paper had an academic age of 3 when that paper was published, then the paper goes in bin 3. We then calculate #papers, #citations, median citations, and average citations for each bin. For the figure below, we further group the bins 10 to 14, 15 to 19, 20 to 34, and 35 to 50. These groupings are done to avoid clutter, and also because many of the higher age bins have a low number of papers.
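A minimal sketch of this binning is shown below; the grouping of the higher ages into 10–14, 15–19, 20–34, and 35–50 follows the description above, while the `first_author_age` and `citations` field names are illustrative assumptions.

```python
# A minimal sketch of binning papers by first-author academic age; field
# names are assumptions.
from collections import defaultdict
from statistics import mean, median

def age_bin(age):
    if age <= 9:
        return str(age)
    for low, high in ((10, 14), (15, 19), (20, 34), (35, 50)):
        if low <= age <= high:
            return f"{low}-{high}"
    return "50+"

def citation_stats_by_age_bin(papers):
    groups = defaultdict(list)
    for paper in papers:
        groups[age_bin(paper["first_author_age"])].append(paper["citations"])
    return {b: {"papers": len(c), "citations": sum(c),
                "average": mean(c), "median": median(c)}
            for b, c in groups.items()}
```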
Discussion: Observe that the number of papers where the first author has academic age 1 is much larger than the number of papers in any other bin. This is largely because a large number of authors in AA have written exactly one paper as first author. Also, about 60% of the authors in AA (17,874 out of the 29,941 authors) have written exactly one paper (regardless of author position).
The curves for the average and median citations have a slight upside-down U shape. The relatively lower average and median citations in year 1 (37.26 and 10, respectively) indicate that being new to the field has some negative impact on citations. The average increases steadily from year 1 to year 4, but the median is already at its highest point by year 2. One might say that year 2 to year 14 is the period of steady and high citations. From year 15 onwards, there is a steady decline in citations. It is probably wise not to draw too many conclusions from the averages of the 35 to 50 bin, because of the small number of papers. There seems to be a peak in average citations at age 7. However, there is not a corresponding peak in the median. Thus the peak in average might be due to an increase in the number of very highly cited papers.
Correlation of Age and Gender with Citations ::: Citations to Papers by First Author Gender
As noted in Part I, neither ACL nor the ACL Anthology have recorded demographic information for the vast majority of the authors. Thus we use the same setup discussed earlier in the section on demographics, to determine gender using the United States Social Security Administration database of names and genders of newborns to identify 55,133 first names that are strongly associated with females (probability $\ge $99%) and 29,873 first names that are strongly associated with males (probability $\ge $99%).
Q. On average, are women cited less than men?
A. Yes, on average, female first author papers have received markedly fewer citations than male first author papers (36.4 compared to 52.4). The difference in median is smaller (11 compared to 13). See Figure FIGREF60.
Discussion: The large difference in averages and smaller difference in medians suggests that there are markedly more very heavily cited male first-author papers than female first-author papers. The gender-unknown category, which here largely consists of authors with Chinese-origin names and names that are less strongly associated with one gender, has a slightly higher average, but the same median citations, as authors with female-associated first names.
The differences in citations, or citation gap, across genders may: (1) vary by period of time; (2) vary due to confounding factors such as academic age and areas of research. We explore these next.
Q. How has the citation gap across genders changed over the years?
A. Figure FIGREF61 (left side) shows the citation statistics across four time periods.
Discussion: Observe that female first authors have always been a minority in the history of ACL; however, on average, their papers from the early years (1965 to 1989) received a markedly higher number of citations than those of male first authors from the same period. We can see from the graph that this changed in the 1990s where male first-author papers obtained markedly more citations on average. The citation gap reduced considerably in the 2000s, and the 2010–2016 period saw a further slight reduction in the citation gap.
It is also interesting to note that the gender-unknown category has almost bridged the gap with the males in this most recent time period. Further, the proportion of the gender-unknown authors has increased over the years—arguably, an indication of better representations of authors from around the world in recent years. (Nonetheless, as indicated in Part I, there is still plenty to be done to promote greater inclusion of authors from Africa and South America.)
Q. How have citations varied by gender and academic age? Are women less cited because of a greater proportion of new-to-NLP female first authors than new-to-NLP male first authors?
A. Figure FIGREF61 (right side) shows citation statistics broken down by gender and academic age. (This figure is similar to the academic age graph seen earlier, except that it shows separate average and median lines for female, male, and unknown gender first authors.)
Discussion: The graphs show that female first authors consistently receive fewer citations than male authors for the first fifteen years. The trend is inverted, with a small citation gap, in the 15th to 34th year period.
Q. Is the citation gap common across the vast majority of areas of research within NLP? Is the gap simply because more women work in areas that receive low numbers of citations (regardless of gender)?
A. Figure FIGREF64 shows the most cited areas of research along with citation statistics split by gender of the first authors of corresponding papers. (This figure is similar to the areas of research graph seen earlier, except that it shows separate citation statistics for the genders.) Note that the figure includes rows for only those bigram and gender pairs with at least 30 AA’ papers (published between 1965 and 2016). Thus for some of the bigrams certain gender entries are not shown.
Discussion: Numbers for an additional 32 areas are available online. Observe that in only about 12% (7 of the top 59) of the most cited areas of research, women received higher average citations than men. These include: sentiment analysis, information extraction, document summarization, spoken dialogue, cross lingual (research), dialogue systems, and language generation. (Of course, note that some of the 59 areas, as estimated using title term bigrams, are overlapping. Also, we did not include large scale in the list above because the difference in averages is very small and it is not really an area of research.) Thus, the citation gap is common across a majority of the high-citation areas within NLP.
Conclusions
This work examined the ACL Anthology to identify broad trends in productivity, focus, and impact. We examined several questions such as: who and how many of us are publishing? what are we publishing on? where and in what form are we publishing? and what is the impact of our publications? Particular attention was paid to the demographics and inclusiveness of the NLP community. Notably, we showed that only about 30% of first authors are female, and that this percentage has not improved since the year 2000. We also showed that, on average, female first authors are cited less than male first authors, even when controlling for academic age. We hope that recording citation and participation gaps across demographic groups will encourage our university, industry, and government research labs to be more inclusive and fair. Several additional aspects of the AA will be explored in future work (see the bottom of the blog posts).
Acknowledgments
This work was possible due to the helpful discussion and encouragement from a number of awesome people, including: Dan Jurafsky, Tara Small, Michael Strube, Cyril Goutte, Eric Joanis, Matt Post, Patrick Littell, Torsten Zesch, Ellen Riloff, Norm Vinson, Iryna Gurevych, Rebecca Knowles, Isar Nejadgholi, and Peter Turney. Also, a big thanks to the ACL Anthology team for creating and maintaining a wonderful resource. | 44,896 articles |
8d989490c5392492ad66e6a5047b7d74cc719f30 | 8d989490c5392492ad66e6a5047b7d74cc719f30_0 | Q: What ensemble methods are used for best model?
Text: Introduction
Through this CS224N Pre-trained Contextual Embeddings (PCE) project, we tackle the question answering problem, which is one of the most popular in NLP and has been brought to the forefront by datasets such as SQuAD 2.0. The popularity of this problem stems from both the challenge it presents and the recent successes in approaching human-level performance. As most, if not all, of the problems humans solve every day can be posed as a question, creating a deep-learning-based solution that has access to the entire internet is a critical milestone for NLP. Through our project, our group tested the limits of applying attention in BERT BIBREF0 to improve the network's performance on the SQuAD 2.0 dataset BIBREF1. BERT applies attention to the concatenation of the query and context vectors and thus attends to these vectors in a global fashion. We propose BERTQA BIBREF2, which adds Context-to-Query (C2Q) and Query-to-Context (Q2C) attention in addition to localized feature extraction via 1D convolutions. We implemented the additions ourselves, while the Pytorch baseline BERT code was obtained from BIBREF3. SQuAD 2.0 answer spans range in length from zero to multiple words, and this additional attention provides hierarchical information that allows the network to better learn to detect answer spans of varying sizes. We applied the empirical findings from this part of our project to the large BERT model, which has twice as many layers as the base BERT model. We also augmented the SQuAD 2.0 dataset with additional backtranslated examples. This augmented dataset will be publicly available on our github BIBREF4 on the completion of this course. After performing hyperparameter tuning, we ensembled our two best networks to get F1 and EM scores of 82.317 and 79.442 respectively. The experiments took around 300 GPU hours to train.
Related Work
The SQuAD 2.0 creators proposed this dataset as a means for networks to actually understand the text they were being interrogated about rather than simply being extractive parsers. Many networks stepped up to the challenge, including BERT, BIDAF, and QANET. BERT is a fully feed-forward network that is based on the transformer architecture BIBREF5. The base BERT model has 12 transformer encoder layers that terminate in an interchangeable final layer which can be finetuned to the specific task. We chose this network as our baseline because of its use of contextual embeddings and global attention and because of the speed advantage derived from an RNN-free architecture. We derived inspiration for our modifications from the BIDAF and QANET models. BIDAF is an LSTM-based network that uses character, word, and contextual embeddings which are fed through Context-to-Query (C2Q) and Query-to-Context (Q2C) layers. The final logits are derived from separate Start and End output layers, as opposed to BERT, which produces these logits together. Our C2Q/Q2C addition to BERT and the Dense Layer/LSTM based separate final Start and End logit prediction layer were inspired by this paper. We also referred to the QANET model, which is likewise a fully feed-forward network and emphasizes the use of convolutions to capture the local structure of text. Based on this paper, we created a convolutional layer within the C2Q/Q2C architecture to add localized information to BERT's global attention and the C2Q/Q2C coattention.
In addition to referencing these papers that helped us build a successful model, we also explored many other papers whose ideas either didn't work with our transformer-based model or simply didn't work in combination with our additions to BERT. The three main papers from which we tried to gain ideas are U-Net: Machine Reading Comprehension with Unanswerable Questions BIBREF6, Attention-over-Attention Neural Networks for Reading Comprehension BIBREF7, and FlowQA: Grasping Flow in History for Conversational Machine Comprehension BIBREF8. We tried implementing the multitask learning methodology presented in U-Net by passing the [CLS] token through a series of convolutional layers to create a probability of whether the question has an answer. We combined this prediction with the prediction of Start and End logits by combining the logits' cross-entropy loss and the [CLS] binary cross-entropy loss. Unfortunately, this additional loss seemed to hinder the network's learning ability. We conjecture that this type of multitask learning would benefit from full training instead of the finetuning we were restricted to because of resource and time constraints. We looked to Attention-over-Attention as a source of additional ways of injecting attention into our network. Attention-over-Attention has a dot-product based attention mechanism that attends to attention vectors instead of embedding vectors. We believe this method did not help in our case because BERT works with the Context and Query as part of the same vector, while the Attention-over-Attention model requires completely uncoupled Context and Query vectors. As a side note, we do separate the Context and Query vectors derived from BERT before the coattention layers of our model, but these layers are not negatively affected by the fact that the separated vectors contain 'mixed' information between the Context and Query. Finally, we explored the FlowQA paper, which proposed combining embeddings from multiple layers as an input to the final prediction layer. We implemented this idea by combining embeddings from multiple BERT layers as an input to our final prediction layer. This final layer was simply an additional transformer encoder, and we think that the encoder lacks the LSTM's ability to aggregate information from multiple sources.
Methods
We first focused on directed coattention via context-to-query and query-to-context attention, as discussed in BIDAF BIBREF9. We then implemented localized feature extraction with 1D convolutions to add local information to the coattention, based on the QANET architecture BIBREF10. Subsequently, we experimented with different types of skip connections to inject BERT embedding information back into our modified network. We then applied what we learned using the base BERT model to the large BERT model. Finally, we performed hyperparameter tuning by adjusting the number of coattention blocks, the batch size, and the number of epochs trained, and we ensembled our three best networks. Each part of the project is discussed further in the subsections below.
Methods ::: BERTQA - Directed Coattention
The base BERT network, the baseline for this project, is built with 12 Transformer encoder blocks. These encoder blocks contain multi-head attention and a feed-forward network. Each head of the multi-head attention attends to the concatenation of the context and query input and thus forms a global attention output. The output of each Transformer encoder is fed into the next layer, creating an attention hierarchy. The benefit of this construction is that the model has access to the entire query and context at each level, allowing both embeddings to learn from each other and removing the long-term memory bottleneck faced by RNN-based models. BERTQA uses directed coattention between the query and context, as opposed to attending to their concatenation (Figure FIGREF2). Our architecture consists of a set of 7 directed coattention blocks that are inserted between the BERT embeddings and the final linear layer before loss calculation.
The BERT embeddings are masked to produce separate query and context embedding vectors (Equations DISPLAY_FORM3, DISPLAY_FORM4).
Where E is the contextualized embeddings derived from BERT, m is the mask, and c and q are the context and query respectively.
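A plausible explicit form of this masking, written out for clarity (the elementwise product with binary masks is our reading of the symbol definitions above, since the referenced equations are not reproduced here):

$$E_q = E \odot m_q, \qquad E_c = E \odot m_c$$

where $m_q$ keeps only the query positions and $m_c$ keeps only the context positions of the shared input sequence.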
$E_q$ and $E_c$ are then projected through linear layers to obtain key, value, and query vectors (Equation DISPLAY_FORM5).
Where Q, K, and V are the query, key and value vectors.
The Q, K, and V vectors are used in scaled dot-product attention (Equation DISPLAY_FORM6) to create the separate Context-to-Query (C2Q) and Query-to-Context (Q2C) attention vectors.
Where y is q and z is c for Q2C and y is c and z is q for C2Q.
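For reference, the scaled dot-product attention referred to above is the standard formulation from the transformer architecture BIBREF5:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V$$

where $d_k$ is the key dimension; the roles of y and z determine which of the two sequences supplies the attention queries and which supplies the keys and values.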
The C2Q attention vector is summed with the query input and the Q2C attention vector is summed with the context input via skip connections. Each sum vector is then pushed through a fully connected block and added back to the output of that block via another skip connection. Each sum is followed by a layer-wise normalization. The two resulting 3D C2Q and Q2C vectors are concatenated along the third (embedding) dimension and combined by two 1D convolutions to create the final 3D vector representing the combination of the C2Q and Q2C attention. We use two convolution layers here so that the concatenated dimension is reduced gradually and too much information is not lost. This vector then goes into a final attention head to perform separate self-attention pre-processing for the Start logit and End logit prediction layers. The Start logit is generated by a linear layer, and the End logit is generated by the output of an LSTM which takes the concatenation of the start span and end span embeddings as an input. We used the BERT architecture code written in Pytorch from the HuggingFace github BIBREF3. We wrote our own code for all of the subsequent architecture.
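As a rough sketch of one such block (a simplified re-implementation for illustration, not our released code; the hidden size, head count, intermediate channel width, and the exact assignment of attention queries versus keys/values are assumptions):

```python
import torch
import torch.nn as nn

class DirectedCoattentionBlock(nn.Module):
    """Simplified sketch of one C2Q/Q2C directed coattention block."""

    def __init__(self, hidden_size=768, num_heads=8):
        super().__init__()
        self.c2q_attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.q2c_attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.c2q_norm, self.q2c_norm = nn.LayerNorm(hidden_size), nn.LayerNorm(hidden_size)
        self.c2q_ffn = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.ReLU(),
                                     nn.Linear(hidden_size, hidden_size))
        self.q2c_ffn = nn.Sequential(nn.Linear(hidden_size, hidden_size), nn.ReLU(),
                                     nn.Linear(hidden_size, hidden_size))
        self.c2q_ffn_norm, self.q2c_ffn_norm = nn.LayerNorm(hidden_size), nn.LayerNorm(hidden_size)
        # Two 1D convolutions that gradually reduce the concatenated C2Q/Q2C features.
        mid = hidden_size + hidden_size // 2
        self.merge = nn.Sequential(
            nn.Conv1d(2 * hidden_size, mid, kernel_size=1), nn.ReLU(),
            nn.Conv1d(mid, hidden_size, kernel_size=1))

    def forward(self, E_c, E_q):
        # E_c and E_q are masked views of the same BERT output sequence,
        # so both are (batch, seq_len, hidden).
        c2q, _ = self.c2q_attn(query=E_c, key=E_q, value=E_q)  # context attends to the query
        q2c, _ = self.q2c_attn(query=E_q, key=E_c, value=E_c)  # query attends to the context
        c2q = self.c2q_norm(c2q + E_q)   # summed with the query input, then layer norm
        q2c = self.q2c_norm(q2c + E_c)   # summed with the context input, then layer norm
        c2q = self.c2q_ffn_norm(c2q + self.c2q_ffn(c2q))       # fully connected block + skip
        q2c = self.q2c_ffn_norm(q2c + self.q2c_ffn(q2c))
        merged = torch.cat([c2q, q2c], dim=-1)                 # concat along embedding dim
        merged = self.merge(merged.transpose(1, 2)).transpose(1, 2)
        return merged                                          # (batch, seq_len, hidden)
```

In the full model, seven such blocks are stacked between the BERT embeddings and the Start/End prediction layers.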
Methods ::: Localized Feature Extraction
To refine the focus of the attention further, we experimented with convolutional feature extraction to add localized information to the coattention output. We added four convolutional layers within the coattention architecture (Figure FIGREF8). The input to these layers was the BERT embeddings, and the outputs were added to the outputs of the multi-head attention layers in the coattention architecture and then layer-wise normalized. This combination of coattention and local information provides a hierarchical understanding of the question and context. By itself, BERT provides information about the question and context as a unit, while the coattention extracts information from both the question and context relative to each other. The convolutions extract local features within the question and context to add localized information to the attention and embedding meanings. After adding the separate start and end logic, an ablation study in which we ran the network without the convolutional layers showed that the localized feature extraction no longer improved the network's learning. We speculate that the convolutions prevented improvement beyond a certain F1 score because they are lossy compressors, and the information lost by the convolutions might be essential to downstream learning.
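A minimal sketch of this convolutional branch is shown below; the kernel size and the use of same-padding are assumptions, since only the number of layers and the residual-plus-normalization pattern are specified above.

```python
import torch.nn as nn

class LocalFeatureExtractor(nn.Module):
    """Sketch of the four-layer 1D convolutional branch over BERT embeddings."""

    def __init__(self, hidden_size=768, kernel_size=3, num_layers=4):
        super().__init__()
        layers = []
        for _ in range(num_layers):
            layers += [nn.Conv1d(hidden_size, hidden_size, kernel_size,
                                 padding=kernel_size // 2), nn.ReLU()]
        self.convs = nn.Sequential(*layers)
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, bert_embeddings, attention_output):
        # Both inputs: (batch, seq_len, hidden). Conv1d expects channels first.
        local = self.convs(bert_embeddings.transpose(1, 2)).transpose(1, 2)
        # Add the local features to the multi-head attention output, then normalize.
        return self.norm(attention_output + local)
```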
Methods ::: Skip Connections
As shown in Figure FIGREF2, we have a skip connection from the BERT embedding layer combined with the convolved directed co-attention output (C2Q and Q2C). We experimented with 3 skip connection configurations: Simple ResNet inspired Skip, Self-Attention Transformer Skip, and a Highway Network. Of these, the Self-Attention Transformer based skip worked best initially. However, when we combined this skip connection with our logit prediction logic, the network was no longer able to learn as well. The Simple ResNet inspired skip BIBREF11 connection solved this issue. It seems that the transformer skip connection, followed by the additional transformer encoder blocks that form the beginning of the logit prediction logic, processed the BERT embeddings too much and thus lost the benefit of the skip connection. Therefore, we decided to use a Simple ResNet inspired skip alongside the self-attention heads for logit prediction. This allows the directed co-attention layers to learn distinct information coming from the BERT embeddings via the skip and allows for efficient backpropagation to the BERT layers.
Methods ::: Data Augmentation - SQuAD 2.Q
Inspired by the work presented in BIBREF12, where the authors present a way of generating new questions out of context, and after observing the patterns in SQuAD 2.0, we realized there is a lot of syntactic and grammatical variance in the questions written by crowd workers. To help our network generalize better to these variations, we decided to augment the dataset by paraphrasing the questions in the SQuAD training set. We applied backtranslation using the Google Cloud Translation (NMT) API BIBREF13 to translate each question from English to French and then back-translate it to English, essentially 2 translations per question (Figure FIGREF11).
We call our augmented dataset SQuAD 2.Q and make 3 different versions (35%, 50%, and 100% augmentation), alongside the code to generate them, publicly available on our github BIBREF4.
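A minimal sketch of the backtranslation step is shown below, assuming the google-cloud-translate v2 client and configured Google Cloud credentials; it illustrates the procedure rather than the exact script used to build SQuAD 2.Q.

```python
from google.cloud import translate_v2 as translate

client = translate.Client()  # assumes GOOGLE_APPLICATION_CREDENTIALS is configured

def backtranslate(question, pivot="fr"):
    """Paraphrase a question via English -> pivot language -> English."""
    pivoted = client.translate(question, source_language="en",
                               target_language=pivot)["translatedText"]
    restored = client.translate(pivoted, source_language=pivot,
                                target_language="en")["translatedText"]
    return restored

print(backtranslate("In what country is Normandy located?"))
```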
Methods ::: Hyperparameter Tuning
Hyperparameter tuning has been an ongoing process for our experiments. The following are the hyperparameters we tweaked and tuned on BERT Base:
Number of Directed co-Attention layers - We tried various numbers of layers and found that N=7 co-attention layers gave us optimal performance while still allowing the model to fit on 2 GPUs (a 3 F1-score improvement by itself).
Max Sequence length - After initial experiments with the default sequence length (context + query tokens) of 384, we switched to a sequence length of 512. This gave us a 0.6 F1 improvement on our model.
Batch Size - Default: 12. We had to use a batch size of 6 for all our experiments due to resource constraints and out-of-memory issues on the GPU with any larger batch size.
Number of epochs - Default: 2. On increasing the number of epochs, we saw a significant degradation in performance (-3 F1 score). We attribute this to the model starting to overfit to the training data with high variance; since the batch size is smaller, the gradient updates could be noisy, not allowing the model to converge optimally.
Learning Rate - Default: 3e-5. We wrote a script to find the optimal learning rate using grid search (a minimal sketch is shown below) and found the optimal learning rates for SQuAD 2.0 and SQuAD 2.Q, respectively, for a batch size of 6.
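The sketch below shows the shape of such a grid search; train_and_eval is a hypothetical placeholder for fine-tuning on a subset of the data and returning the dev F1, and the candidate learning rates are illustrative.

```python
def train_and_eval(learning_rate, batch_size):
    # Placeholder: fine-tune the model on a data subset with these settings
    # and return the dev-set F1. Replace with the actual training routine.
    raise NotImplementedError

candidate_lrs = [1e-5, 2e-5, 3e-5, 4e-5, 5e-5]
best_lr, best_f1 = None, float("-inf")
for lr in candidate_lrs:
    f1 = train_and_eval(learning_rate=lr, batch_size=6)
    print(f"lr={lr:.0e}  dev F1={f1:.2f}")
    if f1 > best_f1:
        best_lr, best_f1 = lr, f1
print(f"best learning rate: {best_lr:.0e} (dev F1={best_f1:.2f})")
```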
Methods ::: BERT Large and Ensembling
We applied what we learned from the previous five subsections to the large BERT model, which has twice as many layers as the base model. In order to fit this model on our GPU and still use 7 of our coattention layers, we were limited to two examples on the GPU at a time. However, we also found that BERT Large requires a larger batch size to get good performance. As such, we kept the batch size at 6, as with the base model, and used a gradient accumulation of 3 so that only two examples were on the GPU at a time. Additionally, the large model is very sensitive to the learning rate, and the rate of 3e-5 which we used with the smaller model no longer worked. We ran the model on a subset of the data with various learning rates and found that 1.1e-5 to 1.5e-5 works best for the large model, depending on the dataset used (SQuAD 2.0 or SQuAD 2.Q).
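Gradient accumulation as described above can be sketched as follows; this is a generic illustration (the model is assumed to return a HuggingFace-style output with a .loss attribute), not the project's actual training loop.

```python
def train_with_accumulation(model, optimizer, train_loader, accumulation_steps=3):
    """Accumulate gradients over several micro-batches before each optimizer step,
    giving an effective batch size of accumulation_steps * micro_batch_size."""
    model.train()
    optimizer.zero_grad()
    for step, batch in enumerate(train_loader):
        loss = model(**batch).loss / accumulation_steps  # scale so gradients average
        loss.backward()                                  # gradients accumulate in .grad
        if (step + 1) % accumulation_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```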
After experimenting with multiple combinations of the ideas we described above, we ensembled our three best networks to create our final predictions. The configurations of our three best networks are described in Table TABREF19.
We constructed the ensembled predictions by choosing the answer from the network that had the highest probability and choosing no answer if any of the networks predicted no answer.
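The ensembling rule just described amounts to the following small function; the (answer, probability) input format is an assumption for illustration.

```python
def ensemble_answer(predictions):
    """predictions: list of (answer_text, probability) pairs, one per model,
    where an empty string denotes a 'no answer' prediction."""
    if any(answer == "" for answer, _ in predictions):
        return ""                                   # any abstention -> no answer
    return max(predictions, key=lambda p: p[1])[0]  # otherwise highest-probability answer

# Example: one of the three models abstains, so the ensemble predicts no answer.
print(repr(ensemble_answer([("in 1990", 0.81), ("", 0.64), ("1990", 0.72)])))
```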
Results and Analysis
Table TABREF20 reports the F1 and EM scores obtained for the experiments on the base model. The first column reports the base BERT baseline scores, while the second reports the results for the C2Q/Q2C attention addition. The two skip columns report scores for the skip connection connecting the BERT embedding layer to the coattention output (Simple Skip) and the scores for the same skip connection containing a Transformer block (Transformer Skip). The final column presents the result of the localized feature extraction added inside the C2Q/Q2C architecture (Inside Conv - Figure FIGREF8).
The results presented above verify our hypothesis that adding layers of directed attention to BERT improves its performance. The C2Q/Q2C network produced a significant improvement in the No Answer F1 score while causing a symmetric drop in the Has Answer F1 score. The C2Q/Q2C network attends to the context relative to the query and vice versa, instead of to their concatenation as a whole. This method of attention provides more information regarding whether there is an answer to the question in the context than the original BERT attention. The skip connections improved the scores further by adding the BERT embeddings back into the coattention vectors, providing information that may have been lost by the C2Q/Q2C network in addition to a convenient path for backpropagation to the BERT embedding layers. The skip connection containing the transformer provides minimal gains while adding significant runtime overhead. Therefore, we built the final convolutional experiments on the Simple Skip architecture. The localized feature extraction within the coattention network produced the best results in the base model, but prevented an improvement in our modified BERT large model.
Table TABREF21 shows the F1 and EM scores obtained for the experiments on the large model. The models labeled 1, 2, and 3 are described in detail in Section 3.6.
Each of the models built on BERT Large used our augmented dataset in addition to the coattention architecture, simple skip connection, and separate start and end logit logic. The Model 1 results show that a moderately augmented (35%) dataset helps the training, since both the unaugmented and highly augmented (50%) models did not perform as well. It seems that adding too much augmented data reduces the F1 because the augmented data is noisy relative to the original data. The performance difference between Models 1 and 2 supports the use of the LSTM in creating the End logit predictions. The LSTM successfully combines the information from the Start logit and the End embeddings to provide a good input to the End logit linear layer. The ensemble model performed the best by far due to a significant increase in the No Answer F1, which can be attributed to the ensembling method being biased towards models that predict no answer.
We investigated the attention distributions produced by our proposed model by modifying the open source code from BertViz BIBREF14. For the case where the question has an answer in the context (Figure FIGREF22), the attention heads produce activation around the answer phrase "in the 10th and 11th centuries". In the case where there is no answer in the context, the attention heads produce considerable activation on the [SEP] word-piece, which is outside the context span.
As seen in Figure FIGREF25, we conducted an error analysis over different question types. Note that questions that did not fit into the 7 bins were classified as "Other". An example of a question in the "Other" category would be an "Is it?" question, which is a minority set in SQuAD 2.0. Over the baseline, our model presents an overall improvement across nearly all question types in the SQuAD 2.0 dev set. In the case of "Which" questions, our model goes wrong 69 times whereas the baseline model goes wrong 64 times, a very small numeric difference. However, for the "What" questions the baseline model produces incorrect outputs for 776 examples while our model produces 30 fewer incorrect outputs. The reason for this lapse appears to be related to data augmentation, where we observed that "Which" was often backtranslated as "What" and vice versa. Thus, the questions in these two classes are mixed and a completely accurate analysis of improvements in these classes is not possible.
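The binning itself can be done with a simple first-word heuristic; the sketch below is illustrative, and the exact bin definitions used for the figure are an assumption.

```python
from collections import Counter

QUESTION_TYPES = ("what", "which", "who", "when", "where", "why", "how")

def question_type(question):
    words = question.strip().lower().split()
    return words[0] if words and words[0] in QUESTION_TYPES else "other"

# incorrect_questions: assumed list of dev-set questions the model answered wrongly.
incorrect_questions = ["What is the capital of France?", "Is it raining?",
                       "Which river is the longest?"]
print(Counter(question_type(q) for q in incorrect_questions))
```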
Figure FIGREF26 shows an example cropped context and question that our ensemble model answers correctly while the BERT large model answers incorrectly. It seems that the BERT large model combined the words spirit and Christian to answer this question, even though the word spirit belongs to martial and the word Christian belongs to piety. Our model was able to keep the paired words together and realize that the question has no answer. We believe that our model was able to get the correct answer because of the coattention, which is able to keep the words paired together correctly.
Overall, our model has shown marked qualitative and quantitative improvement over the base and large BERT models. Our SQUAD 2.Q dataset helps improve performance by mimicking the natural variance in questions present in the SQUAD 2.0 dataset. BertQA produces a significant improvement in the No Answer F1 by being able to maintain associations between words via coattention, as seen in Figure FIGREF26, and by ensembling our three best models.
Conclusion
We present a novel architectural scheme that uses transformers to help the network learn directed co-attention, which has improved performance over the BERT baseline. We experimented with several architectural modifications and presented an ablation study. We present SQuAD 2.Q, an augmented dataset developed using NMT backtranslation, which helps our model generalize better over the syntactic and grammatical variance of human writing. Our ensemble model gives a 3.5 point improvement over the BERT Large dev F1. We learned a lot about neural architectural techniques through experimenting with various model configurations. We also learned about how different model components do or don't work together, and that some architectural choices that work well in computer vision, like convolutional layers, do not necessarily work as well in NLP.
We would like to improve the quality of data augmentation to limit noise in the dataset and further extend this work to context augmentation as well. Apart from that, as a next step we would also like to try recent architectures like Transformer-XL BIBREF15, which has the potential to offer additional improvement on the HasAns F1 by remembering long-term dependencies, and evaluate how it scales with our model. Given sufficient compute resources, we would also like to pre-train our C2Q and Q2C layers, similar to BERT pre-training, to learn deeper language semantics and then fine-tune on the SQuAD dataset for the task of Question Answering.
We would like to thank the CS224n Team for all the support throughout the course and also thank the folks at Azure for providing us with Cloud credits. | choosing the answer from the network that had the highest probability and choosing no answer if any of the networks predicted no answer |
a7829abed2186f757a59d3da44893c0172c7012b | a7829abed2186f757a59d3da44893c0172c7012b_0 | Q: What hyperparameters have been tuned?
| number of coattention blocks, the batch size, and the number of epochs trained and ensembled our three best networks |
707db46938d16647bf4b6407b2da84b5c7ab4a81 | 707db46938d16647bf4b6407b2da84b5c7ab4a81_0 | Q: How much F1 was improved after adding skip connections?
Text: Introduction
Through this CS224N Pre-trained Contextual Embeddings (PCE) project, we tackle the question answering problem which is one of the most popular in NLP and has been brought to the forefront by datasets such as SQUAD 2.0. This problem's success stems from both the challenge it presents and the recent successes in approaching human level function. As most, if not all, of the problems humans solve every day can be posed as a question, creating an deep learning based solution that has access to the entire internet is a critical milestone for NLP. Through our project, our group had tested the limits of applying attention in BERT BIBREF0 to improving the network's performance on the SQUAD2.0 dataset BIBREF1. BERT applies attention to the concatenation of the query and context vectors and thus attends these vectors in a global fashion. We propose BERTQA BIBREF2 which adds Context-to-Query (C2Q) and Query-to-Context (Q2C) attention in addition to localized feature extraction via 1D convolutions. We implemented the additions ourselves, while the Pytorch baseline BERT code was obtained from BIBREF3. The SQUAD2.0 answers span from a length of zero to multiple words and this additional attention provides hierarchical information that will allow the network to better learn to detect answer spans of varying sizes. We applied the empirical findings from this part of our project to the large BERT model, which has twice as many layers as the base BERT model. We also augmented the SQUAD2.0 dataset with additional backtranslated examples. This augmented dataset will be publicly available on our github BIBREF4 on the completion of this course. After performing hyperparameter tuning, we ensembled our two best networks to get F1 and EM scores of 82.317 and 79.442 respectively. The experiments took around 300 GPU hours to train.
Related Work
The SQUAD2.0 creators proposed this dataset as a means for networks to actually understand the text they were being interrogated about rather than simply being extractive parsers. Many networks stepped up to the challenge including BERT, BIDAF, and QANET. BERT is a fully feed forward network that is based on the transformer architecture BIBREF5. The base BERT model has 12 transformer encoder layers that terminate in an interchangeable final layer which can be finetuned to the specific task. We chose this network as our baseline because of its use of contextual embeddings and global attention and because of the speed advantage derived from an RNN free architecture. We derived inspiration for our modifications from the BIDAF and QANET models. BIDAF is an LSTM based network that uses character, word, and contextual embeddings which are fed through Context-to-Query (C2Q) and Query-to-Context (Q2C) layers. The final logits are derived from separate Start and End output layers, as opposed to BERT which produces these logits together. Our C2Q/Q2C addition to BERT and the Dense Layer/LSTM based separate final Start and End logit prediction layer were inspired by this paper. We also refered to the QANET model, which is also a fully feed forward network that emphasizes the use of convolutions to capture the local structure of text. Based on this paper, we created a convolutional layer within the C2Q/Q2C architecture to add localized information to BERT's global attention and the C2Q/Q2C coattention.
In addition to referencing these papers that helped us build a successful model, we also explored many other papers which either didn't work with our transformer based model or simply didn't work in combination with our additions to BERT. The three main papers from which we tried to gain ideas are U-Net: Machine Reading Comprehension with Unanswerable Questions BIBREF6, Attention-over-Attention Neural Networks for Reading Comprehension BIBREF7, and FlowQA: Grasping Flow in History for Conversational Machine Comprehension BIBREF8. We tried implementing the multitask learning methodology presented in U-Net by passing the [CLS] token through a series of convolutional layers to create a probability of whether the question has an answer. We combined this prediction with the prediction of Start and End logits by combining the logits' crossentropy loss and the [CLS] binary crossentropy loss. Unfortunately, this additional loss seemed to be hindering the network's learning ability. We conjecture that this type of multitask learning would benefit from full training instead of the finetuning we were restricted to doing because of resources and time. We looked to Attention-over-Attention as a source of additional ways of injecting attention into our network. Attention-over-Attention has a dot-product based attention mechanism that attends to attention vectors instead of embedding vectors. We believe this method did not help in our case because BERT works with the Context and Query as part of the same vector while the Attention-over-Attention model requires completely uncoupled Context and Query vectors. As a side note, we do separate the Context and Query vector derived from BERT before the coattention layers of our model, but these layers are not negatively affected by the fact that these separated vectors contain 'mixed' information between the Context and Query. Finally, we explored the FlowQA paper which proposed combining embeddings from multiple layers as an input to the final prediction layer. We implemented this idea by combining embeddings from multiple BERT layers as an input to our final prediction layer. This final layer was simply an additional transformer encoder and we think that the encoder does not have the LSTM's ability of being able to aggregate information from multiple sources.
Methods
We first focused on directed coattention via context to query and query to context attention as discussed in BIDAF BIBREF9. We then implemented localized feature extraction by 1D convolutions to add local information to coattention based on the QANET architecture BIBREF10. Subsequently, we experimented with different types of skip connections to inject BERT embedding information back into our modified network. We then applied what we learned using the base BERT model to the large BERT model. Finally, we performed hyperparameter tuning by adjusting the number of coattention blocks, the batch size, and the number of epochs trained and ensembled our three best networks. Each part of the project is discussed further in the subsections below.
Methods ::: BERTQA - Directed Coattention
The base BERT network, the baseline for this project, is built with 12 Transformer encoder blocks. These encoder blocks contain multi-head attention and a feed forward network. Each head of the multi-head attention attends to the concatenation of the context and query input and thus forms a global attention output. The output of each Transformer encoder is fed in to the next layer, creating an attention hierarchy. The benefit of this construction is that the model has access to the entire query and context at each level allowing both embeddings to learn from each other and removing the long term memory bottleneck faced by RNN based models. BERTQA uses directed coattention between the query and context, as opposed to attending to their concatenation (Figure FIGREF2). Our architecture consists of a set of 7 directed coattention blocks that are inserted between the BERT embeddings and the final linear layer before loss calculation.
The BERT embeddings are masked to produce seperate query and context embedding vectors (Equations DISPLAY_FORM3 , DISPLAY_FORM4).
Where E is the contextualized embeddings derived from BERT, m is the mask, and c and q are the context and query respectively.
$E_q$ and $E_c$ are then projected through linear layers to obtain key, value, and query vectors (Equation DISPLAY_FORM5).
Where Q, K, and V are the query, key and value vectors.
The Q, K, and V vectors are used in scaled dot-product attention (Equation DISPLAY_FORM6) to create the separate Context-to-Query (C2Q) and Query-to-Context (Q2C) attention vectors.
Where y is q and z is c for Q2C and y is c and z is q for C2Q.
The C2Q attention vector is summed with the query input and the Q2C attention vector is summed with the context input via a skip connection. Each sum vector is then pushed through a fully connected block and then is added back to the output of the fully connected block via another skip connection. Each sum is followed by a layer-wise normalization. The two resulting 3D C2Q and Q2C vectors are concatenated along the third (embedding) dimension which are combined by two 1D convolutions to create the final 3D vector representing the combination of the C2Q and Q2C attention. We use two convolution layers here so that the concatenated dimension is reduced more gradually so that too much information is not lost. This vector then goes into a final attention head to perform separate self attention pre-processing for the Start logit and End logit prediction layers. The Start logit is generated by a linear layer and the End logit is generated by the output of an LSTM which takes the concatenation of the start span and end span embeddings as an input. We used the BERT architecture code written in Pytorch from the HuggingFace github BIBREF3. We wrote our own code for all of the subsequent architecture.
Methods ::: Localized Feature Extraction
To refine the focus of the attention further, we experimented with convolutional feature extraction to add localized information to the coattention output. We added four convolutional layers within the coattention architecture (Figure FIGREF8). The input to these layers were the BERT embeddings and the outputs were added to the outputs of the multi-head attention layers in the coattention architecture and then layer-wise normalized. This combination of coattention and local information provides a hierarchical understanding of the question and context. By itself, BERT provides information about the question and context as a unit, while the coattention extracts information from both the question and context relative to each other. The convolutions extract local features within the question and context to add localized information to the attention and embedding meanings. After adding the separate start and end logic, we found that the localized feature extraction did not allow an improvement in the network's learning via an ablation study where we ran the network without the convolutional layers. We speculate that the convolutions prevented improvement beyond a certain F1 score because they are lossy compressors and the information lost by the convolutions might be essential to downstream learning.
Methods ::: Skip Connections
As shown in Figure FIGREF2, we have a skip connection from the BERT embedding layer combined with the convolved directed co-attention output (C2Q and Q2C). We experimented with 3 skip connection configurations: Simple ResNet inspired Skip, Self-Attention Transformer Skip, and a Highway Network. Of these, the Self-Attention Transformer based skip worked best initially. However, when we combined this skip connection with our logit prediction logic, the network was no longer able learn as well. The Simple ResNet inspired skip BIBREF11 connection solved this issue. It seems that the transformer skip connection followed by the additional transformer encoder blocks that form the beginning of the logit prediction logic processed the BERT embeddings too much and thus lost the benefit of the skip connection. Therefore, we decided to use a Simple ResNet inspired skip alongside the self attention heads for logit prediction. This allows the directed co-attention layers to learn distinct information coming from BERT embeddings via the skip and allows for efficient backpropagation to the BERT layers.
Methods ::: Data Augmentation - SQuAD 2.Q
Inspired by the work presented in BIBREF12, where the authors present a way of generating new questions out of context, and after observing the patterns in SQuAD 2.0, we realized there is substantial syntactic and grammatical variance in the questions written by crowd workers. To help our network generalize better to these variations, we decided to augment the dataset by paraphrasing the questions in the SQuAD training set. We applied backtranslation using the Google Cloud Translation (NMT) API BIBREF13 to translate each question from English to French and then back to English, i.e., two translations per question (Figure FIGREF11).
We call our augmented dataset SQUAD 2.Q and make 3 different versions (35%, 50%, and 100% augmentation) alongside code to generate them publicly available on our github BIBREF4.
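A minimal sketch of this round-trip translation is shown below, assuming a generic `translate(text, source_lang, target_lang)` helper in place of the actual Cloud Translation client; the helper name and signature are assumptions of this sketch.

```python
from typing import Callable, List

def backtranslate(questions: List[str],
                  translate: Callable[[str, str, str], str],
                  pivot: str = "fr") -> List[str]:
    """Paraphrase questions by round-trip translation: English -> pivot -> English.

    `translate(text, source_lang, target_lang)` is a stand-in for whatever
    NMT service performs the two translations per question.
    """
    paraphrased = []
    for q in questions:
        pivoted = translate(q, "en", pivot)
        paraphrased.append(translate(pivoted, pivot, "en"))
    return paraphrased
```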
Methods ::: Hyperparameter Tuning
Hyperparameter tuning has been an ongoing process throughout our experiments. The following are the hyperparameters we tweaked and tuned on BERT Base:
Number of Directed co-Attention layers - We tried various numbers of layers and we found out that N=7 for the co-attention layers gave us optimal performance while being able to fit the model on 2 GPUs (3 F1 score improvement by itself).
Max Sequence length - After initial experiments with default sequence length (context + query token) 384, we switched to a sequence length of 512. This gave us a 0.6 F1 improvement on our model.
Batch Size - Default: 12, We had to use a batch size of 6 for all our experiments due to resource constraints and out of memory issues on the GPU for any larger batch size.
Number of epochs - Default: 2. On increasing the number of epochs we saw a significant degradation in performance (-3 F1 score). We attribute this to the model starting to overfit the training data with high variance; since the batch size is smaller, the gradient updates could also be noisy, not allowing the model to converge optimally.
Learning Rate - Default: 3e-5. We wrote a script to find the optimal learning rate using grid search (see the sketch below) and found the optimal learning rates for SQuAD 2.0 and SQuAD 2.Q respectively, for a batch size of 6.
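A minimal sketch of such a learning-rate grid search, assuming a `train_and_eval(lr)` helper that trains briefly with the given rate and returns a dev F1 score; the candidate set shown is illustrative, not the one we actually searched over.

```python
def grid_search_lr(train_and_eval, candidates=(1e-5, 2e-5, 3e-5, 5e-5)):
    """Try each candidate learning rate on a short run / data subset and
    keep the one with the best dev F1."""
    scores = {lr: train_and_eval(lr) for lr in candidates}
    best_lr = max(scores, key=scores.get)
    return best_lr, scores
```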
Methods ::: BERT Large and Ensembling
We applied what we learned from the previous five subsections to the large BERT model, which has twice as many layers as the base model. In order to fit this model on our GPU and still use 7 of our coattention layers, we were limited to two examples on the GPU at a time. However, we also found that BERT large requires a larger batch size to achieve good performance. As such, we kept the batch size at 6, as with the base model, and used a gradient accumulation of 3 so that only two examples were on the GPU at a time. Additionally, the large model is very sensitive to the learning rate, and the rate of 3e-5 which we used with the smaller model no longer worked. We ran the model on a subset of the data with various learning rates and found that 1.1e-5 to 1.5e-5 works best for the large model, depending on the dataset used (SQuAD 2.0 or SQuAD 2.Q).
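A minimal sketch of the gradient-accumulation loop is shown below; the `model`, `optimizer`, and `micro_batches` names are assumptions, as is a model call that returns an object with a `.loss` attribute.

```python
# With a batch size of 6 and an accumulation factor of 3, only 2 examples
# sit on the GPU at once while the optimizer still sees gradients for 6.
ACCUM_STEPS = 3

optimizer.zero_grad()
for step, batch in enumerate(micro_batches):      # each micro-batch holds 2 examples
    loss = model(**batch).loss / ACCUM_STEPS      # scale so the sum matches a full batch
    loss.backward()
    if (step + 1) % ACCUM_STEPS == 0:
        optimizer.step()
        optimizer.zero_grad()
```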
After experimenting with multiple combinations of the ideas we described above, we ensembled our three best networks to create our final predictions. The configurations of our three best networks are described in Table TABREF19.
We constructed the ensembled predictions by choosing the answer from the network that had the highest probability and choosing no answer if any of the networks predicted no answer.
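A minimal sketch of this ensembling rule, assuming each model's output has been reduced to an (answer text, probability) pair with the empty string standing for no answer:

```python
def ensemble_prediction(predictions):
    """Combine per-model predictions for one question.

    Any no-answer vote wins; otherwise take the most confident answer.
    """
    if any(ans == "" for ans, _ in predictions):
        return ""
    return max(predictions, key=lambda p: p[1])[0]
```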
Results and Analysis
Table TABREF20 reports the F1 and EM scores obtained for the experiments on the base model. The first column reports the base BERT baseline scores, while the second reports the results for the C2Q/Q2C attention addition. The two skip columns report scores for the skip connection connecting the BERT embedding layer to the coattention output (Simple Skip) and the scores for the same skip connection containing a Transformer block (Transformer Skip). The final column presents the result of the localized feature extraction added inside the C2Q/Q2C architecture (Inside Conv - Figure FIGREF8).
The results presented above verify our hypothesis that adding layers of directed attention to BERT improves its performance. The C2Q/Q2C network produced a significant improvement in the No Answer F1 score while causing a symmetric drop in the Has Answer F1 score. The C2Q/Q2C network attends the context relative to the query and vice versa instead of as a concatenated whole. This method of attention provides more information regarding whether there is an answer to the question in the context than the original BERT attention. The skip connections improved the scores further by adding the BERT embeddings back in to the coattention vectors and providing information that may have been lost by the C2Q/Q2C network in addition to providing a convenient path for backpropagation to the BERT embedding layers. The skip connection containing the transformer provides minimal gains while adding a significant overhead to runtime. Therefore, we built the final convolutional experiments on the Simple Skip architecture. The localized feature extraction within the coattention network produced the best results in the base model, but prevented an improvement in our modified BERT large model.
Table TABREF21 shows the F1 and EM scores obtained for the experiments on the large model. The models labeled 1, 2, and 3 are described in detail in Section 3.6.
Each of the models built on BERT large used our augmented dataset in addition to the coattention architecture, simple skip connection, and separate start and end logit logic. The Model 1 results show that a moderately augmented (35%) data set helps the training since both unaugmented and highly augmented (50%) models did not perform as well. It seems that adding too much augmented data reduces the F1 because the augmented data is noisy relative to the original data. The performance difference between Model 1 and 2 support the use of the LSTM in creating the End logit predictions. The LSTM is successfully combining the information from the Start logit and the End embeddings to provide a good input to the End logit linear layer. The ensemble model performed the best by far due to a significant increase in the no answer F1 which can be attributed to the ensembling method which is biased towards models that predict no answer.
We investigated the attention distributions produced by our proposed model by modifying the open source code from BertViz BIBREF14 . For the case where the question has an answer in the context (Figure FIGREF22), the attention heads produce activation around the answer phrase "in the 10th and 11th centuries". In the case where there is no answer in the context, the attention heads produce considerable activation on the [SEP] word-piece which is outside the context span.
As seen in Figure FIGREF25, we conducted an error analysis over different question types. Note that questions that did not fit into the 7 bins were classified as "Other". An example of a question in the "Other" category would be an "Is it?" question, which is a minority set in SQuAD 2.0. Over the baseline, our model presents an overall improvement across the different types of questions in the SQuAD 2.0 dev set. In the case of "Which" questions, our model goes wrong 69 times whereas the baseline model goes wrong 64 times, a very small numeric difference. However, for the "What" questions the baseline model produces incorrect outputs for 776 examples while our model produces 30 fewer incorrect outputs. The reason for this lapse appears to be related to data augmentation, where we observed that "Which" was often backtranslated as "What" and vice versa. Thus, the questions in these two classes are mixed and a completely accurate analysis of improvements in these classes is not possible.
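A minimal sketch of the binning used for this analysis, assuming the seven bins are the usual wh-words and that the leading token determines the type (a simplification of the actual bucketing):

```python
from collections import Counter

QUESTION_TYPES = ("what", "which", "who", "when", "where", "why", "how")

def question_type(question: str) -> str:
    """Bucket a question by its leading wh-word; everything else is 'Other'."""
    first = question.strip().lower().split()[0] if question.strip() else ""
    return first.capitalize() if first in QUESTION_TYPES else "Other"

def error_counts(wrong_questions):
    """Count incorrect predictions per question type."""
    return Counter(question_type(q) for q in wrong_questions)
```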
Figure FIGREF26 shows an example cropped context and question that our ensemble model answers correctly while the BERT large model answers incorrectly. It seems that the BERT large model combined the words spirit and Christian to answer this question, even though the word spirit belongs to martial and the word Christian belongs to piety. Our model was able to keep the paired words together and realize that the question has no answer. We believe that our model was able to get the correct answer because of the coattention, which keeps the words paired together correctly.
Overall, our model has shown marked qualitative and quantitative improvement over the base and large BERT models. Our SQUAD 2.Q dataset helps improve performance by mimicking the natural variance in questions present in the SQUAD 2.0 dataset. BertQA produces a significant improvement in the No Answer F1 by being able to maintain associations between words via coattention, as seen in Figure FIGREF26, and by ensembling our three best models.
Conclusion
We present a novel architectural scheme that uses transformers to help the network learn directed co-attention, which improves performance over the BERT baseline. We experimented with several architectural modifications and presented an ablation study. We present SQuAD 2.Q, an augmented dataset developed using NMT backtranslation, which helps our model generalize better over the syntactic and grammatical variance of human writing. Our ensemble model gives a 3.5 point improvement over the BERT Large dev F1. We learned a lot about neural architectural techniques through experimenting with various model configurations. We also learned about how different model components do or don't work together, and that some architectural choices, like convolutional layers, that work so well in computer vision do not necessarily work as well in NLP.
We would like to improve on the quality of data augmentation to limit noise in the dataset and further extend this work to context augmentation as well. Apart from that, we would also like to try recent architectures like Transformer-XL BIBREF15 which has potential to offer additional improvement on HasAns F1 by remembering long term dependencies and evaluate how it scales with our model as a next step. Given sufficient compute resources we would also like to pre-train our C2Q and Q2C layers similar to BERT pre-training to learn deeper language semantics and then fine-tune it on the SQuAD dataset for the task of Question Answering.
We would like to thank the CS224n Team for all the support throughout the course and also thank the folks at Azure for providing us with Cloud credits. | Simple Skip improves F1 from 74.34 to 74.81
Transformer Skip improves F1 from 74.34 to 74.95 |
d72548fa4d29115252605d5abe1561a3ef2430ca | d72548fa4d29115252605d5abe1561a3ef2430ca_0 | Q: Where do they retrieve neighbor n-grams from in their approach?
Text: Introduction
Over the last few years, neural sequence to sequence models BIBREF0 , BIBREF1 , BIBREF2 have revolutionized the field of machine translation by significantly improving translation quality over their phrase based counterparts BIBREF3 , BIBREF4 , BIBREF5 . With more gains arising from continued research on new neural network architectures and accompanying training techniques BIBREF6 , BIBREF7 , BIBREF8 , NMT researchers, both in industry and academia, have doubled down on their ability to train high capacity models on large corpora with gradient based optimization.
However, despite huge improvements in overall translation quality NMT has shown some glaring weaknesses, including idiom processing, and rare word or phrase translation BIBREF9 , BIBREF10 , BIBREF11 - tasks that should be easy if the model could retain learned information from individual training examples. NMT has also been shown to perform poorly when dealing with multi-domain data BIBREF12 . This `catastrophic forgetting' problem has been well-studied in traditional neural network literature, caused by parameter shift during the training process BIBREF13 , BIBREF14 . Non-parametric methods, on the other hand, are resistant to forgetting but are prone to over-fitting due to their reliance on individual training examples. We focus on a non-parametric extension to NMT, hoping to combine the generalization ability of neural networks with the eidetic memory of non-parametric methods. Given a translation query, we rely on an external retrieval mechanism to find similar source-target instances in the training corpus, which are then utilized by the model.
There has been some work on semi-parametric NMT BIBREF15 , BIBREF16 , BIBREF17 , but its effectiveness has been confined to narrow domain datasets. Existing approaches have relied on sentence level similarity metrics for retrieval, which works well for domains with high train-test overlap, but fails to retrieve useful candidates for broad domains. Even if we could find training instances with overlapping phrases it's likely that the information in most retrieved source-target pairs is noise for the purpose of translating the current query.
To retrieve useful candidates when sentence similarity is low, we use n-gram retrieval instead of sentence retrieval. This results in neighbors which have high local overlap with the source sentence, even if they are significantly different in terms of overall sentence similarity. This is intuitively similar to utilizing information from a phrase table BIBREF18 within NMT BIBREF19 , without discarding the global context that is lost when a phrase table is constructed. We also propose another simple extension using dense vectors for n-gram retrieval, which allows us to exploit similarities beyond lexical overlap.
To effectively extract the signal from the noisy retrieved neighbors, we develop an extension of the approach proposed in BIBREF17 . While BIBREF17 encode the retrieved targets without any context, we incorporate information from the current and retrieved sources while encoding the retrieved target, in order to distinguish useful information from noise.
We evaluate our semi-parametric NMT approach on two tasks.
Semi-parametric NMT
Standard approaches for Neural Machine Translation rely on seq2seq architectures BIBREF0 , BIBREF1 , where given a source sequence INLINEFORM0 and a target sequence INLINEFORM1 , the goal is to model the probability distribution, INLINEFORM2 .
Semi-parametric NMT BIBREF19 , BIBREF15 approaches this learning problem with a different formulation, by modeling INLINEFORM0 instead, where INLINEFORM1 is the set of sentence pairs where the source sentence is a neighbor of INLINEFORM2 , retrieved from the training corpus using some similarity metric. This relies on a two step approach - the retrieval stage finds training instances, INLINEFORM3 , similar to the source sentence INLINEFORM4 , and the translation stage generates the target sequence INLINEFORM5 given INLINEFORM6 and INLINEFORM7 . We follow this setup, proposing improvements to both stages in order to enhance the applicability of semi-parametric NMT to more general translation tasks.
Retrieval Approaches
Existing approaches have proposed using off the shelf search engines for the retrieval stage. However, our objective differs from traditional information retrieval, since the goal of retrieval in semi-parametric NMT is to find neighbors which might improve translation performance, which might not correlate with maximizing sentence similarity.
Our baseline strategy relies on a sentence level similarity score, similar to those used for standard information retrieval tasks BIBREF24 . We compare this against finer-grained n-gram retrieval using the same similarity metric. We also propose a dense vector based n-gram retrieval strategy, using representations extracted from a pre-trained NMT model.
Our baseline approach relies on a simple inverse document frequency (IDF) based similarity score. We define the IDF score of any token, INLINEFORM0 , as INLINEFORM1 , where INLINEFORM2 is the number of sentence pairs in training corpus and INLINEFORM3 is the number of sentences INLINEFORM4 occurs in. Let any two sentence pairs in the corpus be INLINEFORM5 and INLINEFORM6 . Then we define the similarity between INLINEFORM7 and INLINEFORM8 by, DISPLAYFORM0
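Since the formulas above appear only as placeholders, the sketch below shows one standard reading of the IDF score together with a plausible IDF-weighted overlap similarity; the exact form used in the experiments may differ.

```python
import math
from collections import Counter

def idf_table(corpus_sentences):
    """IDF of a token t: log(C / C_t), where C is the number of training
    sentence pairs and C_t the number of sentences containing t
    (a standard definition assumed here)."""
    C = len(corpus_sentences)
    df = Counter(tok for sent in corpus_sentences for tok in set(sent.split()))
    return {tok: math.log(C / cnt) for tok, cnt in df.items()}

def idf_similarity(x1, x2, idf):
    """One plausible sentence similarity: total IDF weight of the
    overlapping tokens (an assumption standing in for the missing formula)."""
    overlap = set(x1.split()) & set(x2.split())
    return sum(idf.get(tok, 0.0) for tok in overlap)
```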
For every sentence in the training, dev and test corpora, we find the INLINEFORM0 most similar training sentence pairs and provide them as context to NMT.
Motivated by phrase based SMT, we retrieve neighbors which have high local, sub-sentence level overlap with the source sentence. We adapt our approach to retrieve n-grams instead of sentences. We note that the similarity metric defined above for sentences is equally applicable for n-gram retrieval.
Let INLINEFORM0 be a sentence. Then the set of all possible n-grams of X, for a given INLINEFORM1 , can be defined as INLINEFORM2 (also including padding at the end). To reduce the number of n-grams used to represent every sentence, we define the reduced set of n-grams for X to be INLINEFORM3 .
We represent every sentence by their reduced n-gram set. For every n-gram in INLINEFORM0 , we find the closest n-gram in the training set using the IDF similarity defined above. For each retrieved n-gram we find the corresponding sentence (In case an n-gram is present in multiple sentences, we choose one randomly). The set of neighbors of INLINEFORM1 is then the set of all sentences in the training corpus that contain an n-gram that maximizes the n-gram similarity with any n-gram in INLINEFORM2 .
To capture phrases of different lengths we use multiple n-gram widths, INLINEFORM0 . In case a sentence has already been added to the retrieved set, we find the next most similar sentence to avoid having duplicates. The number of neighbors retrieved for each source sentence is proportional to its length.
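A minimal sketch of the n-gram retrieval described above, assuming the reduced n-gram set consists of non-overlapping (stride-n) n-grams and assuming an n-gram-to-sentence index and an n-gram similarity function (e.g. the IDF similarity sketched earlier) are available:

```python
def reduced_ngrams(tokens, n):
    """Non-overlapping n-grams with padding at the end -- one plausible
    reading of the 'reduced set', since its exact definition is only a
    placeholder in the text."""
    padded = tokens + ["<pad>"] * ((-len(tokens)) % n)
    return [tuple(padded[i:i + n]) for i in range(0, len(padded), n)]

def retrieve_neighbors(src_tokens, train_ngram_index, ngram_sim, widths=(2, 4)):
    """For each n-gram of the source, find the most similar training n-gram
    and collect the sentences it came from, skipping duplicates.

    `train_ngram_index` maps a training n-gram to a sentence id; it and
    `ngram_sim` are assumed helpers.
    """
    neighbors, seen = [], set()
    for n in widths:
        for gram in reduced_ngrams(src_tokens, n):
            best = max(train_ngram_index, key=lambda g: ngram_sim(gram, g))
            sent_id = train_ngram_index[best]
            if sent_id not in seen:
                seen.add(sent_id)
                neighbors.append(sent_id)
    return neighbors
```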
We also extend our n-gram retrieval strategy with dense vector based n-gram representations. The objective behind using a dense vector based approach is to incorporate information relevant to the translation task in the retrieval stage. We use a pre-trained Transformer Base BIBREF6 encoder trained on WMT to generate sub-word level dense representations for the sentence. The representation for each n-gram is now defined to be the mean of the representations of all its constituent sub-words. We use the INLINEFORM0 distance of n-gram representations as the retrieval criterion. Note that we use a sub-word level decomposition of sentences for dense retrieval, as compared to word-level for IDF based retrieval (i.e., n-grams are composed of sub-words instead of words).
Following the approach described for IDF based n-gram retrieval, we use multiple values of INLINEFORM0 , and remove duplicate neighbors while creating the retrieved set.
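A minimal NumPy sketch of the dense n-gram representations and L2-based lookup; the stride-n pooling mirrors the reduced n-gram set and is an assumption of this sketch.

```python
import numpy as np

def ngram_vectors(subword_embs, n):
    """Mean of the sub-word representations inside each stride-n n-gram.
    `subword_embs` is a (seq_len, dim) array from a pre-trained NMT encoder."""
    vecs = []
    for i in range(0, len(subword_embs), n):
        vecs.append(subword_embs[i:i + n].mean(axis=0))
    return np.stack(vecs)

def nearest_by_l2(query_vec, candidate_vecs):
    """Index of the candidate n-gram representation closest in L2 distance."""
    dists = np.linalg.norm(candidate_vecs - query_vec, axis=1)
    return int(np.argmin(dists))
```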
NMT with Context Retrieval
To incorporate the retrieved neighbors, INLINEFORM0 , within the NMT model, we first encode them using Transformer layers, as described in subsection UID12 . This encoded memory is then used within the decoder via an attention mechanism, as described in subsection UID15 .
We now describe how each retrieved translation pair, INLINEFORM0 , is encoded. This architecture is illustrated in Figure FIGREF9 .
We first encode the retrieved source, INLINEFORM0 , in a Transformer layer. Apart from self-attention, we incorporate information from the encoder representation of the current source, INLINEFORM1 , using decoder style cross-attention.
The retrieved target, INLINEFORM0 , is encoded in a similar manner, attending the encoded representation of INLINEFORM1 generated in the previous step.
The encoded representations for all targets, INLINEFORM0 , are then concatenated along the time axis to form the Conditional Source Target Memory (CSTM).
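A hedged PyTorch sketch of the CSTM construction is shown below; the actual implementation is in Tensorflow-Lingvo, and the use of single `TransformerDecoderLayer`s (self-attention plus cross-attention) is an illustrative simplification.

```python
import torch
import torch.nn as nn

class CSTMEncoder(nn.Module):
    """Each retrieved source is encoded with cross-attention to the current
    source encoding, each retrieved target with cross-attention to its
    encoded retrieved source, and the encoded targets are concatenated
    along the time axis to form the CSTM."""
    def __init__(self, d_model: int = 512, nhead: int = 8):
        super().__init__()
        self.src_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.tgt_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)

    def forward(self, cur_src_enc, retrieved_srcs, retrieved_tgts):
        # cur_src_enc: (batch, src_len, d); retrieved_*: lists of (batch, len_i, d)
        encoded_tgts = []
        for r_src, r_tgt in zip(retrieved_srcs, retrieved_tgts):
            src_enc = self.src_layer(r_src, memory=cur_src_enc)
            encoded_tgts.append(self.tgt_layer(r_tgt, memory=src_enc))
        return torch.cat(encoded_tgts, dim=1)   # concatenate along time
```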
We use gated multi-source attention to combine the context from the source encoder representations and the CSTM. This is similar to the gated attention employed by BIBREF17 . We use a Transformer based decoder that attends to both, the encoder outputs and the CSTM, in every cross-attention layer. The rest of the decoder architecture remains unchanged.
Let the context vectors obtained by applying multi-head attention to the source and memory, with query INLINEFORM0 be INLINEFORM1 and INLINEFORM2 respectively. Then the gated context vector, INLINEFORM3 , is given by, DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 is the scalar gating variable at time-step t, and INLINEFORM1 and INLINEFORM2 are learned parameters. These steps are illustrated in Figure FIGREF10 .
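A minimal PyTorch sketch of the gated combination follows; the gate parameterization is assumed since the equations above are given only as placeholders, and the actual model is implemented in Tensorflow-Lingvo.

```python
import torch
import torch.nn as nn

class GatedMultiSourceAttention(nn.Module):
    """Scalar-gated combination of the source context vector and the CSTM
    context vector, per decoding time-step."""
    def __init__(self, d_model: int = 512):
        super().__init__()
        self.w_s = nn.Linear(d_model, 1)
        self.w_m = nn.Linear(d_model, 1)

    def forward(self, c_src: torch.Tensor, c_mem: torch.Tensor) -> torch.Tensor:
        # c_src, c_mem: (batch, tgt_len, d_model) context vectors
        g = torch.sigmoid(self.w_s(c_src) + self.w_m(c_mem))  # scalar gate per step
        return g * c_src + (1.0 - g) * c_mem
```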
Data and Evaluation
We compare the performance of a standard Transformer Base model and our semi-parametric NMT approach on an English-French translation task. We create a new heterogeneous dataset, constructed from a combination of the WMT training set (36M pairs), the IWSLT bilingual corpus (237k pairs), JRC-Acquis (797k pairs) and OpenSubtitles (33M pairs). For WMT, we use newstest 13 for validation and newstest 14 for test. For IWSLT, we use a combination of the test corpora from 2012-14 for validation and test 2015 for eval. For OpenSubtitles and JRC-Acquis, we create our own splits for validation and test, since no benchmark split is publicly available. After deduping, the JRC-Acquis test and validation set contain 6574 and 5121 sentence pairs respectively. The OpenSubtitles test and validation sets contain 3975 and 3488 pairs. For multi-domain training, the validation set is a concatenation of the four individual validation sets.
All datasets are tokenized with the Moses tokenizer BIBREF25 and mixed without any sampling. We use a shared vocabulary Sentence-Piece Model BIBREF26 for sub-word tokenization, with a vocabulary size of 32000 tokens. We train each model for 1M steps, and choose the best checkpoint from the last 5 checkpoints based on validation performance. BLEU scores are computed with tokenized true-cased output and references with multi-bleu.perl from Moses.
For IDF based sentence retrieval, for each sentence in the training, dev and test corpus, we use INLINEFORM0 neighbors per example during both, training and evaluation. For the N-Gram level retrieval strategies, we used INLINEFORM1 neighbors during training, and neighbors corresponding to all n-grams during decoding. This was meant to limit memory requirements and enable the model to fit on P100s during training. We used n-gram width, INLINEFORM2 , for both IDF and dense vector based n-gram retrieval approaches. For scalability reasons, we restricted the retrieval set to the in-domain training corpus, i.e. neighbors for all train, dev and test sentences in the JRC-Acquis corpus were retrieved from the JRC-Acquis training split, and similarly for the other datasets.
Hyper-parameters and Optimization
For our baseline model we use the standard Transformer Base model BIBREF6 . For the semi-parametric model, all our hyper-parameters for attention (8 attention heads), model dimensions (512) and hidden dimensions (2048), including those used in the CSTM memory are equivalent to Transformer Base.
The Transformer baselines are trained on 16 GPUs, with the learning rate, warm-up schedule and batching scheme described in BIBREF6 . The semi-parametric models were trained on 32 GPUs with each replica split over 2 GPUs, one to train the translation model and the other for computing the CSTM. We used a conservative learning rate schedule (3, 40K) BIBREF8 to train the semi-parametric models.
We apply a dropout rate BIBREF27 of 0.1 to all inputs, residuals, attentions and ReLU connections in both models. We use Adam BIBREF28 to train all models, and apply label smoothing with an uncertainty of 0.1 BIBREF29 . In addition to the transformer layers, layer normalization BIBREF30 was applied to the output of the CSTM. All models are implemented in Tensorflow-Lingvo BIBREF31 .
Results
We compare the test performance of a multi-domain Transformer Base and our semi-parametric model using dense vector based n-gram retrieval and CSTM in Table TABREF21 . Apart from significantly improving performance by more than 10 BLEU points on JRC-Acquis, 2-3 BLEU on OpenSubtitles and IWSLT, we notice a moderate gain of 0.5 BLEU points on WMT 14.
Comparison of retrieval strategies
We compare the performance of all 3 retrieval strategies in Table TABREF21 . The semi-parametric model with sentence level retrieval out-performs the seq2seq model by a huge margin on JRC-Acquis and OpenSubtitles. A sample from the JRC-Acquis dataset where the semi-parametric approach improves significantly over the neural approach is included in Table TABREF22 . We notice that there is a lot of overlap between the source sentence and the retrieved source, resulting in the semi-parametric model copying large chunks from the retrieved target. However, its performance is noticeably worse on WMT and IWSLT. Based on a manual inspection of the retrieved candidates, we attribute these losses to retrieval failures. For broad domain datasets like WMT and IWSLT sentence retrieval fails to find good candidates.
Switching to n-gram level retrieval brings the WMT performance close to the seq2seq approach, and IWSLT performance to 2 BLEU points above the baseline model. Representative examples from IWSLT and WMT where n-gram retrieval improves over sentence level retrieval can be seen in Tables TABREF24 and TABREF25 . Despite the majority of the retrieved neighbor having nothing in common with the source sentence, n-gram retrieval is able to find neighbors that contain local overlaps.
Using dense n-gram retrieval allows us to move beyond lexical overlap and retrieve semantically similar n-grams even when the actual tokens are different. As a result, dense n-gram retrieval improves performance over all our models on all 4 datasets. An illustrative example from WMT is included in Table TABREF26 .
Memory Ablation Experiments
We report the performance of the various memory ablations in Table TABREF27 . We first remove the retrieved sources, INLINEFORM0 , from the CSTM, resulting in an architecture where the encoding of a retrieved target, INLINEFORM1 , only incorporates information from the source INLINEFORM2 , represented by the row CTM in the table. This results in a clear drop in performance on all datasets. We ablate further by removing the attention to the original source INLINEFORM3 , resulting in a slightly smaller drop in performance (represented by TM). These experiments indicate that incorporating context from the sources significantly contributes to performance, by allowing the model to distinguish between relevant context and noise.
Non-Parametric Adaptation
Using a semi-parametric formulation for MT opens up the possibility of non-parametric adaptation. The biggest advantage of this approach is the possibility of training a single massively customizable model which can be adapted to any new dataset or document at inference time, by just updating the retrieval dataset.
We evaluate our model's performance on non-parametric adaptation and compare it against a fully fine-tuned model. In this setting, we train a baseline model and a dense n-gram based semi-parametric model on the WMT training corpus. We only retrieve and train on examples from the WMT corpus during training. We use the same hyper-parameters and training approaches used for the multi-domain experiments, as in Section SECREF3 .
The baseline model is then fine-tuned independently on JRC-Acquis, OpenSubtitles and IWSLT. The semi-parametric model is adapted non-parametrically to these three datasets, without any parameter updates. Adaptation is achieved via the retrieval mechanism - while evaluating, we retrieve similar examples from their respective training datasets. To quantify headroom, we also fine-tune our semi-parametric model on each of these datasets.
The results for non-parametric adaptation experiments are documented in Table TABREF30 . We notice that the non-parametric adaptation strategy significantly out-performs the base model on all 4 datasets. More importantly, we find that our approach is capable of adapting to both JRC-Acquis and OpenSubtitles via just the retrieval apparatus, and out-performs the fully fine-tuned model, indicating that non-parametric adaptation might be a reasonable approach when adapting to many narrow domains or documents.
In-domain fine-tuning on top of non-parametric adaptation further improves by 2 BLEU points on all datasets, increasing the gap even further with the seq2seq adapted models.
Related Work
Tools incorporating information from individual translation pairs, or translation memories BIBREF32 , BIBREF33 , have been widely utilized by human translators in the industry. There have been a few efforts attempting to combine non-parametric methods with NMT BIBREF15 , BIBREF16 , BIBREF17 , but the key difference of our approach is the introduction of local, sub-sentence level similarity in the retrieval process, via n-gram level retrieval. Combined with our architectural improvements, motivated by the target encoder and gated attention from BIBREF17 and the extended transformer model from BIBREF34 , our semi-parametric NMT model is able to out-perform purely neural models in broad multi-domain settings.
Some works have proposed using phrase tables or the outputs of Phrase based MT within NMT BIBREF19 , BIBREF35 , BIBREF36 . While this reduces the noise present within the retrieved translation pairs, it requires training and maintaining a separate SMT system which might introduce errors of its own.
Another class of methods requires fine-tuning the entire NMT model to every instance at inference time, using retrieved examples BIBREF37 , BIBREF38 , but these approaches require running expensive gradient descent steps before every translation.
Beyond NMT, there have been a few other attempts to incorporate non-parametric approaches into neural generative models BIBREF39 , BIBREF40 , BIBREF41 . This strong trend towards combining neural generative models with non-parametric methods is an attempt to counter the weaknesses of neural networks, especially their failure to remember information from individual training instances and the diversity problem of seq2seq models BIBREF42 , BIBREF43 .
While our approach relies purely on retrieval from the training corpus, there has been quite a lot of work, especially on Question Answering, that attempts to find additional signals to perform the supervised task in the presence of external knowledge sources BIBREF44 , BIBREF45 . Retrieving information from unsupervised corpora by utilizing multilingual representations BIBREF46 might be another interesting extension of this work.
Conclusions and Future Work
We make two major technical contributions in this work which enable us to improve the quality of semi-parametric NMT on broad domain datasets. First, we propose using n-gram retrieval, with standard Inverse Document Frequency similarity and with dense vector representations, that takes into account local sentence similarities that are critical to translation. As a result we are able to retrieve useful candidates even for broad domain tasks with little train-test overlap. Second, we propose a novel architecture to encode retrieved source-target pairs, allowing the model to distinguish useful information from noise by encoding the retrieved targets in context of the current translation task.
We demonstrate, for the first time, that semi-parametric methods can beat neural models by significant margins on multi-domain Machine Translation. By successfully training semi-parametric neural models on a broad domain dataset (WMT), we also open the door for non-parametric adaptation, showing huge improvements on new domains without any parameter updates.
While we constrain this work to retrieved context, our architecture can be utilized to incorporate information from other sources of context, including documents, bilingual dictionaries etc. Using dense representations for retrieval also allows extending semi-parametric neural methods to other input modalities, including images and speech.
With this work, we hope to motivate further investigation into semi-parametric neural models for and beyond Neural Machine Translation.
Acknowledgments
We would like to thank Naveen Arivazhagan, Macduff Hughes, Dmitry Lepikhin, Mia Chen, Yuan Cao, Ciprian Chelba, Zhifeng Chen, Melvin Johnson and other members of the Google Brain and Google Translate teams for their useful inputs and discussions. We would also like to thank the entire Lingvo development team for their foundational contributions to this project. | represent every sentence by their reduced n-gram set |
24d06808fa3b903140659ee5a471fdfa86279980 | 24d06808fa3b903140659ee5a471fdfa86279980_0 | Q: To which systems do they compare their results against?
Text: Introduction
Over the last few years, neural sequence to sequence models BIBREF0 , BIBREF1 , BIBREF2 have revolutionized the field of machine translation by significantly improving translation quality over their phrase based counterparts BIBREF3 , BIBREF4 , BIBREF5 . With more gains arising from continued research on new neural network architectures and accompanying training techniques BIBREF6 , BIBREF7 , BIBREF8 , NMT researchers, both in industry and academia, have doubled down on their ability to train high capacity models on large corpora with gradient based optimization.
However, despite huge improvements in overall translation quality NMT has shown some glaring weaknesses, including idiom processing, and rare word or phrase translation BIBREF9 , BIBREF10 , BIBREF11 - tasks that should be easy if the model could retain learned information from individual training examples. NMT has also been shown to perform poorly when dealing with multi-domain data BIBREF12 . This `catastrophic forgetting' problem has been well-studied in traditional neural network literature, caused by parameter shift during the training process BIBREF13 , BIBREF14 . Non-parametric methods, on the other hand, are resistant to forgetting but are prone to over-fitting due to their reliance on individual training examples. We focus on a non-parametric extension to NMT, hoping to combine the generalization ability of neural networks with the eidetic memory of non-parametric methods. Given a translation query, we rely on an external retrieval mechanism to find similar source-target instances in the training corpus, which are then utilized by the model.
There has been some work on semi-parametric NMT BIBREF15 , BIBREF16 , BIBREF17 , but its effectiveness has been confined to narrow domain datasets. Existing approaches have relied on sentence level similarity metrics for retrieval, which works well for domains with high train-test overlap, but fails to retrieve useful candidates for broad domains. Even if we could find training instances with overlapping phrases it's likely that the information in most retrieved source-target pairs is noise for the purpose of translating the current query.
To retrieve useful candidates when sentence similarity is low, we use n-gram retrieval instead of sentence retrieval. This results in neighbors which have high local overlap with the source sentence, even if they are significantly different in terms of overall sentence similarity. This is intuitively similar to utilizing information from a phrase table BIBREF18 within NMT BIBREF19 , without discarding the global context that is lost when a phrase table is constructed. We also propose another simple extension using dense vectors for n-gram retrieval, which allows us to exploit similarities beyond lexical overlap.
To effectively extract the signal from the noisy retrieved neighbors, we develop an extension of the approach proposed in BIBREF17 . While BIBREF17 encode the retrieved targets without any context, we incorporate information from the current and retrieved sources while encoding the retrieved target, in order to distinguish useful information from noise.
We evaluate our semi-parametric NMT approach on two tasks.
Semi-parametric NMT
Standard approaches for Neural Machine Translation rely on seq2seq architectures BIBREF0 , BIBREF1 , where given a source sequence INLINEFORM0 and a target sequence INLINEFORM1 , the goal is to model the probability distribution, INLINEFORM2 .
Semi-parametric NMT BIBREF19 , BIBREF15 approaches this learning problem with a different formulation, by modeling INLINEFORM0 instead, where INLINEFORM1 is the set of sentence pairs where the source sentence is a neighbor of INLINEFORM2 , retrieved from the training corpus using some similarity metric. This relies on a two step approach - the retrieval stage finds training instances, INLINEFORM3 , similar to the source sentence INLINEFORM4 , and the translation stage generates the target sequence INLINEFORM5 given INLINEFORM6 and INLINEFORM7 . We follow this setup, proposing improvements to both stages in order to enhance the applicability of semi-parametric NMT to more general translation tasks.
Retrieval Approaches
Existing approaches have proposed using off the shelf search engines for the retrieval stage. However, our objective differs from traditional information retrieval, since the goal of retrieval in semi-parametric NMT is to find neighbors which might improve translation performance, which might not correlate with maximizing sentence similarity.
Our baseline strategy relies on a sentence level similarity score, similar to those used for standard information retrieval tasks BIBREF24 . We compare this against finer-grained n-gram retrieval using the same similarity metric. We also propose a dense vector based n-gram retrieval strategy, using representations extracted from a pre-trained NMT model.
Our baseline approach relies on a simple inverse document frequency (IDF) based similarity score. We define the IDF score of any token, INLINEFORM0 , as INLINEFORM1 , where INLINEFORM2 is the number of sentence pairs in training corpus and INLINEFORM3 is the number of sentences INLINEFORM4 occurs in. Let any two sentence pairs in the corpus be INLINEFORM5 and INLINEFORM6 . Then we define the similarity between INLINEFORM7 and INLINEFORM8 by, DISPLAYFORM0
For every sentence in the training, dev and test corpora, we find the INLINEFORM0 most similar training sentence pairs and provide them as context to NMT.
Motivated by phrase based SMT, we retrieve neighbors which have high local, sub-sentence level overlap with the source sentence. We adapt our approach to retrieve n-grams instead of sentences. We note that the similarity metric defined above for sentences is equally applicable for n-gram retrieval.
Let INLINEFORM0 be a sentence. Then the set of all possible n-grams of X, for a given INLINEFORM1 , can be defined as INLINEFORM2 (also including padding at the end). To reduce the number of n-grams used to represent every sentence, we define the reduced set of n-grams for X to be INLINEFORM3 .
We represent every sentence by their reduced n-gram set. For every n-gram in INLINEFORM0 , we find the closest n-gram in the training set using the IDF similarity defined above. For each retrieved n-gram we find the corresponding sentence (In case an n-gram is present in multiple sentences, we choose one randomly). The set of neighbors of INLINEFORM1 is then the set of all sentences in the training corpus that contain an n-gram that maximizes the n-gram similarity with any n-gram in INLINEFORM2 .
To capture phrases of different lengths we use multiple n-gram widths, INLINEFORM0 . In case a sentence has already been added to the retrieved set, we find the next most similar sentence to avoid having duplicates. The number of neighbors retrieved for each source sentence is proportional to its length.
We also extend our n-gram retrieval strategy with dense vector based n-gram representations. The objective behind using a dense vector based approach is to incorporate information relevant to the translation task in the retrieval stage. We use a pre-trained Transformer Base BIBREF6 encoder trained on WMT to generate sub-word level dense representations for the sentence. The representation for each n-gram is now defined to be the mean of the representations of all its constituent sub-words. We use the INLINEFORM0 distance of n-gram representations as the retrieval criterion. Note that we use a sub-word level decomposition of sentences for dense retrieval, as compared to word-level for IDF based retrieval (i.e., n-grams are composed of sub-words instead of words).
Following the approach described for IDF based n-gram retrieval, we use multiple values of INLINEFORM0 , and remove duplicate neighbors while creating the retrieved set.
NMT with Context Retrieval
To incorporate the retrieved neighbors, INLINEFORM0 , within the NMT model, we first encode them using Transformer layers, as described in subsection UID12 . This encoded memory is then used within the decoder via an attention mechanism, as described in subsection UID15 .
We now describe how each retrieved translation pair, INLINEFORM0 , is encoded. This architecture is illustrated in Figure FIGREF9 .
We first encode the retrieved source, INLINEFORM0 , in a Transformer layer. Apart from self-attention, we incorporate information from the encoder representation of the current source, INLINEFORM1 , using decoder style cross-attention.
The retrieved target, INLINEFORM0 , is encoded in a similar manner, attending the encoded representation of INLINEFORM1 generated in the previous step.
The encoded representations for all targets, INLINEFORM0 , are then concatenated along the time axis to form the Conditional Source Target Memory (CSTM).
We use gated multi-source attention to combine the context from the source encoder representations and the CSTM. This is similar to the gated attention employed by BIBREF17 . We use a Transformer based decoder that attends to both, the encoder outputs and the CSTM, in every cross-attention layer. The rest of the decoder architecture remains unchanged.
Let the context vectors obtained by applying multi-head attention to the source and memory, with query INLINEFORM0 be INLINEFORM1 and INLINEFORM2 respectively. Then the gated context vector, INLINEFORM3 , is given by, DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 is the scalar gating variable at time-step t, and INLINEFORM1 and INLINEFORM2 are learned parameters. These steps are illustrated in Figure FIGREF10 .
Data and Evaluation
We compare the performance of a standard Transformer Base model and our semi-parametric NMT approach on an English-French translation task. We create a new heterogeneous dataset, constructed from a combination of the WMT training set (36M pairs), the IWSLT bilingual corpus (237k pairs), JRC-Acquis (797k pairs) and OpenSubtitles (33M pairs). For WMT, we use newstest 13 for validation and newstest 14 for test. For IWSLT, we use a combination of the test corpora from 2012-14 for validation and test 2015 for eval. For OpenSubtitles and JRC-Acquis, we create our own splits for validation and test, since no benchmark split is publicly available. After deduping, the JRC-Acquis test and validation set contain 6574 and 5121 sentence pairs respectively. The OpenSubtitles test and validation sets contain 3975 and 3488 pairs. For multi-domain training, the validation set is a concatenation of the four individual validation sets.
All datasets are tokenized with the Moses tokenizer BIBREF25 and mixed without any sampling. We use a shared vocabulary Sentence-Piece Model BIBREF26 for sub-word tokenization, with a vocabulary size of 32000 tokens. We train each model for 1M steps, and choose the best checkpoint from the last 5 checkpoints based on validation performance. BLEU scores are computed with tokenized true-cased output and references with multi-bleu.perl from Moses.
For IDF based sentence retrieval, for each sentence in the training, dev and test corpus, we use INLINEFORM0 neighbors per example during both, training and evaluation. For the N-Gram level retrieval strategies, we used INLINEFORM1 neighbors during training, and neighbors corresponding to all n-grams during decoding. This was meant to limit memory requirements and enable the model to fit on P100s during training. We used n-gram width, INLINEFORM2 , for both IDF and dense vector based n-gram retrieval approaches. For scalability reasons, we restricted the retrieval set to the in-domain training corpus, i.e. neighbors for all train, dev and test sentences in the JRC-Acquis corpus were retrieved from the JRC-Acquis training split, and similarly for the other datasets.
Hyper-parameters and Optimization
For our baseline model we use the standard Transformer Base model BIBREF6 . For the semi-parametric model, all our hyper-parameters for attention (8 attention heads), model dimensions (512) and hidden dimensions (2048), including those used in the CSTM memory are equivalent to Transformer Base.
The Transformer baselines are trained on 16 GPUs, with the learning rate, warm-up schedule and batching scheme described in BIBREF6 . The semi-parametric models were trained on 32 GPUs with each replica split over 2 GPUs, one to train the translation model and the other for computing the CSTM. We used a conservative learning rate schedule (3, 40K) BIBREF8 to train the semi-parametric models.
We apply a dropout rate BIBREF27 of 0.1 to all inputs, residuals, attentions and ReLU connections in both models. We use Adam BIBREF28 to train all models, and apply label smoothing with an uncertainty of 0.1 BIBREF29 . In addition to the transformer layers, layer normalization BIBREF30 was applied to the output of the CSTM. All models are implemented in Tensorflow-Lingvo BIBREF31 .
Results
We compare the test performance of a multi-domain Transformer Base and our semi-parametric model using dense vector based n-gram retrieval and CSTM in Table TABREF21 . Apart from significantly improving performance by more than 10 BLEU points on JRC-Acquis, 2-3 BLEU on OpenSubtitles and IWSLT, we notice a moderate gain of 0.5 BLEU points on WMT 14.
Comparison of retrieval strategies
We compare the performance of all 3 retrieval strategies in Table TABREF21 . The semi-parametric model with sentence level retrieval out-performs the seq2seq model by a huge margin on JRC-Acquis and OpenSubtitles. A sample from the JRC-Acquis dataset where the semi-parametric approach improves significantly over the neural approach is included in Table TABREF22 . We notice that there is a lot of overlap between the source sentence and the retrieved source, resulting in the semi-parametric model copying large chunks from the retrieved target. However, its performance is noticeably worse on WMT and IWSLT. Based on a manual inspection of the retrieved candidates, we attribute these losses to retrieval failures. For broad domain datasets like WMT and IWSLT sentence retrieval fails to find good candidates.
Switching to n-gram level retrieval brings the WMT performance close to the seq2seq approach, and IWSLT performance to 2 BLEU points above the baseline model. Representative examples from IWSLT and WMT where n-gram retrieval improves over sentence level retrieval can be seen in Tables TABREF24 and TABREF25 . Despite the majority of the retrieved neighbor having nothing in common with the source sentence, n-gram retrieval is able to find neighbors that contain local overlaps.
Using dense n-gram retrieval allows us to move beyond lexical overlap and retrieve semantically similar n-grams even when the actual tokens are different. As a result, dense n-gram retrieval improves performance over all our models on all 4 datasets. An illustrative example from WMT is included in Table TABREF26 .
Memory Ablation Experiments
We report the performance of the various memory ablations in Table TABREF27 . We first remove the retrieved sources, INLINEFORM0 , from the CSTM, resulting in an architecture where the encoding of a retrieved target, INLINEFORM1 , only incorporates information from the source INLINEFORM2 , represented by the row CTM in the table. This results in a clear drop in performance on all datasets. We ablate further by removing the attention to the original source INLINEFORM3 , resulting in a slightly smaller drop in performance (represented by TM). These experiments indicate that incorporating context from the sources significantly contributes to performance, by allowing the model to distinguish between relevant context and noise.
Non-Parametric Adaptation
Using a semi-parametric formulation for MT opens up the possibility of non-parametric adaptation. The biggest advantage of this approach is the possibility of training a single massively customizable model which can be adapted to any new dataset or document at inference time, by just updating the retrieval dataset.
We evaluate our model's performance on non-parametric adaptation and compare it against a fully fine-tuned model. In this setting, we train a baseline model and a dense n-gram based semi-parametric model on the WMT training corpus. We only retrieve and train on examples from the WMT corpus during training. We use the same hyper-parameters and training approaches used for the multi-domain experiments, as in Section SECREF3 .
The baseline model is then fine-tuned independently on JRC-Acquis, OpenSubtitles and IWSLT. The semi-parametric model is adapted non-parametrically to these three datasets, without any parameter updates. Adaptation is achieved via the retrieval mechanism - while evaluating, we retrieve similar examples from their respective training datasets. To quantify headroom, we also fine-tune our semi-parametric model on each of these datasets.
The results for non-parametric adaptation experiments are documented in Table TABREF30 . We notice that the non-parametric adaptation strategy significantly out-performs the base model on all 4 datasets. More importantly, we find that our approach is capable of adapting to both JRC-Acquis and OpenSubtitles via just the retrieval apparatus, and out-performs the fully fine-tuned model, indicating that non-parametric adaptation might be a reasonable approach when adapting to many narrow domains or documents.
In-domain fine-tuning on top of non-parametric adaptation further improves by 2 BLEU points on all datasets, increasing the gap even further with the seq2seq adapted models.
Related Work
Tools incorporating information from individual translation pairs, or translation memories BIBREF32 , BIBREF33 , have been widely utilized by human translators in the industry. There have been a few efforts attempting to combine non-parametric methods with NMT BIBREF15 , BIBREF16 , BIBREF17 , but the key difference of our approach is the introduction of local, sub-sentence level similarity in the retrieval process, via n-gram level retrieval. Combined with our architectural improvements, motivated by the target encoder and gated attention from BIBREF17 and the extended transformer model from BIBREF34 , our semi-parametric NMT model is able to out-perform purely neural models in broad multi-domain settings.
Some works have proposed using phrase tables or the outputs of Phrase based MT within NMT BIBREF19 , BIBREF35 , BIBREF36 . While this reduces the noise present within the retrieved translation pairs, it requires training and maintaining a separate SMT system which might introduce errors of its own.
Another class of methods requires fine-tuning the entire NMT model to every instance at inference time, using retrieved examples BIBREF37 , BIBREF38 , but these approaches require running expensive gradient descent steps before every translation.
Beyond NMT, there have been a few other attempts to incorporate non-parametric approaches into neural generative models BIBREF39 , BIBREF40 , BIBREF41 . This strong trend towards combining neural generative models with non-parametric methods is an attempt to counter the weaknesses of neural networks, especially their failure to remember information from individual training instances and the diversity problem of seq2seq models BIBREF42 , BIBREF43 .
While our approach relies purely on retrieval from the training corpus, there has been quite a lot of work, especially on Question Answering, that attempts to find additional signals to perform the supervised task in the presence of external knowledge sources BIBREF44 , BIBREF45 . Retrieving information from unsupervised corpora by utilizing multilingual representations BIBREF46 might be another interesting extension of this work.
Conclusions and Future Work
We make two major technical contributions in this work which enable us to improve the quality of semi-parametric NMT on broad domain datasets. First, we propose using n-gram retrieval, with standard Inverse Document Frequency similarity and with dense vector representations, that takes into account local sentence similarities that are critical to translation. As a result we are able to retrieve useful candidates even for broad domain tasks with little train-test overlap. Second, we propose a novel architecture to encode retrieved source-target pairs, allowing the model to distinguish useful information from noise by encoding the retrieved targets in context of the current translation task.
We demonstrate, for the first time, that semi-parametric methods can beat neural models by significant margins on multi-domain Machine Translation. By successfully training semi-parametric neural models on a broad domain dataset (WMT), we also open the door for non-parametric adaptation, showing huge improvements on new domains without any parameter updates.
While we constrain this work to retrieved context, our architecture can be utilized to incorporate information from other sources of context, including documents, bilingual dictionaries etc. Using dense representations for retrieval also allows extending semi-parametric neural methods to other input modalities, including images and speech.
With this work, we hope to motivate further investigation into semi-parametric neural models for and beyond Neural Machine Translation.
Acknowledgments
We would like to thank Naveen Arivazhagan, Macduff Hughes, Dmitry Lepikhin, Mia Chen, Yuan Cao, Ciprian Chelba, Zhifeng Chen, Melvin Johnson and other members of the Google Brain and Google Translate teams for their useful inputs and discussions. We would also like to thank the entire Lingvo development team for their foundational contributions to this project. | standard Transformer Base model |
dba3d05c495e2c8ca476139e78f65059db2eb72d | dba3d05c495e2c8ca476139e78f65059db2eb72d_0 | Q: Does their combination of a non-parametric retrieval and neural network get trained end-to-end?
Text: Introduction
Over the last few years, neural sequence to sequence models BIBREF0 , BIBREF1 , BIBREF2 have revolutionized the field of machine translation by significantly improving translation quality over their phrase based counterparts BIBREF3 , BIBREF4 , BIBREF5 . With more gains arising from continued research on new neural network architectures and accompanying training techniques BIBREF6 , BIBREF7 , BIBREF8 , NMT researchers, both in industry and academia, have doubled down on their ability to train high capacity models on large corpora with gradient based optimization.
However, despite huge improvements in overall translation quality NMT has shown some glaring weaknesses, including idiom processing, and rare word or phrase translation BIBREF9 , BIBREF10 , BIBREF11 - tasks that should be easy if the model could retain learned information from individual training examples. NMT has also been shown to perform poorly when dealing with multi-domain data BIBREF12 . This `catastrophic forgetting' problem has been well-studied in traditional neural network literature, caused by parameter shift during the training process BIBREF13 , BIBREF14 . Non-parametric methods, on the other hand, are resistant to forgetting but are prone to over-fitting due to their reliance on individual training examples. We focus on a non-parametric extension to NMT, hoping to combine the generalization ability of neural networks with the eidetic memory of non-parametric methods. Given a translation query, we rely on an external retrieval mechanism to find similar source-target instances in the training corpus, which are then utilized by the model.
There has been some work on semi-parametric NMT BIBREF15 , BIBREF16 , BIBREF17 , but its effectiveness has been confined to narrow domain datasets. Existing approaches have relied on sentence level similarity metrics for retrieval, which works well for domains with high train-test overlap, but fails to retrieve useful candidates for broad domains. Even if we could find training instances with overlapping phrases it's likely that the information in most retrieved source-target pairs is noise for the purpose of translating the current query.
To retrieve useful candidates when sentence similarity is low, we use n-gram retrieval instead of sentence retrieval. This results in neighbors which have high local overlap with the source sentence, even if they are significantly different in terms of overall sentence similarity. This is intuitively similar to utilizing information from a phrase table BIBREF18 within NMT BIBREF19 , without discarding the global context that is lost when a phrase table is constructed. We also propose another simple extension using dense vectors for n-gram retrieval, which allows us to exploit similarities beyond lexical overlap.
To effectively extract the signal from the noisy retrieved neighbors, we develop an extension of the approach proposed in BIBREF17 . While BIBREF17 encode the retrieved targets without any context, we incorporate information from the current and retrieved sources while encoding the retrieved target, in order to distinguish useful information from noise.
We evaluate our semi-parametric NMT approach on two tasks.
Semi-parametric NMT
Standard approaches for Neural Machine Translation rely on seq2seq architectures BIBREF0 , BIBREF1 , where given a source sequence $X$ and a target sequence $Y$ , the goal is to model the probability distribution $P(Y|X)$ .
Semi-parametric NMT BIBREF19 , BIBREF15 approaches this learning problem with a different formulation, by modeling $P(Y|X, \Phi_X)$ instead, where $\Phi_X$ is the set of sentence pairs where the source sentence is a neighbor of $X$ , retrieved from the training corpus using some similarity metric. This relies on a two step approach - the retrieval stage finds training instances, $\Phi_X$ , similar to the source sentence $X$ , and the translation stage generates the target sequence $Y$ given $X$ and $\Phi_X$ . We follow this setup, proposing improvements to both stages in order to enhance the applicability of semi-parametric NMT to more general translation tasks.
Retrieval Approaches
Existing approaches have proposed using off the shelf search engines for the retrieval stage. However, our objective differs from traditional information retrieval, since the goal of retrieval in semi-parametric NMT is to find neighbors which might improve translation performance, which might not correlate with maximizing sentence similarity.
Our baseline strategy relies on a sentence level similarity score, similar to those used for standard information retrieval tasks BIBREF24 . We compare this against finer-grained n-gram retrieval using the same similarity metric. We also propose a dense vector based n-gram retrieval strategy, using representations extracted from a pre-trained NMT model.
Our baseline approach relies on a simple inverse document frequency (IDF) based similarity score. We define the IDF score of any token, $t$ , as $\mathrm {idf}(t)=\log (N/n_t)$ , where $N$ is the number of sentence pairs in training corpus and $n_t$ is the number of sentences $t$ occurs in. Let any two sentence pairs in the corpus be $(X_i, Y_i)$ and $(X_j, Y_j)$ . Then we define the similarity between $X_i$ and $X_j$ by, $$\mathrm {sim}(X_i, X_j)=\frac{2\sum _{t\in X_i\cap X_j}\mathrm {idf}(t)}{\sum _{t\in X_i}\mathrm {idf}(t)+\sum _{t\in X_j}\mathrm {idf}(t)}$$
For every sentence in the training, dev and test corpora, we find the INLINEFORM0 most similar training sentence pairs and provide them as context to NMT.
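To make the retrieval criterion concrete, the following is a minimal Python sketch of IDF-based sentence retrieval (not the authors' code). The Dice-style normalization in `similarity` follows the reconstruction above and should be treated as an assumption, as should the default of 4 neighbors; the corpus format and all names are illustrative.

```python
import math
from collections import Counter

def idf_scores(corpus_sources):
    """corpus_sources: list of tokenized source sentences (lists of words)."""
    n = len(corpus_sources)
    doc_freq = Counter()
    for tokens in corpus_sources:
        doc_freq.update(set(tokens))
    # idf(t) = log(N / n_t), where n_t is the number of sentences containing t
    return {t: math.log(n / df) for t, df in doc_freq.items()}

def similarity(x1, x2, idf):
    """IDF-weighted overlap between two tokenized sentences (assumed form)."""
    s1, s2 = set(x1), set(x2)
    shared = sum(idf.get(t, 0.0) for t in s1 & s2)
    total = sum(idf.get(t, 0.0) for t in s1) + sum(idf.get(t, 0.0) for t in s2)
    return 2.0 * shared / total if total > 0 else 0.0

def retrieve_sentences(query, corpus, idf, n_neighbors=4):
    """Return the n_neighbors most similar (source, target) training pairs."""
    scored = [(similarity(query, src, idf), src, tgt) for src, tgt in corpus]
    scored.sort(key=lambda item: item[0], reverse=True)
    return scored[:n_neighbors]

if __name__ == "__main__":
    corpus = [("the cat sat on the mat".split(), "le chat ...".split()),
              ("a dog ran in the park".split(), "un chien ...".split())]
    idf = idf_scores([src for src, _ in corpus])
    print(retrieve_sentences("the cat ran".split(), corpus, idf))
```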
Motivated by phrase based SMT, we retrieve neighbors which have high local, sub-sentence level overlap with the source sentence. We adapt our approach to retrieve n-grams instead of sentences. We note that the similarity metric defined above for sentences is equally applicable for n-gram retrieval.
Let INLINEFORM0 be a sentence. Then the set of all possible n-grams of X, for a given INLINEFORM1 , can be defined as INLINEFORM2 (also including padding at the end). To reduce the number of n-grams used to represent every sentence, we define the reduced set of n-grams for X to be INLINEFORM3 .
We represent every sentence by its reduced n-gram set. For every n-gram in the reduced set of $X$ , we find the closest n-gram in the training set using the IDF similarity defined above. For each retrieved n-gram we find the corresponding sentence (in case an n-gram is present in multiple sentences, we choose one randomly). The set of neighbors of $X$ is then the set of all sentences in the training corpus that contain an n-gram that maximizes the n-gram similarity with any n-gram in the reduced set of $X$ .
To capture phrases of different lengths we use multiple n-gram widths, INLINEFORM0 . In case a sentence has already been added to the retrieved set, we find the next most similar sentence to avoid having duplicates. The number of neighbors retrieved for each source sentence is proportional to its length.
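A rough sketch of the n-gram retrieval procedure is shown below, reusing the `idf_scores`/`similarity` helpers from the previous sketch. The stride used to build the reduced n-gram set, the widths (2, 4, 6) and the brute-force scan over the corpus are illustrative assumptions, not the paper's exact choices.

```python
def ngrams(tokens, n, stride):
    """Padded n-grams of a sentence, sub-sampled with a stride (a rough
    approximation of the 'reduced set' of n-grams described above)."""
    padded = tokens + ["<pad>"] * (n - 1)
    return [tuple(padded[i:i + n]) for i in range(0, len(tokens), stride)]

def retrieve_by_ngrams(query, corpus, idf, widths=(2, 4, 6)):
    """For every query n-gram, find the training sentence containing the
    most similar n-gram; the union of those sentences is the neighbor set."""
    neighbors, seen = [], set()
    for n in widths:
        for q_gram in ngrams(query, n, stride=max(1, n // 2)):
            best, best_score = None, -1.0
            for idx, (src, tgt) in enumerate(corpus):
                for s_gram in ngrams(src, n, stride=1):
                    score = similarity(list(q_gram), list(s_gram), idf)
                    # skip sentences already retrieved, so duplicates fall
                    # through to the next most similar candidate
                    if score > best_score and idx not in seen:
                        best, best_score = idx, score
            if best is not None:
                seen.add(best)
                neighbors.append(corpus[best])
    return neighbors
```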
We also extend our n-gram retrieval strategy with dense vector based n-gram representations. The objective behind using a dense vector based approach is to incorporate information relevant to the translation task in the retrieval stage. We use a pre-trained Transformer Base BIBREF6 encoder trained on WMT to generate sub-word level dense representations for the sentence. The representation for each n-gram is now defined to be the mean of the representations of all its constituent sub-words. We use the INLINEFORM0 distance of n-gram representations as the retrieval criterion. Note that we use a sub-word level decomposition of sentences for dense retrieval, as compared to word-level for IDF based retrieval (i.e., n-grams are composed of sub-words instead of words).
Following the approach described for IDF based n-gram retrieval, we use multiple values of INLINEFORM0 , and remove duplicate neighbors while creating the retrieved set.
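The dense variant can be sketched as below. The `subword_states` function is only a stand-in for the pre-trained Transformer Base encoder, and Euclidean distance is assumed for the (elided) retrieval metric; both are assumptions made for illustration.

```python
import numpy as np

def subword_states(subwords, dim=512, _cache={}):
    """Stand-in for a pre-trained Transformer encoder: returns one dense
    vector per sub-word. The mutable-default cache keeps each sub-word's
    random vector fixed across calls; replace with real encoder outputs."""
    return np.stack([_cache.setdefault(s, np.random.randn(dim)) for s in subwords])

def ngram_vectors(subwords, n, stride):
    """Mean-pool sub-word states into one dense vector per n-gram."""
    states = subword_states(subwords)
    vecs = []
    for i in range(0, len(subwords), stride):
        vecs.append(states[i:i + n].mean(axis=0))
    return np.stack(vecs)

def dense_retrieve(query_sw, corpus, n=4):
    """Return the index of the training pair whose best n-gram is closest
    (Euclidean distance, an assumption) to any query n-gram."""
    q = ngram_vectors(query_sw, n, stride=max(1, n // 2))
    best_idx, best_dist = None, np.inf
    for idx, (src_sw, _tgt) in enumerate(corpus):
        c = ngram_vectors(src_sw, n, stride=1)
        dists = np.linalg.norm(q[:, None, :] - c[None, :, :], axis=-1)
        if dists.min() < best_dist:
            best_idx, best_dist = idx, dists.min()
    return best_idx

if __name__ == "__main__":
    corpus = [("the cat".split(), None), ("a dog".split(), None)]
    print(dense_retrieve("the cat sat".split(), corpus, n=2))
```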
NMT with Context Retrieval
To incorporate the retrieved neighbors, INLINEFORM0 , within the NMT model, we first encode them using Transformer layers, as described in subsection UID12 . This encoded memory is then used within the decoder via an attention mechanism, as described in subsection UID15 .
We now describe how each retrieved translation pair, $(X_r, Y_r)$ , is encoded. This architecture is illustrated in Figure FIGREF9 .
We first encode the retrieved source, $X_r$ , in a Transformer layer. Apart from self-attention, we incorporate information from the encoder representation of the current source, $X$ , using decoder style cross-attention.
The retrieved target, $Y_r$ , is encoded in a similar manner, attending the encoded representation of $X_r$ generated in the previous step.
The encoded representations for all targets, $Y_r$ , are then concatenated along the time axis to form the Conditional Source Target Memory (CSTM).
We use gated multi-source attention to combine the context from the source encoder representations and the CSTM. This is similar to the gated attention employed by BIBREF17 . We use a Transformer based decoder that attends to both, the encoder outputs and the CSTM, in every cross-attention layer. The rest of the decoder architecture remains unchanged.
Let the context vectors obtained by applying multi-head attention to the source and memory, with query INLINEFORM0 be INLINEFORM1 and INLINEFORM2 respectively. Then the gated context vector, INLINEFORM3 , is given by, DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 is the scalar gating variable at time-step t, and INLINEFORM1 and INLINEFORM2 are learned parameters. These steps are illustrated in Figure FIGREF10 .
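A numpy sketch of the gated combination is given below. Since the display equations are elided in this dump, the exact form of the gate (here a sigmoid of a linear function of the decoder query, with parameters `w_g`, `b_g`) is an assumption, and single-head dot-product attention stands in for the multi-head version.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, keys, values):
    """Single-head scaled dot-product attention for one query vector."""
    scores = keys @ query / np.sqrt(query.shape[-1])
    return softmax(scores) @ values

def gated_context(query, enc_states, cstm_states, w_g, b_g):
    """Combine source-encoder context and CSTM context with a scalar gate
    (assumed form): g_t = sigmoid(w_g . q_t + b_g);
    c_t = g_t * c_src + (1 - g_t) * c_mem."""
    c_src = attend(query, enc_states, enc_states)
    c_mem = attend(query, cstm_states, cstm_states)
    g = 1.0 / (1.0 + np.exp(-(w_g @ query + b_g)))
    return g * c_src + (1.0 - g) * c_mem

if __name__ == "__main__":
    d = 8
    rng = np.random.default_rng(0)
    q = rng.normal(size=d)
    enc = rng.normal(size=(5, d))      # encoder outputs for the source
    cstm = rng.normal(size=(12, d))    # concatenated retrieved-target states
    print(gated_context(q, enc, cstm, rng.normal(size=d), 0.0).shape)
```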
Data and Evaluation
We compare the performance of a standard Transformer Base model and our semi-parametric NMT approach on an English-French translation task. We create a new heterogeneous dataset, constructed from a combination of the WMT training set (36M pairs), the IWSLT bilingual corpus (237k pairs), JRC-Acquis (797k pairs) and OpenSubtitles (33M pairs). For WMT, we use newstest 13 for validation and newstest 14 for test. For IWSLT, we use a combination of the test corpora from 2012-14 for validation and test 2015 for eval. For OpenSubtitles and JRC-Acquis, we create our own splits for validation and test, since no benchmark split is publicly available. After deduping, the JRC-Acquis test and validation set contain 6574 and 5121 sentence pairs respectively. The OpenSubtitles test and validation sets contain 3975 and 3488 pairs. For multi-domain training, the validation set is a concatenation of the four individual validation sets.
All datasets are tokenized with the Moses tokenizer BIBREF25 and mixed without any sampling. We use a shared vocabulary Sentence-Piece Model BIBREF26 for sub-word tokenization, with a vocabulary size of 32000 tokens. We train each model for 1M steps, and choose the best checkpoint from the last 5 checkpoints based on validation performance. BLEU scores are computed with tokenized true-cased output and references with multi-bleu.perl from Moses.
For IDF based sentence retrieval, for each sentence in the training, dev and test corpus, we use INLINEFORM0 neighbors per example during both, training and evaluation. For the N-Gram level retrieval strategies, we used INLINEFORM1 neighbors during training, and neighbors corresponding to all n-grams during decoding. This was meant to limit memory requirements and enable the model to fit on P100s during training. We used n-gram width, INLINEFORM2 , for both IDF and dense vector based n-gram retrieval approaches. For scalability reasons, we restricted the retrieval set to the in-domain training corpus, i.e. neighbors for all train, dev and test sentences in the JRC-Acquis corpus were retrieved from the JRC-Acquis training split, and similarly for the other datasets.
Hyper-parameters and Optimization
For our baseline model we use the standard Transformer Base model BIBREF6 . For the semi-parametric model, all our hyper-parameters for attention (8 attention heads), model dimensions (512) and hidden dimensions (2048), including those used in the CSTM memory are equivalent to Transformer Base.
The Transformer baselines are trained on 16 GPUs, with the learning rate, warm-up schedule and batching scheme described in BIBREF6 . The semi-parametric models were trained on 32 GPUs with each replica split over 2 GPUs, one to train the translation model and the other for computing the CSTM. We used a conservative learning rate schedule (3, 40K) BIBREF8 to train the semi-parametric models.
We apply a dropout rate BIBREF27 of 0.1 to all inputs, residuals, attentions and ReLU connections in both models. We use Adam BIBREF28 to train all models, and apply label smoothing with an uncertainty of 0.1 BIBREF29 . In addition to the transformer layers, layer normalization BIBREF30 was applied to the output of the CSTM. All models are implemented in Tensorflow-Lingvo BIBREF31 .
Results
We compare the test performance of a multi-domain Transformer Base and our semi-parametric model using dense vector based n-gram retrieval and CSTM in Table TABREF21 . Apart from significantly improving performance by more than 10 BLEU points on JRC-Acquis, 2-3 BLEU on OpenSubtitles and IWSLT, we notice a moderate gain of 0.5 BLEU points on WMT 14.
Comparison of retrieval strategies
We compare the performance of all 3 retrieval strategies in Table TABREF21 . The semi-parametric model with sentence level retrieval out-performs the seq2seq model by a huge margin on JRC-Acquis and OpenSubtitles. A sample from the JRC-Acquis dataset where the semi-parametric approach improves significantly over the neural approach is included in Table TABREF22 . We notice that there is a lot of overlap between the source sentence and the retrieved source, resulting in the semi-parametric model copying large chunks from the retrieved target. However, its performance is noticeably worse on WMT and IWSLT. Based on a manual inspection of the retrieved candidates, we attribute these losses to retrieval failures. For broad domain datasets like WMT and IWSLT sentence retrieval fails to find good candidates.
Switching to n-gram level retrieval brings the WMT performance close to the seq2seq approach, and IWSLT performance to 2 BLEU points above the baseline model. Representative examples from IWSLT and WMT where n-gram retrieval improves over sentence level retrieval can be seen in Tables TABREF24 and TABREF25 . Even though the majority of each retrieved neighbor has nothing in common with the source sentence, n-gram retrieval is able to find neighbors that contain local overlaps.
Using dense n-gram retrieval allows us to move beyond lexical overlap and retrieve semantically similar n-grams even when the actual tokens are different. As a result, dense n-gram retrieval improves performance over all our models on all 4 datasets. An illustrative example from WMT is included in Table TABREF26 .
Memory Ablation Experiments
We report the performance of the various memory ablations in Table TABREF27 . We first remove the retrieved sources, INLINEFORM0 , from the CSTM, resulting in an architecture where the encoding of a retrieved target, INLINEFORM1 , only incorporates information from the source INLINEFORM2 , represented by the row CTM in the table. This results in a clear drop in performance on all datasets. We ablate further by removing the attention to the original source INLINEFORM3 , resulting in a slightly smaller drop in performance (represented by TM). These experiments indicate that incorporating context from the sources significantly contributes to performance, by allowing the model to distinguish between relevant context and noise.
Non-Parametric Adaptation
Using a semi-parametric formulation for MT opens up the possibility of non-parametric adaptation. The biggest advantage of this approach is the possibility of training a single massively customizable model which can be adapted to any new dataset or document at inference time, by just updating the retrieval dataset.
We evaluate our model's performance on non-parametric adaptation and compare it against a fully fine-tuned model. In this setting, we train a baseline model and a dense n-gram based semi-parametric model on the WMT training corpus. We only retrieve and train on examples from the WMT corpus during training. We use the same hyper-parameters and training approaches used for the multi-domain experiments, as in Section SECREF3 .
The baseline model is then fine-tuned independently on JRC-Acquis, OpenSubtitles and IWSLT. The semi-parametric model is adapted non-parametrically to these three datasets, without any parameter updates. Adaptation is achieved via the retrieval mechanism - while evaluating, we retrieve similar examples from their respective training datasets. To quantify headroom, we also fine-tune our semi-parametric model on each of these datasets.
The results for non-parametric adaptation experiments are documented in Table TABREF30 . We notice that the non-parametric adaptation strategy significantly out-performs the base model on all 4 datasets. More importantly, we find that our approach is capable of adapting to both JRC-Acquis and OpenSubtitles via just the retrieval apparatus, and out-performs the fully fine-tuned model, indicating that non-parametric adaptation might be a reasonable approach when adapting to a lot of narrow domains or documents.
In-domain fine-tuning on top of non-parametric adaptation further improves by 2 BLEU points on all datasets, increasing the gap even further with the seq2seq adapted models.
Related Work
Tools incorporating information from individual translation pairs, or translation memories BIBREF32 , BIBREF33 , have been widely utilized by human translators in the industry. There have been a few efforts attempting to combine non-parametric methods with NMT BIBREF15 , BIBREF16 , BIBREF17 , but the key difference of our approach is the introduction of local, sub-sentence level similarity in the retrieval process, via n-gram level retrieval. Combined with our architectural improvements, motivated by the target encoder and gated attention from BIBREF17 and the extended transformer model from BIBREF34 , our semi-parametric NMT model is able to out-perform purely neural models in broad multi-domain settings.
Some works have proposed using phrase tables or the outputs of Phrase based MT within NMT BIBREF19 , BIBREF35 , BIBREF36 . While this reduces the noise present within the retrieved translation pairs, it requires training and maintaining a separate SMT system which might introduce errors of its own.
Another class of methods requires fine-tuning the entire NMT model to every instance at inference time, using retrieved examples BIBREF37 , BIBREF38 , but these approaches require running expensive gradient descent steps before every translation.
Beyond NMT, there have been a few other attempts to incorporate non-parametric approaches into neural generative models BIBREF39 , BIBREF40 , BIBREF41 . This strong trend towards combining neural generative models with non-parametric methods is an attempt to counter the weaknesses of neural networks, especially their failure to remember information from individual training instances and the diversity problem of seq2seq models BIBREF42 , BIBREF43 .
While our approach relies purely on retrieval from the training corpus, there has been quite a lot of work, especially on Question Answering, that attempts to find additional signals to perform the supervised task in the presence of external knowledge sources BIBREF44 , BIBREF45 . Retrieving information from unsupervised corpora by utilizing multilingual representations BIBREF46 might be another interesting extension of this work.
Conclusions and Future Work
We make two major technical contributions in this work which enable us to improve the quality of semi-parametric NMT on broad domain datasets. First, we propose using n-gram retrieval, with standard Inverse Document Frequency similarity and with dense vector representations, that takes into account local sentence similarities that are critical to translation. As a result we are able to retrieve useful candidates even for broad domain tasks with little train-test overlap. Second, we propose a novel architecture to encode retrieved source-target pairs, allowing the model to distinguish useful information from noise by encoding the retrieved targets in context of the current translation task.
We demonstrate, for the first time, that semi-parametric methods can beat neural models by significant margins on multi-domain Machine Translation. By successfully training semi-parametric neural models on a broad domain dataset (WMT), we also open the door for non-parametric adaptation, showing huge improvements on new domains without any parameter updates.
While we constrain this work to retrieved context, our architecture can be utilized to incorporate information from other sources of context, including documents, bilingual dictionaries etc. Using dense representations for retrieval also allows extending semi-parametric neural methods to other input modalities, including images and speech.
With this work, we hope to motivate further investigation into semi-parametric neural models for and beyond Neural Machine Translation.
Acknowledgments
We would like to thank Naveen Arivazhagan, Macduff Hughes, Dmitry Lepikhin, Mia Chen, Yuan Cao, Ciprian Chelba, Zhifeng Chen, Melvin Johnson and other members of the Google Brain and Google Translate teams for their useful inputs and discussions. We would also like to thank the entire Lingvo development team for their foundational contributions to this project. | Yes |
0062ad4aed09a57d0ece6aa4b873f4a4bf65d165 | 0062ad4aed09a57d0ece6aa4b873f4a4bf65d165_0 | Q: Which similarity measure do they use in their n-gram retrieval approach?
Text: Introduction
Over the last few years, neural sequence to sequence models BIBREF0 , BIBREF1 , BIBREF2 have revolutionized the field of machine translation by significantly improving translation quality over their phrase based counterparts BIBREF3 , BIBREF4 , BIBREF5 . With more gains arising from continued research on new neural network architectures and accompanying training techniques BIBREF6 , BIBREF7 , BIBREF8 , NMT researchers, both in industry and academia, have doubled down on their ability to train high capacity models on large corpora with gradient based optimization.
However, despite huge improvements in overall translation quality NMT has shown some glaring weaknesses, including idiom processing, and rare word or phrase translation BIBREF9 , BIBREF10 , BIBREF11 - tasks that should be easy if the model could retain learned information from individual training examples. NMT has also been shown to perform poorly when dealing with multi-domain data BIBREF12 . This `catastrophic forgetting' problem has been well-studied in traditional neural network literature, caused by parameter shift during the training process BIBREF13 , BIBREF14 . Non-parametric methods, on the other hand, are resistant to forgetting but are prone to over-fitting due to their reliance on individual training examples. We focus on a non-parametric extension to NMT, hoping to combine the generalization ability of neural networks with the eidetic memory of non-parametric methods. Given a translation query, we rely on an external retrieval mechanism to find similar source-target instances in the training corpus, which are then utilized by the model.
There has been some work on semi-parametric NMT BIBREF15 , BIBREF16 , BIBREF17 , but its effectiveness has been confined to narrow domain datasets. Existing approaches have relied on sentence level similarity metrics for retrieval, which works well for domains with high train-test overlap, but fails to retrieve useful candidates for broad domains. Even if we could find training instances with overlapping phrases it's likely that the information in most retrieved source-target pairs is noise for the purpose of translating the current query.
To retrieve useful candidates when sentence similarity is low, we use n-gram retrieval instead of sentence retrieval. This results in neighbors which have high local overlap with the source sentence, even if they are significantly different in terms of overall sentence similarity. This is intuitively similar to utilizing information from a phrase table BIBREF18 within NMT BIBREF19 , without losing the global context lost when constructing the phrase table. We also propose another simple extension using dense vectors for n-gram retrieval which allows us to exploit similarities beyond lexical overlap.
To effectively extract the signal from the noisy retrieved neighbors, we develop an extension of the approach proposed in BIBREF17 . While BIBREF17 encode the retrieved targets without any context, we incorporate information from the current and retrieved sources while encoding the retrieved target, in order to distinguish useful information from noise.
We evaluate our semi-parametric NMT approach on two tasks.
Semi-parametric NMT
Standard approaches for Neural Machine Translation rely on seq2seq architectures BIBREF0 , BIBREF1 , where given a source sequence $X$ and a target sequence $Y$ , the goal is to model the probability distribution $P(Y|X)$ .
Semi-parametric NMT BIBREF19 , BIBREF15 approaches this learning problem with a different formulation, by modeling $P(Y|X, \Phi_X)$ instead, where $\Phi_X$ is the set of sentence pairs where the source sentence is a neighbor of $X$ , retrieved from the training corpus using some similarity metric. This relies on a two step approach - the retrieval stage finds training instances, $\Phi_X$ , similar to the source sentence $X$ , and the translation stage generates the target sequence $Y$ given $X$ and $\Phi_X$ . We follow this setup, proposing improvements to both stages in order to enhance the applicability of semi-parametric NMT to more general translation tasks.
Retrieval Approaches
Existing approaches have proposed using off the shelf search engines for the retrieval stage. However, our objective differs from traditional information retrieval, since the goal of retrieval in semi-parametric NMT is to find neighbors which might improve translation performance, which might not correlate with maximizing sentence similarity.
Our baseline strategy relies on a sentence level similarity score, similar to those used for standard information retrieval tasks BIBREF24 . We compare this against finer-grained n-gram retrieval using the same similarity metric. We also propose a dense vector based n-gram retrieval strategy, using representations extracted from a pre-trained NMT model.
Our baseline approach relies on a simple inverse document frequency (IDF) based similarity score. We define the IDF score of any token, $t$ , as $\mathrm {idf}(t)=\log (N/n_t)$ , where $N$ is the number of sentence pairs in training corpus and $n_t$ is the number of sentences $t$ occurs in. Let any two sentence pairs in the corpus be $(X_i, Y_i)$ and $(X_j, Y_j)$ . Then we define the similarity between $X_i$ and $X_j$ by, $$\mathrm {sim}(X_i, X_j)=\frac{2\sum _{t\in X_i\cap X_j}\mathrm {idf}(t)}{\sum _{t\in X_i}\mathrm {idf}(t)+\sum _{t\in X_j}\mathrm {idf}(t)}$$
For every sentence in the training, dev and test corpora, we find the INLINEFORM0 most similar training sentence pairs and provide them as context to NMT.
Motivated by phrase based SMT, we retrieve neighbors which have high local, sub-sentence level overlap with the source sentence. We adapt our approach to retrieve n-grams instead of sentences. We note that the similarity metric defined above for sentences is equally applicable for n-gram retrieval.
Let INLINEFORM0 be a sentence. Then the set of all possible n-grams of X, for a given INLINEFORM1 , can be defined as INLINEFORM2 (also including padding at the end). To reduce the number of n-grams used to represent every sentence, we define the reduced set of n-grams for X to be INLINEFORM3 .
We represent every sentence by its reduced n-gram set. For every n-gram in the reduced set of $X$ , we find the closest n-gram in the training set using the IDF similarity defined above. For each retrieved n-gram we find the corresponding sentence (in case an n-gram is present in multiple sentences, we choose one randomly). The set of neighbors of $X$ is then the set of all sentences in the training corpus that contain an n-gram that maximizes the n-gram similarity with any n-gram in the reduced set of $X$ .
To capture phrases of different lengths we use multiple n-gram widths, INLINEFORM0 . In case a sentence has already been added to the retrieved set, we find the next most similar sentence to avoid having duplicates. The number of neighbors retrieved for each source sentence is proportional to its length.
We also extend our n-gram retrieval strategy with dense vector based n-gram representations. The objective behind using a dense vector based approach is to incorporate information relevant to the translation task in the retrieval stage. We use a pre-trained Transformer Base BIBREF6 encoder trained on WMT to generate sub-word level dense representations for the sentence. The representation for each n-gram is now defined to be the mean of the representations of all its constituent sub-words. We use the INLINEFORM0 distance of n-gram representations as the retrieval criterion. Note that we use a sub-word level decomposition of sentences for dense retrieval, as compared to word-level for IDF based retrieval (i.e., n-grams are composed of sub-words instead of words).
Following the approach described for IDF based n-gram retrieval, we use multiple values of INLINEFORM0 , and remove duplicate neighbors while creating the retrieved set.
NMT with Context Retrieval
To incorporate the retrieved neighbors, INLINEFORM0 , within the NMT model, we first encode them using Transformer layers, as described in subsection UID12 . This encoded memory is then used within the decoder via an attention mechanism, as described in subsection UID15 .
We now describe how each retrieved translation pair, $(X_r, Y_r)$ , is encoded. This architecture is illustrated in Figure FIGREF9 .
We first encode the retrieved source, $X_r$ , in a Transformer layer. Apart from self-attention, we incorporate information from the encoder representation of the current source, $X$ , using decoder style cross-attention.
The retrieved target, $Y_r$ , is encoded in a similar manner, attending the encoded representation of $X_r$ generated in the previous step.
The encoded representations for all targets, $Y_r$ , are then concatenated along the time axis to form the Conditional Source Target Memory (CSTM).
We use gated multi-source attention to combine the context from the source encoder representations and the CSTM. This is similar to the gated attention employed by BIBREF17 . We use a Transformer based decoder that attends to both, the encoder outputs and the CSTM, in every cross-attention layer. The rest of the decoder architecture remains unchanged.
Let the context vectors obtained by applying multi-head attention to the source and memory, with query INLINEFORM0 be INLINEFORM1 and INLINEFORM2 respectively. Then the gated context vector, INLINEFORM3 , is given by, DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 is the scalar gating variable at time-step t, and INLINEFORM1 and INLINEFORM2 are learned parameters. These steps are illustrated in Figure FIGREF10 .
Data and Evaluation
We compare the performance of a standard Transformer Base model and our semi-parametric NMT approach on an English-French translation task. We create a new heterogeneous dataset, constructed from a combination of the WMT training set (36M pairs), the IWSLT bilingual corpus (237k pairs), JRC-Acquis (797k pairs) and OpenSubtitles (33M pairs). For WMT, we use newstest 13 for validation and newstest 14 for test. For IWSLT, we use a combination of the test corpora from 2012-14 for validation and test 2015 for eval. For OpenSubtitles and JRC-Acquis, we create our own splits for validation and test, since no benchmark split is publicly available. After deduping, the JRC-Acquis test and validation set contain 6574 and 5121 sentence pairs respectively. The OpenSubtitles test and validation sets contain 3975 and 3488 pairs. For multi-domain training, the validation set is a concatenation of the four individual validation sets.
All datasets are tokenized with the Moses tokenizer BIBREF25 and mixed without any sampling. We use a shared vocabulary Sentence-Piece Model BIBREF26 for sub-word tokenization, with a vocabulary size of 32000 tokens. We train each model for 1M steps, and choose the best checkpoint from the last 5 checkpoints based on validation performance. BLEU scores are computed with tokenized true-cased output and references with multi-bleu.perl from Moses.
For IDF based sentence retrieval, for each sentence in the training, dev and test corpus, we use INLINEFORM0 neighbors per example during both, training and evaluation. For the N-Gram level retrieval strategies, we used INLINEFORM1 neighbors during training, and neighbors corresponding to all n-grams during decoding. This was meant to limit memory requirements and enable the model to fit on P100s during training. We used n-gram width, INLINEFORM2 , for both IDF and dense vector based n-gram retrieval approaches. For scalability reasons, we restricted the retrieval set to the in-domain training corpus, i.e. neighbors for all train, dev and test sentences in the JRC-Acquis corpus were retrieved from the JRC-Acquis training split, and similarly for the other datasets.
Hyper-parameters and Optimization
For our baseline model we use the standard Transformer Base model BIBREF6 . For the semi-parametric model, all our hyper-parameters for attention (8 attention heads), model dimensions (512) and hidden dimensions (2048), including those used in the CSTM memory are equivalent to Transformer Base.
The Transformer baselines are trained on 16 GPUs, with the learning rate, warm-up schedule and batching scheme described in BIBREF6 . The semi-parametric models were trained on 32 GPUs with each replica split over 2 GPUs, one to train the translation model and the other for computing the CSTM. We used a conservative learning rate schedule (3, 40K) BIBREF8 to train the semi-parametric models.
We apply a dropout rate BIBREF27 of 0.1 to all inputs, residuals, attentions and ReLU connections in both models. We use Adam BIBREF28 to train all models, and apply label smoothing with an uncertainty of 0.1 BIBREF29 . In addition to the transformer layers, layer normalization BIBREF30 was applied to the output of the CSTM. All models are implemented in Tensorflow-Lingvo BIBREF31 .
Results
We compare the test performance of a multi-domain Transformer Base and our semi-parametric model using dense vector based n-gram retrieval and CSTM in Table TABREF21 . Apart from significantly improving performance by more than 10 BLEU points on JRC-Acquis, 2-3 BLEU on OpenSubtitles and IWSLT, we notice a moderate gain of 0.5 BLEU points on WMT 14.
Comparison of retrieval strategies
We compare the performance of all 3 retrieval strategies in Table TABREF21 . The semi-parametric model with sentence level retrieval out-performs the seq2seq model by a huge margin on JRC-Acquis and OpenSubtitles. A sample from the JRC-Acquis dataset where the semi-parametric approach improves significantly over the neural approach is included in Table TABREF22 . We notice that there is a lot of overlap between the source sentence and the retrieved source, resulting in the semi-parametric model copying large chunks from the retrieved target. However, its performance is noticeably worse on WMT and IWSLT. Based on a manual inspection of the retrieved candidates, we attribute these losses to retrieval failures. For broad domain datasets like WMT and IWSLT sentence retrieval fails to find good candidates.
Switching to n-gram level retrieval brings the WMT performance close to the seq2seq approach, and IWSLT performance to 2 BLEU points above the baseline model. Representative examples from IWSLT and WMT where n-gram retrieval improves over sentence level retrieval can be seen in Tables TABREF24 and TABREF25 . Even though the majority of each retrieved neighbor has nothing in common with the source sentence, n-gram retrieval is able to find neighbors that contain local overlaps.
Using dense n-gram retrieval allows us to move beyond lexical overlap and retrieve semantically similar n-grams even when the actual tokens are different. As a result, dense n-gram retrieval improves performance over all our models on all 4 datasets. An illustrative example from WMT is included in Table TABREF26 .
Memory Ablation Experiments
We report the performance of the various memory ablations in Table TABREF27 . We first remove the retrieved sources, INLINEFORM0 , from the CSTM, resulting in an architecture where the encoding of a retrieved target, INLINEFORM1 , only incorporates information from the source INLINEFORM2 , represented by the row CTM in the table. This results in a clear drop in performance on all datasets. We ablate further by removing the attention to the original source INLINEFORM3 , resulting in a slightly smaller drop in performance (represented by TM). These experiments indicate that incorporating context from the sources significantly contributes to performance, by allowing the model to distinguish between relevant context and noise.
Non-Parametric Adaptation
Using a semi-parametric formulation for MT opens up the possibility of non-parametric adaptation. The biggest advantage of this approach is the possibility of training a single massively customizable model which can be adapted to any new dataset or document at inference time, by just updating the retrieval dataset.
We evaluate our model's performance on non-parametric adaptation and compare it against a fully fine-tuned model. In this setting, we train a baseline model and a dense n-gram based semi-parametric model on the WMT training corpus. We only retrieve and train on examples from the WMT corpus during training. We use the same hyper-parameters and training approaches used for the multi-domain experiments, as in Section SECREF3 .
The baseline model is then fine-tuned independently on JRC-Acquis, OpenSubtitles and IWSLT. The semi-parametric model is adapted non-parametrically to these three datasets, without any parameter updates. Adaptation is achieved via the retrieval mechanism - while evaluating, we retrieve similar examples from their respective training datasets. To quantify headroom, we also fine-tune our semi-parametric model on each of these datasets.
The results for non-parametric adaptation experiments are documented in Table TABREF30 . We notice that the non-parametric adaptation strategy significantly out-performs the base model on all 4 datasets. More importantly, we find that our approach is capable of adapting to both JRC-Acquis and OpenSubtitles via just the retrieval apparatus, and out-performs the fully fine-tuned model, indicating that non-parametric adaptation might be a reasonable approach when adapting to a lot of narrow domains or documents.
In-domain fine-tuning on top of non-parametric adaptation further improves by 2 BLEU points on all datasets, increasing the gap even further with the seq2seq adapted models.
Related Work
Tools incorporating information from individual translation pairs, or translation memories BIBREF32 , BIBREF33 , have been widely utilized by human translators in the industry. There have been a few efforts attempting to combine non-parametric methods with NMT BIBREF15 , BIBREF16 , BIBREF17 , but the key difference of our approach is the introduction of local, sub-sentence level similarity in the retrieval process, via n-gram level retrieval. Combined with our architectural improvements, motivated by the target encoder and gated attention from BIBREF17 and the extended transformer model from BIBREF34 , our semi-parametric NMT model is able to out-perform purely neural models in broad multi-domain settings.
Some works have proposed using phrase tables or the outputs of Phrase based MT within NMT BIBREF19 , BIBREF35 , BIBREF36 . While this reduces the noise present within the retrieved translation pairs, it requires training and maintaining a separate SMT system which might introduce errors of its own.
Another class of methods requires fine-tuning the entire NMT model to every instance at inference time, using retrieved examples BIBREF37 , BIBREF38 , but these approaches require running expensive gradient descent steps before every translation.
Beyond NMT, there have been a few other attempts to incorporate non-parametric approaches into neural generative models BIBREF39 , BIBREF40 , BIBREF41 . This strong trend towards combining neural generative models with non-parametric methods is an attempt to counter the weaknesses of neural networks, especially their failure to remember information from individual training instances and the diversity problem of seq2seq models BIBREF42 , BIBREF43 .
While our approach relies purely on retrieval from the training corpus, there has been quite a lot of work, especially on Question Answering, that attempts to find additional signals to perform the supervised task in the presence of external knowledge sources BIBREF44 , BIBREF45 . Retrieving information from unsupervised corpora by utilizing multilingual representations BIBREF46 might be another interesting extension of this work.
Conclusions and Future Work
We make two major technical contributions in this work which enable us to improve the quality of semi-parametric NMT on broad domain datasets. First, we propose using n-gram retrieval, with standard Inverse Document Frequency similarity and with dense vector representations, that takes into account local sentence similarities that are critical to translation. As a result we are able to retrieve useful candidates even for broad domain tasks with little train-test overlap. Second, we propose a novel architecture to encode retrieved source-target pairs, allowing the model to distinguish useful information from noise by encoding the retrieved targets in context of the current translation task.
We demonstrate, for the first time, that semi-parametric methods can beat neural models by significant margins on multi-domain Machine Translation. By successfully training semi-parametric neural models on a broad domain dataset (WMT), we also open the door for non-parametric adaptation, showing huge improvements on new domains without any parameter updates.
While we constrain this work to retrieved context, our architecture can be utilized to incorporate information from other sources of context, including documents, bilingual dictionaries etc. Using dense representations for retrieval also allows extending semi-parametric neural methods to other input modalities, including images and speech.
With this work, we hope to motivate further investigation into semi-parametric neural models for and beyond Neural Machine Translation.
Acknowledgments
We would like to thank Naveen Arivazhagan, Macduff Hughes, Dmitry Lepikhin, Mia Chen, Yuan Cao, Ciprian Chelba, Zhifeng Chen, Melvin Johnson and other members of the Google Brain and Google Translate teams for their useful inputs and discussions. We would also like to thank the entire Lingvo development team for their foundational contributions to this project. | we define the similarity between INLINEFORM7 and INLINEFORM8 by, DISPLAYFORM0 |
67a28fe78f07c1383176b89e78630ee191cf15db | 67a28fe78f07c1383176b89e78630ee191cf15db_0 | Q: Where is MVCNN pretrained?
Text: Introduction
Different sentence classification tasks are crucial for many Natural Language Processing (NLP) applications. Natural language sentences have complicated structures, both sequential and hierarchical, that are essential for understanding them. In addition, how to decode and compose the features of component units, including single words and variable-size phrases, is central to the sentence classification problem.
In recent years, deep learning models have achieved remarkable results in computer vision BIBREF0 , speech recognition BIBREF1 and NLP BIBREF2 . A problem largely specific to NLP is how to detect features of linguistic units, how to conduct composition over variable-size sequences and how to use them for NLP tasks BIBREF3 , BIBREF4 , BIBREF5 . socher2011dynamic proposed recursive neural networks to form phrases based on parsing trees. This approach depends on the availability of a well performing parser; for many languages and domains, especially noisy domains, reliable parsing is difficult. Hence, convolution neural networks (CNN) are getting increasing attention, for they are able to model long-range dependencies in sentences via hierarchical structures BIBREF6 , BIBREF5 , BIBREF7 . Current CNN systems usually implement a convolution layer with fixed-size filters (i.e., feature detectors), in which the concrete filter size is a hyperparameter. They essentially split a sentence into multiple sub-sentences by a sliding window, then determine the sentence label by using the dominant label across all sub-sentences. The underlying assumption is that the sub-sentence with that granularity is potentially good enough to represent the whole sentence. However, it is hard to find the granularity of a “good sub-sentence” that works well across sentences. This motivates us to implement variable-size filters in a convolution layer in order to extract features of multigranular phrases.
Breakthroughs of deep learning in NLP are also based on learning distributed word representations – also called “word embeddings” – by neural language models BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Word embeddings are derived by projecting words from a sparse, 1-of- $V$ encoding ( $V$ : vocabulary size) onto a lower dimensional and dense vector space via hidden layers and can be interpreted as feature extractors that encode semantic and syntactic features of words.
Many papers study the comparative performance of different versions of word embeddings, usually learned by different neural network (NN) architectures. For example, chen2013expressive compared HLBL BIBREF9 , SENNA BIBREF2 , Turian BIBREF13 and Huang BIBREF14 , showing great variance in quality and characteristics of the semantics captured by the tested embedding versions. hill2014not showed that embeddings learned by neural machine translation models outperform three representative monolingual embedding versions: skip-gram BIBREF15 , GloVe BIBREF16 and C&W BIBREF3 in some cases. These prior studies motivate us to explore combining multiple versions of word embeddings, treating each of them as a distinct description of words. Our expectation is that the combination of these embedding versions, trained by different NNs on different corpora, should contain more information than each version individually. We want to leverage this diversity of different embedding versions to extract higher quality sentence features and thereby improve sentence classification performance.
The letters “M” and “V” in the name “MVCNN” of our architecture denote the multichannel and variable-size convolution filters, respectively. “Multichannel” employs language from computer vision where a color image has red, green and blue channels. Here, a channel is a description by an embedding version.
For many sentence classification tasks, only relatively small training sets are available. MVCNN has a large number of parameters, so that overfitting is a danger when they are trained on small training sets. We address this problem by pretraining MVCNN on unlabeled data. These pretrained weights can then be fine-tuned for the specific classification task.
In sum, we attribute the success of MVCNN to: (i) designing variable-size convolution filters to extract variable-range features of sentences and (ii) exploring the combination of multiple public embedding versions to initialize words in sentences. We also employ two “tricks” to further enhance system performance: mutual learning and pretraining.
In the remaining parts, Section "Related Work" presents related work. Section "Model Description" gives details of our classification model. Section "Model Enhancements" introduces two tricks that enhance system performance: mutual-learning and pretraining. Section "Experiments" reports experimental results. Section "Conclusion" concludes this work.
Related Work
Much prior work has exploited deep neural networks to model sentences.
blacoe2012comparison represented a sentence by element-wise addition, multiplication, or recursive autoencoder over embeddings of component single words. yin2014exploration extended this approach by composing on words and phrases instead of only single words.
collobert2008unified and yu2014deep used one layer of convolution over phrases detected by a sliding window on a target sentence, then used max- or average-pooling to form a sentence representation.
blunsom2014convolutional stacked multiple layers of one-dimensional convolution by dynamic k-max pooling to model sentences. We also adopt dynamic k-max pooling while our convolution layer has variable-size filters.
kimEMNLP2014 also studied multichannel representation and variable-size filters. Differently, their multichannel relies on a single version of pretrained embeddings (i.e., pretrained Word2Vec embeddings) with two copies: one is kept stable and the other one is fine-tuned by backpropagation. We develop this insight by incorporating diverse embedding versions. Additionally, their idea of variable-size filters is further developed.
le2014distributed initialized the representation of a sentence as a parameter vector, treating it as a global feature and combining this vector with the representations of context words to do word prediction. Finally, this fine-tuned vector is used as representation of this sentence. Apparently, this method can only produce generic sentence representations which encode no task-specific features.
Our work is also inspired by studies that compared the performance of different word embedding versions or investigated the combination of them. For example, turian2010word compared Brown clusters, C&W embeddings and HLBL embeddings in NER and chunking tasks. They found that Brown clusters and word embeddings both can improve the accuracy of supervised NLP systems; and demonstrated empirically that combining different word representations is beneficial. luo2014pre adapted CBOW BIBREF12 to train word embeddings on different datasets: free text documents from Wikipedia, search click-through data and user query data, showing that combining them gets stronger results than using individual word embeddings in web search ranking and word similarity task. However, these two papers either learned word representations on the same corpus BIBREF13 or enhanced the embedding quality by extending training corpora, not learning algorithms BIBREF17 . In our work, there is no limit to the type of embedding versions we can use and they leverage not only the diversity of corpora, but also the different principles of learning algorithms.
Model Description
We now describe the architecture of our model MVCNN, illustrated in Figure 1 .
Multichannel Input. The input of MVCNN includes multichannel feature maps of a considered sentence, each is a matrix initialized by a different embedding version. Let $s$ be sentence length, $d$ dimension of word embeddings and $c$ the total number of different embedding versions (i.e., channels). Hence, the whole initialized input is a three-dimensional array of size $c\times d\times s$ . Figure 1 depicts a sentence with $s=12$ words. Each word is initialized by $c=5$ embeddings, each coming from a different channel. In implementation, sentences in a mini-batch will be padded to the same length, and unknown words for corresponding channel are randomly initialized or can acquire good initialization from the mutual-learning phase described in next section.
Multichannel initialization brings two advantages: 1) a frequent word can have $c$ representations in the beginning (instead of only one), which means it has more available information to leverage; 2) a rare word missed in some embedding versions can be “made up” by others (we call it “partially known word”). Therefore, this kind of initialization is able to make use of information about partially known words, without having to employ full random initialization or removal of unknown words. The vocabulary of the binary sentiment prediction task described in experimental part contains 5232 words unknown in HLBL embeddings, 4273 in Huang embeddings, 3299 in GloVe embeddings, 4136 in SENNA embeddings and 2257 in Word2Vec embeddings. But only 1824 words find no embedding from any channel! Hence, multichannel initialization can considerably reduce the number of unknown words.
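A minimal sketch of building the multichannel input tensor is shown below; the embedding dictionaries, the uniform random range for unknown words and all names are illustrative assumptions.

```python
import numpy as np

def multichannel_input(tokens, embedding_versions, dim, rng=None):
    """Build a (channels, dim, sentence_length) array: one channel per
    embedding version; words missing from a version are randomly initialized
    (or could be filled in by the mutual-learning step described later)."""
    rng = rng or np.random.default_rng(0)
    c, s = len(embedding_versions), len(tokens)
    x = np.empty((c, dim, s))
    for ci, vocab in enumerate(embedding_versions):
        for si, w in enumerate(tokens):
            vec = vocab.get(w)
            x[ci, :, si] = vec if vec is not None else rng.uniform(-0.1, 0.1, dim)
    return x

if __name__ == "__main__":
    d = 4
    v1 = {"the": np.ones(d), "cat": np.full(d, 2.0)}    # e.g. one embedding version
    v2 = {"the": np.zeros(d), "sat": np.full(d, 3.0)}   # e.g. another version
    print(multichannel_input("the cat sat".split(), [v1, v2], d).shape)  # (2, 4, 3)
```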
Convolution Layer (Conv). For convenience, we first introduce how this work uses a convolution layer on one input feature map to generate one higher-level feature map. Given a sentence of length $s$ : $w_1, w_2, \ldots , w_s$ ; $\mathbf {w}_i\in \mathbb {R}^{d}$ denotes the embedding of word $w_i$ ; a convolution layer uses sliding filters to extract local features of that sentence. The filter width $l$ is a parameter. We first concatenate the initialized embeddings of $l$ consecutive words ( $\mathbf {w}_{i-l+1}, \ldots , \mathbf {w}_i$ ) as $\mathbf {c}_i\in \mathbb {R}^{ld}$ $(1\le i <s+l)$ , then generate the feature value of this phrase as $\textbf {p}_i$ (the whole vector $\mathbf {p}$ contains all the local features) using a tanh activation function and a linear projection vector $\mathbf {v}$ as:
$$\mathbf {p}_i=\mathrm {tanh}(\mathbf {v}^\mathrm {T}\mathbf {c}_i+b)$$ (Eq. 2)
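The following numpy sketch implements Eq. 2 with wide convolution (zero padding at both ends); parameter shapes and the demo values are illustrative.

```python
import numpy as np

def conv_features(word_vecs, v, b, l):
    """Eq. 2: p_i = tanh(v^T c_i + b), where c_i concatenates the embeddings
    of l consecutive words; wide convolution pads with zero embeddings so
    every word is seen by every filter position (s + l - 1 outputs)."""
    d = word_vecs.shape[1]
    pad = np.zeros((l - 1, d))
    padded = np.vstack([pad, word_vecs, pad])
    feats = []
    for i in range(word_vecs.shape[0] + l - 1):
        c_i = padded[i:i + l].reshape(-1)   # concatenation of l embeddings
        feats.append(np.tanh(v @ c_i + b))
    return np.array(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    s, d, l = 6, 5, 3
    print(conv_features(rng.normal(size=(s, d)), rng.normal(size=l * d), 0.1, l).shape)
```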
More generally, convolution operation can deal with multiple input feature maps and can be stacked to yield feature maps of increasing layers. In each layer, there are usually multiple filters of the same size, but with different weights BIBREF4 . We refer to a filter with a specific set of weights as a kernel. The goal is often to train a model in which different kernels detect different kinds of features of a local region. However, this traditional way can not detect the features of regions of different granularity. Hence we keep the property of multi-kernel while extending it to variable-size in the same layer.
As in CNN for object recognition, to increase the number of kernels of a certain layer, multiple feature maps may be computed in parallel at the same layer. Further, to increase the size diversity of kernels in the same layer, more feature maps containing various-range dependency features can be learned. We denote a feature map of the $i^{\mathrm {th}}$ layer by $\mathbf {F}_i$ , and assume totally $n$ feature maps exist in layer $i-1$ : $\mathbf {F}_{i-1}^1, \ldots , \mathbf {F}_{i-1}^n$ . Considering a specific filter size $l$ in layer $i$ , each feature map $\mathbf {F}_{i,l}^j$ is computed by convolving a distinct set of filters of size $l$ , arranged in a matrix $\mathbf {V}_{i,l}^{j,k}$ , with each feature map $\mathbf {F}_{i-1}^k$ and summing the results:
$$\mathbf {F}_{i,l}^j=\sum ^n_{k=1}\mathbf {V}_{i,l}^{j,k}*\mathbf {F}^k_{i-1}$$ (Eq. 3)
where $*$ indicates the convolution operation and $j$ is the index of a feature map in layer $i$ . The weights in $\mathbf {V}$ form a rank 4 tensor.
Note that we use wide convolution in this work: it means word representations $\mathbf {w}_g$ for $g\le 0$ or $g\ge s+1$ are actually zero embeddings. Wide convolution enables that each word can be detected by all filter weights in $\mathbf {V}$ .
In Figure 1 , the first convolution layer deals with an input with $n=5$ feature maps. Its filters have sizes 3 and 5 respectively (i.e., $l=3, 5$ ), and each filter has $j=3$ kernels. This means this convolution layer can detect three kinds of features of phrases with length 3 and 5, respectively.
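A simplified sketch of such a variable-size, multi-feature-map convolution layer (Eq. 3) is given below. The kernel shape (filters spanning all $d$ dimensions and $l$ positions, producing $d$-dimensional outputs) and the placement of the tanh nonlinearity are assumptions; the paper's exact parameterization may differ.

```python
import numpy as np

def wide_conv(V, F, l):
    """Convolve one kernel V (shape: d_out x (l*d_in)) with one input feature
    map F (shape: d_in x s) using wide convolution; returns d_out x (s+l-1)."""
    d_in, s = F.shape
    pad = np.zeros((d_in, l - 1))
    Fp = np.hstack([pad, F, pad])
    cols = [Fp[:, i:i + l].reshape(-1) for i in range(s + l - 1)]
    return V @ np.stack(cols, axis=1)          # (d_out, s+l-1)

def conv_layer(input_maps, kernels, l):
    """Eq. 3: the j-th output map sums the convolutions of its j-th kernel set
    with every input map. kernels[j][k] convolves input map k for output j."""
    outputs = []
    for per_input in kernels:                  # one entry per output map j
        acc = sum(wide_conv(V, F, l) for V, F in zip(per_input, input_maps))
        outputs.append(np.tanh(acc))
    return outputs

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    d, s, l, n_in, n_out = 4, 7, 3, 2, 3
    maps = [rng.normal(size=(d, s)) for _ in range(n_in)]
    ks = [[rng.normal(size=(d, l * d)) for _ in range(n_in)] for _ in range(n_out)]
    out = conv_layer(maps, ks, l)
    print(len(out), out[0].shape)              # 3 output maps, each (4, 9)
```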
DCNN in BIBREF4 used one-dimensional convolution: each higher-order feature is produced from values of a single dimension in the lower-layer feature map. Even though that work proposed folding operation to model the dependencies between adjacent dimensions, this type of dependency modeling is still limited. Differently, convolution in present work is able to model dependency across dimensions as well as adjacent words, which obviates the need for a folding step. This change also means our model has substantially fewer parameters than the DCNN since the output of each convolution layer is smaller by a factor of $d$ .
Dynamic k-max Pooling. blunsom2014convolutional pool the $k$ most active features compared with simple max (1-max) pooling BIBREF2 . This property enables it to connect multiple convolution layers to form a deep architecture to extract high-level abstract features. In this work, we directly use it to extract features for variable-size feature maps. For a given feature map in layer $i$ , dynamic k-max pooling extracts $k_{i}$ top values from each dimension and $k_{top}$ top values in the top layer. We set
$$\nonumber k_{i}=\mathrm {max}(k_{top}, \lceil \frac{L-i}{L}s\rceil )$$ (Eq. 5)
where $i\in \lbrace 1,2,\ldots \, L\rbrace $ is the order of convolution layer from bottom to top in Figure 1 ; $L$ is the total numbers of convolution layers; $k_{top}$ is a constant determined empirically, we set it to 4 as BIBREF4 .
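Dynamic k-max pooling can be sketched as follows: for each dimension (row) of a feature map, keep the $k_i$ largest values while preserving their original left-to-right order. The demo shapes are illustrative.

```python
import numpy as np

def dynamic_kmax_pool(feature_map, layer, total_layers, s, k_top=4):
    """Keep, per dimension (row), the k_i largest values in original order,
    with k_i = max(k_top, ceil((L - i) / L * s))."""
    k = max(k_top, int(np.ceil((total_layers - layer) / total_layers * s)))
    k = min(k, feature_map.shape[1])
    # indices of the top-k values per row, re-sorted to preserve word order
    idx = np.argsort(feature_map, axis=1)[:, -k:]
    idx = np.sort(idx, axis=1)
    return np.take_along_axis(feature_map, idx, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    fmap = rng.normal(size=(5, 12))            # d=5 dimensions, 12 positions
    print(dynamic_kmax_pool(fmap, layer=1, total_layers=2, s=12).shape)  # (5, 6)
```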
As a result, the second convolution layer in Figure 1 has an input with two same-size feature maps, one results from filter size 3, one from filter size 5. The values in the two feature maps are for phrases with different granularity. The motivation of this convolution layer lies in that a feature reflected by a short phrase may be not trustworthy while the longer phrase containing the short one is trustworthy, or the long phrase has no trustworthy feature while its component short phrase is more reliable. This and even higher-order convolution layers therefore can make a trade-off between the features of different granularity.
Hidden Layer. On the top of the final k-max pooling, we stack a fully connected layer to learn sentence representation with given dimension (e.g., $d$ ).
Logistic Regression Layer. Finally, sentence representation is forwarded into logistic regression layer for classification.
In brief, our MVCNN model learns from BIBREF4 to use dynamic k-max pooling to stack multiple convolution layers, and gets insight from BIBREF5 to investigate variable-size filters in a convolution layer. Compared to BIBREF4 , MVCNN has rich feature maps as input and as output of each convolution layer. Its convolution operation is not only more flexible to extract features of variable-range phrases, but also able to model dependency among all dimensions of representations. MVCNN extends the network in BIBREF5 by hierarchical convolution architecture and further exploration of multichannel and variable-size feature detectors.
Model Enhancements
This part introduces two training tricks that enhance the performance of MVCNN in practice.
Mutual-Learning of Embedding Versions. One observation in using multiple embedding versions is that they have different vocabulary coverage. An unknown word in an embedding version may be a known word in another version. Thus, there exists a proportion of words that can only be partially initialized by certain versions of word embeddings, which means these words lack the description from other versions.
To alleviate this problem, we design a mutual-learning regime to predict representations of unknown words for each embedding version by learning projections between versions. As a result, all embedding versions have the same vocabulary. This processing ensures that more words in each embedding version receive a good representation, and is expected to give most words occurring in a classification dataset more comprehensive initialization (as opposed to just being randomly initialized).
Let $c$ be the number of embedding versions in consideration, $V_1, V_2, \ldots , V_i, \ldots , V_c$ their vocabularies, $V^*=\cup ^c_{i=1} V_i$ their union, and $V_i^-=V^*\backslash V_i$ ( $i=1, \ldots , c$ ) the vocabulary of unknown words for embedding version $i$ . Our goal is to learn embeddings for the words in $V_i^-$ by knowledge from the other $c-1$ embedding versions.
We use the overlapping vocabulary between $V_i$ and $V_j$ , denoted as $V_{ij}$ , as training set, formalizing a projection $f_{ij}$ from space $V_i$ to space $V_j$ ( $i\ne j; i, j\in \lbrace 1,2,\ldots ,c\rbrace $ ) as follows:
$$\mathbf {\hat{w}}_j=\mathbf {M}_{ij}\mathbf {w}_i$$ (Eq. 6)
where $\mathbf {M}_{ij}\in \mathbb {R}^{d\times d}$, $\mathbf {w}_i\in \mathbb {R}^d$ denotes the representation of word $w$ in space $V_i$ and $\mathbf {\hat{w}}_j$ is the projected (or learned) representation of word $w$ in space $V_j$. The squared error between $\mathbf {w}_j$ and $\mathbf {\hat{w}}_j$ is the training loss to minimize. We write $\mathbf {\hat{w}}_j=f_{ij}(\mathbf {w}_i)$ as shorthand for Eq. 6. In total, $c(c-1)/2$ projections $f_{ij}$ are trained, each on the vocabulary intersection $V_{ij}$.
Let $w$ be a word that is unknown in $V_i$ but known in versions $V_1, V_2, \ldots , V_k$. To compute a representation of $w$ in $V_i$, we take the $k$ projections $f_{1i}(\mathbf {w}_1), f_{2i}(\mathbf {w}_2), \ldots , f_{ki}(\mathbf {w}_k)$ from the spaces of $V_1, V_2, \ldots , V_k$ into $V_i$ and treat their element-wise average as the representation of $w$ in $V_i$. The intuition is that each projection produces an approximation of the true representation of $w$ in $V_i$, and the average of several such approximations should be closer to that true representation than any single one.
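A minimal sketch of this mutual-learning step is given below, assuming each embedding version is a Python dict mapping words to $d$-dimensional numpy vectors; for simplicity the projections are fit in closed form by least squares on the overlapping vocabulary rather than by gradient descent, and the function names are ours.
import numpy as np

def fit_projection(src, tgt):
    # least-squares matrix M such that src_vector @ M approximates tgt_vector
    shared = sorted(set(src) & set(tgt))
    X = np.stack([src[w] for w in shared])
    Y = np.stack([tgt[w] for w in shared])
    M, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return M

def fill_unknowns(versions):
    # give every version the union vocabulary by averaging projected vectors
    c = len(versions)
    proj = {(i, j): fit_projection(versions[i], versions[j])
            for i in range(c) for j in range(c) if i != j}
    union = set().union(*versions)
    for j in range(c):
        for w in union - set(versions[j]):
            sources = [versions[i][w] @ proj[(i, j)]
                       for i in range(c) if i != j and w in versions[i]]
            if sources:
                versions[j][w] = np.mean(sources, axis=0)
    return versions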
As discussed in Section "Model Description" , we found that for the binary sentiment classification dataset, many words were unknown in at least one embedding version. But of these words, a total of 5022 words did have coverage in another embedding version and so will benefit from mutual-learning. In the experiments, we will show that this is a very effective method to learn representations for unknown words that increases system performance if learned representations are used for initialization.
Pretraining. Sentence classification systems are usually implemented as supervised training regimes where training loss is between true label distribution and predicted label distribution. In this work, we use pretraining on the unlabeled data of each task and show that it can increase the performance of classification systems.
Figure 1 shows our pretraining setup. The “sentence representation” – the output of “Fully connected” hidden layer – is used to predict the component words (“on” in the figure) in the sentence (instead of predicting the sentence label Y/N as in supervised learning). Concretely, the sentence representation is averaged with representations of some surrounding words (“the”, “cat”, “sat”, “the”, “mat”, “,” in the figure) to predict the middle word (“on”).
Given sentence representation $\mathbf {s}\in \mathbb {R}^d$ and the initialized representations of its $2t$ context words ($t$ left words and $t$ right words): $\mathbf {w}_{i-t}$, $\ldots $, $\mathbf {w}_{i-1}$, $\mathbf {w}_{i+1}$, $\ldots $, $\mathbf {w}_{i+t}$, we average the total $2t+1$ vectors element-wise, depicted as the “Average” operation in Figure 1. Then, this resulting vector is treated as a predicted representation of the middle word and is used to find the true middle word by means of noise-contrastive estimation (NCE) BIBREF18. For each true example, 10 noise words are sampled.
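The averaging and scoring step can be sketched as follows; this is a simplified stand-in for full NCE (only the binary logistic terms are shown), and all sizes, names and the random toy data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, vocab_size, t = 50, 1000, 3

out_emb = rng.normal(scale=0.1, size=(vocab_size, d))  # output-side ("NCE") embeddings
sent_vec = rng.normal(size=d)                          # output of the fully connected layer
context = rng.normal(size=(2 * t, d))                  # t left + t right context word vectors

pred = np.vstack([sent_vec[None, :], context]).mean(axis=0)  # the "Average" operation

target_id = 42                                    # index of the true middle word
noise_ids = rng.integers(0, vocab_size, size=10)  # 10 sampled noise words

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# pull the true middle word toward the prediction, push the noise words away
loss = -np.log(sigmoid(out_emb[target_id] @ pred))
loss -= np.log(sigmoid(-out_emb[noise_ids] @ pred)).sum()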
Note that in pretraining, there are three places where each word needs initialization. (i) Each word in the sentence is initialized in the “Multichannel input” layer to the whole network. (ii) Each context word is initialized as input to the average layer (“Average” in the figure). (iii) Each target word is initialized as the output of the “NCE” layer (“on” in the figure). In this work, we use multichannel initialization for case (i) and random initialization for cases (ii) and (iii). Only fine-tuned multichannel representations (case (i)) are kept for subsequent supervised training.
The rationale for this pretraining is similar to auto-encoder: for an object composed of smaller-granular elements, the representations of the whole object and its components can learn each other. The CNN architecture learns sentence features layer by layer, then those features are justified by all constituent words.
During pretraining, all the model parameters, including the multichannel input, convolution parameters and fully connected layer, will be updated until they are mature enough to extract sentence features. Subsequently, the same sets of parameters will be fine-tuned for supervised classification tasks.
In sum, this pretraining is designed to produce good initial values for both model parameters and word embeddings. It is especially helpful for pretraining the embeddings of unknown words.
Experiments
We test the network on four classification tasks. We begin by specifying aspects of the implementation and the training of the network. We then report the results of the experiments.
Hyperparameters and Training
In each of the experiments, the top of the network is a logistic regression that predicts the probability distribution over classes given the input sentence. The network is trained to minimize the cross-entropy of the predicted and true distributions; the objective includes an $L_2$ regularization term over the parameters. The set of parameters comprises the word embeddings, all filter weights and the weights in fully connected layers. A dropout operation BIBREF19 is applied before the logistic regression layer. The network is trained by back-propagation in mini-batches and the gradient-based optimization is performed using the AdaGrad update rule BIBREF20.
In all data sets, the initial learning rate is 0.01, dropout probability is 0.8, $L_2$ weight is $5\cdot 10^{-3}$ , batch size is 50. In each convolution layer, filter sizes are {3, 5, 7, 9} and each filter has five kernels (independent of filter size).
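For concreteness, one AdaGrad step with the $L_2$ term looks roughly as follows; the epsilon constant and the factor convention for the regularizer are our choices for illustration and are not specified in the paper.
import numpy as np

lr, l2, eps = 0.01, 5e-3, 1e-8   # the learning rate and L2 weight listed above

def adagrad_step(param, grad, cache):
    grad = grad + l2 * param                 # gradient of the L2-regularized objective
    cache += grad ** 2                       # per-parameter sum of squared gradients
    param -= lr * grad / (np.sqrt(cache) + eps)
    return param, cache

w = np.random.randn(4)                       # a toy parameter vector
c = np.zeros_like(w)
w, c = adagrad_step(w, np.random.randn(4), c)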
Datasets and Experimental Setup
Stanford Sentiment Treebank BIBREF21. This small-scale dataset includes two tasks predicting the sentiment of movie reviews. The output variable is binary in one experiment and can have five possible outcomes in the other: {negative, somewhat negative, neutral, somewhat positive, positive}. In the binary case, we use the given split of 6920 training, 872 development and 1821 test sentences. Likewise, in the fine-grained case, we use the standard 8544/1101/2210 split. socher2013recursive used the Stanford Parser BIBREF22 to parse each sentence into subphrases. The subphrases were then labeled by human annotators in the same way as the sentences were labeled. Labeled phrases that occur as subparts of the training sentences are treated as independent training instances, as in BIBREF23, BIBREF4.
Sentiment140 BIBREF24 . This is a large-scale dataset of tweets about sentiment classification, where a tweet is automatically labeled as positive or negative depending on the emoticon that occurs in it. The training set consists of 1.6 million tweets with emoticon-based labels and the test set of about 400 hand-annotated tweets. We preprocess the tweets minimally as follows. 1) The equivalence class symbol “url” (resp. “username”) replaces all URLs (resp. all words that start with the @ symbol, e.g., @thomasss). 2) A sequence of $k>2$ repetitions of a letter $c$ (e.g., “cooooooool”) is replaced by two occurrences of $c$ (e.g., “cool”). 3) All tokens are lowercased.
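The three normalization rules can be sketched with a few regular expressions; these patterns are our reading of the rules above, not the authors' released preprocessing script.
import re

def normalize_tweet(text):
    text = re.sub(r"https?://\S+|www\.\S+", "url", text)  # rule 1: URLs -> "url"
    text = re.sub(r"@\w+", "username", text)               # rule 1: @words -> "username"
    text = re.sub(r"(\w)\1{2,}", r"\1\1", text)            # rule 2: 3+ repeats -> 2
    return text.lower()                                     # rule 3: lowercase

print(normalize_tweet("Sooooo cooooool!! thx @thomasss http://t.co/abc"))
# -> soo cool!! thx username url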
Subj. Subjectivity classification dataset released by BIBREF25 has 5000 subjective sentences and 5000 objective sentences. We report the result of 10-fold cross validation as baseline systems did.
In this work, we use five embedding versions, as shown in Table 1, to initialize words. Four of them are directly downloaded from the Internet. (i) HLBL. Hierarchical log-bilinear model presented by mnih2009scalable and released by turian2010word; size: 246,122 word embeddings; training corpus: RCV1 corpus, one year of Reuters English newswire from August 1996 to August 1997. (ii) Huang. huang2012improving incorporated global context to deal with challenges raised by words with multiple meanings; size: 100,232 word embeddings; training corpus: April 2010 snapshot of Wikipedia. (iii) GloVe. Size: 1,193,514 word embeddings; training corpus: a Twitter corpus of 2B tweets with 27B tokens. (iv) SENNA. Size: 130,000 word embeddings; training corpus: Wikipedia. Note that we use their 50-dimensional embeddings. (v) Word2Vec. It has no 50-dimensional embeddings available online. We use the released code to train skip-gram on the English Gigaword Corpus BIBREF26 with this setup: window size 5, negative sampling, sampling rate $10^{-3}$, threads 12. It is worth emphasizing that the above embedding sets are derived from different corpora with different algorithms. This diversity is the very property that we want to exploit to improve system performance.
Table 2 shows the number of unknown words in each task when using corresponding embedding version to initialize (rows “HLBL”, “Huang”, “Glove”, “SENNA”, “W2V”) and the number of words fully initialized by five embedding versions (“Full hit” row), the number of words partially initialized (“Partial hit” row) and the number of words that cannot be initialized by any of the embedding versions (“No hit” row).
About 30% of words in each task have partially initialized embeddings and our mutual-learning is able to initialize the missing embeddings through projections. Pretraining is expected to learn good representations for all words, but pretraining is especially important for words without initialization (“no hit”); a particularly clear example for this is the Senti140 task: 236,484 of 387,877 words or 61% are in the “no hit” category.
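The three coverage categories can be computed with a few lines of Python, assuming the task vocabulary and each embedding vocabulary are available as sets; the function name is ours.
def coverage(task_vocab, version_vocabs):
    full = partial = none = 0
    for w in task_vocab:
        hits = sum(w in v for v in version_vocabs)
        if hits == len(version_vocabs):
            full += 1       # "Full hit": known in every embedding version
        elif hits > 0:
            partial += 1    # "Partial hit": known in some but not all versions
        else:
            none += 1       # "No hit": unknown everywhere
    return full, partial, none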
Table 3 compares results on test of MVCNN and its variants with other baselines in the four sentence classification tasks. Row 34, “MVCNN (overall)”, shows performance of the best configuration of MVCNN, optimized on dev. This version uses five versions of word embeddings, four filter sizes (3, 5, 7, 9), both mutual-learning and pretraining, three convolution layers for Senti140 task and two convolution layers for the other tasks. Overall, our system gets the best results, beating all baselines.
The table contains five blocks from top to bottom. Each block investigates one specific configurational aspect of the system. All results in the five blocks are with respect to row 34, “MVCNN (overall)”; e.g., row 19 shows what happens when HLBL is removed from row 34, row 28 shows what happens when mutual learning is removed from row 34 etc.
The block “baselines” (1–18) lists some systems representative of previous work on the corresponding datasets, including the state-of-the-art systems (marked as italic). The block “versions” (19–23) shows the results of our system when one of the embedding versions was not used during training. We want to explore to what extent different embedding versions contribute to performance. The block “filters” (24–27) gives the results when an individual filter width is discarded. It also tells us how much a filter of a specific size contributes. The block “tricks” (28–29) shows the system performance when no mutual-learning or no pretraining is used. The block “layers” (30–33) demonstrates how the system performs when it has different numbers of convolution layers.
From the “layers” block, we can see that our system performs best with two layers of convolution for the Stanford Sentiment Treebank and Subjectivity Classification tasks (row 31), but with three layers of convolution for Sentiment140 (row 32). This is probably because Sentiment140 is a much larger dataset; in such a case deeper neural networks are beneficial.
The block “tricks” demonstrates the effect of mutual-learning and pretraining. Apparently, pretraining has a bigger impact on performance than mutual-learning. We speculate that it is because pretraining can influence more words and all learned word embeddings are tuned on the dataset after pretraining.
The block “filters” indicates the contribution of each filter size. The system benefits from filters of each size. Sizes 5 and 7 are most important for high performance, especially 7 (rows 25 and 26).
In the block “versions”, we see that each embedding version is crucial for good performance: performance drops in every single case. Though it is not easy to compare different embedding versions fairly in NLP tasks, especially when those embeddings were trained on different corpora of different sizes using different algorithms, our results are potentially instructive for researchers making decisions about which embeddings to use for their own tasks.
Conclusion
This work presented MVCNN, a novel CNN architecture for sentence classification. It combines multichannel initialization – diverse versions of pretrained word embeddings are used – and variable-size filters – features of multigranular phrases are extracted with variable-size convolution filters. We demonstrated that multichannel initialization and variable-size filters enhance system performance on sentiment classification and subjectivity classification tasks.
Future Work
As pointed out by the reviewers, the success of the multichannel approach is likely due to a combination of several quite different effects.
First, there is the effect of the embedding learning algorithm. These algorithms differ in many aspects, including in sensitivity to word order (e.g., SENNA: yes, word2vec: no), in objective function and in their treatment of ambiguity (explicitly modeled only by huang2012improving).
Second, there is the effect of the corpus. We would expect the size and genre of the corpus to have a big effect even though we did not analyze this effect in this paper.
Third, complementarity of word embeddings is likely to be more useful for some tasks than for others. Sentiment is a good application for complementary word embeddings because solving this task requires drawing on heterogeneous sources of information, including syntax, semantics and genre as well as the core polarity of a word. Other tasks like part of speech (POS) tagging may benefit less from heterogeneity since the benefit of embeddings in POS often comes down to making a correct choice between two alternatives – a single embedding version may be sufficient for this.
We plan to pursue these questions in future work.
Acknowledgments
Thanks to CIS members and anonymous reviewers for constructive comments. This work was supported by Baidu (through a Baidu scholarship awarded to Wenpeng Yin) and by Deutsche Forschungsgemeinschaft (grant DFG SCHU 2246/8-2, SPP 1335). | on the unlabeled data of each task |
d8de12f5eff64d0e9c9e88f6ebdabc4cdf042c22 | d8de12f5eff64d0e9c9e88f6ebdabc4cdf042c22_0 | Q: How much gain does the model achieve with pretraining MVCNN?
Text: Introduction
Different sentence classification tasks are crucial for many Natural Language Processing (NLP) applications. Natural language sentences have complicated structures, both sequential and hierarchical, that are essential for understanding them. In addition, how to decode and compose the features of component units, including single words and variable-size phrases, is central to the sentence classification problem.
In recent years, deep learning models have achieved remarkable results in computer vision BIBREF0 , speech recognition BIBREF1 and NLP BIBREF2 . A problem largely specific to NLP is how to detect features of linguistic units, how to conduct composition over variable-size sequences and how to use them for NLP tasks BIBREF3 , BIBREF4 , BIBREF5 . socher2011dynamic proposed recursive neural networks to form phrases based on parsing trees. This approach depends on the availability of a well performing parser; for many languages and domains, especially noisy domains, reliable parsing is difficult. Hence, convolution neural networks (CNN) are getting increasing attention, for they are able to model long-range dependencies in sentences via hierarchical structures BIBREF6 , BIBREF5 , BIBREF7 . Current CNN systems usually implement a convolution layer with fixed-size filters (i.e., feature detectors), in which the concrete filter size is a hyperparameter. They essentially split a sentence into multiple sub-sentences by a sliding window, then determine the sentence label by using the dominant label across all sub-sentences. The underlying assumption is that the sub-sentence with that granularity is potentially good enough to represent the whole sentence. However, it is hard to find the granularity of a “good sub-sentence” that works well across sentences. This motivates us to implement variable-size filters in a convolution layer in order to extract features of multigranular phrases.
Breakthroughs of deep learning in NLP are also based on learning distributed word representations – also called “word embeddings” – by neural language models BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Word embeddings are derived by projecting words from a sparse, 1-of- $V$ encoding ( $V$ : vocabulary size) onto a lower dimensional and dense vector space via hidden layers and can be interpreted as feature extractors that encode semantic and syntactic features of words.
Many papers study the comparative performance of different versions of word embeddings, usually learned by different neural network (NN) architectures. For example, chen2013expressive compared HLBL BIBREF9 , SENNA BIBREF2 , Turian BIBREF13 and Huang BIBREF14 , showing great variance in quality and characteristics of the semantics captured by the tested embedding versions. hill2014not showed that embeddings learned by neural machine translation models outperform three representative monolingual embedding versions: skip-gram BIBREF15 , GloVe BIBREF16 and C&W BIBREF3 in some cases. These prior studies motivate us to explore combining multiple versions of word embeddings, treating each of them as a distinct description of words. Our expectation is that the combination of these embedding versions, trained by different NNs on different corpora, should contain more information than each version individually. We want to leverage this diversity of different embedding versions to extract higher quality sentence features and thereby improve sentence classification performance.
The letters “M” and “V” in the name “MVCNN” of our architecture denote the multichannel and variable-size convolution filters, respectively. “Multichannel” employs language from computer vision where a color image has red, green and blue channels. Here, a channel is a description by an embedding version.
For many sentence classification tasks, only relatively small training sets are available. MVCNN has a large number of parameters, so overfitting is a danger when it is trained on small training sets. We address this problem by pretraining MVCNN on unlabeled data. These pretrained weights can then be fine-tuned for the specific classification task.
In sum, we attribute the success of MVCNN to: (i) designing variable-size convolution filters to extract variable-range features of sentences and (ii) exploring the combination of multiple public embedding versions to initialize words in sentences. We also employ two “tricks” to further enhance system performance: mutual learning and pretraining.
In remaining parts, Section "Related Work" presents related work. Section "Model Description" gives details of our classification model. Section "Model Enhancements" introduces two tricks that enhance system performance: mutual-learning and pretraining. Section "Experiments" reports experimental results. Section "Conclusion" concludes this work.
Related Work
Much prior work has exploited deep neural networks to model sentences.
blacoe2012comparison represented a sentence by element-wise addition, multiplication, or recursive autoencoder over embeddings of component single words. yin2014exploration extended this approach by composing on words and phrases instead of only single words.
collobert2008unified and yu2014deep used one layer of convolution over phrases detected by a sliding window on a target sentence, then used max- or average-pooling to form a sentence representation.
blunsom2014convolutional stacked multiple layers of one-dimensional convolution by dynamic k-max pooling to model sentences. We also adopt dynamic k-max pooling while our convolution layer has variable-size filters.
kimEMNLP2014 also studied multichannel representation and variable-size filters. Differently, their multichannel relies on a single version of pretrained embeddings (i.e., pretrained Word2Vec embeddings) with two copies: one is kept stable and the other one is fine-tuned by backpropagation. We develop this insight by incorporating diverse embedding versions. Additionally, their idea of variable-size filters is further developed.
le2014distributed initialized the representation of a sentence as a parameter vector, treating it as a global feature and combining this vector with the representations of context words to do word prediction. Finally, this fine-tuned vector is used as representation of this sentence. Apparently, this method can only produce generic sentence representations which encode no task-specific features.
Our work is also inspired by studies that compared the performance of different word embedding versions or investigated the combination of them. For example, turian2010word compared Brown clusters, C&W embeddings and HLBL embeddings in NER and chunking tasks. They found that Brown clusters and word embeddings both can improve the accuracy of supervised NLP systems; and demonstrated empirically that combining different word representations is beneficial. luo2014pre adapted CBOW BIBREF12 to train word embeddings on different datasets: free text documents from Wikipedia, search click-through data and user query data, showing that combining them gets stronger results than using individual word embeddings in web search ranking and word similarity task. However, these two papers either learned word representations on the same corpus BIBREF13 or enhanced the embedding quality by extending training corpora, not learning algorithms BIBREF17 . In our work, there is no limit to the type of embedding versions we can use and they leverage not only the diversity of corpora, but also the different principles of learning algorithms.
Model Description
We now describe the architecture of our model MVCNN, illustrated in Figure 1 .
Multichannel Input. The input of MVCNN includes multichannel feature maps of a considered sentence, each of which is a matrix initialized by a different embedding version. Let $s$ be sentence length, $d$ the dimension of word embeddings and $c$ the total number of different embedding versions (i.e., channels). Hence, the whole initialized input is a three-dimensional array of size $c\times d\times s$. Figure 1 depicts a sentence with $s=12$ words. Each word is initialized by $c=5$ embeddings, each coming from a different channel. In implementation, sentences in a mini-batch are padded to the same length, and unknown words for the corresponding channel are randomly initialized or can acquire a good initialization from the mutual-learning phase described in the next section.
Multichannel initialization brings two advantages: 1) a frequent word can have $c$ representations in the beginning (instead of only one), which means it has more available information to leverage; 2) a rare word missed in some embedding versions can be “made up” by others (we call it “partially known word”). Therefore, this kind of initialization is able to make use of information about partially known words, without having to employ full random initialization or removal of unknown words. The vocabulary of the binary sentiment prediction task described in experimental part contains 5232 words unknown in HLBL embeddings, 4273 in Huang embeddings, 3299 in GloVe embeddings, 4136 in SENNA embeddings and 2257 in Word2Vec embeddings. But only 1824 words find no embedding from any channel! Hence, multichannel initialization can considerably reduce the number of unknown words.
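A minimal sketch of building this $c\times d\times s$ input for one sentence is given below, assuming each embedding version is a dict from words to $d$-dimensional numpy vectors; the random scale for unknown words is our own choice.
import numpy as np

def multichannel_input(tokens, versions, d=50, seed=0):
    # stack one (d, s) matrix per embedding version into a (c, d, s) array
    rng = np.random.default_rng(seed)
    x = np.empty((len(versions), d, len(tokens)))
    for ci, emb in enumerate(versions):
        for si, w in enumerate(tokens):
            # known words take their pretrained vector; unknown words are
            # randomly initialized (mutual learning can later replace these)
            x[ci, :, si] = emb.get(w, rng.normal(scale=0.1, size=d))
    return x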
Convolution Layer (Conv). For convenience, we first introduce how this work uses a convolution layer on one input feature map to generate one higher-level feature map. Given a sentence of length $s$: $w_1, w_2, \ldots , w_s$; $\mathbf {w}_i\in \mathbb {R}^{d}$ denotes the embedding of word $w_i$; a convolution layer uses sliding filters to extract local features of that sentence. The filter width $l$ is a parameter. We first concatenate the initialized embeddings of $l$ consecutive words ($\mathbf {w}_{i-l+1}, \ldots , \mathbf {w}_i$) as $\mathbf {c}_i\in \mathbb {R}^{ld}$ $(1\le i <s+l)$, then generate the feature value of this phrase as $\textbf {p}_i$ (the whole vector $\mathbf {p}$ contains all the local features) using a tanh activation function and a linear projection vector $\mathbf {v}$ as:
$$\mathbf {p}_i=\mathrm {tanh}(\mathbf {v}^\mathrm {T}\mathbf {c}_i+b)$$ (Eq. 2)
More generally, convolution operation can deal with multiple input feature maps and can be stacked to yield feature maps of increasing layers. In each layer, there are usually multiple filters of the same size, but with different weights BIBREF4 . We refer to a filter with a specific set of weights as a kernel. The goal is often to train a model in which different kernels detect different kinds of features of a local region. However, this traditional way can not detect the features of regions of different granularity. Hence we keep the property of multi-kernel while extending it to variable-size in the same layer.
As in CNNs for object recognition, to increase the number of kernels of a certain layer, multiple feature maps may be computed in parallel at the same layer. Further, to increase the size diversity of kernels in the same layer, more feature maps containing various-range dependency features can be learned. We denote a feature map of the $i^{\mathrm {th}}$ layer by $\mathbf {F}_i$, and assume a total of $n$ feature maps exist in layer $i-1$: $\mathbf {F}_{i-1}^1, \ldots , \mathbf {F}_{i-1}^n$. Considering a specific filter size $l$ in layer $i$, each feature map $\mathbf {F}_{i,l}^j$ is computed by convolving a distinct set of filters of size $l$, arranged in a matrix $\mathbf {V}_{i,l}^{j,k}$, with each feature map $\mathbf {F}_{i-1}^k$ and summing the results:
$$\mathbf {F}_{i,l}^j=\sum ^n_{k=1}\mathbf {V}_{i,l}^{j,k}*\mathbf {F}^k_{i-1}$$ (Eq. 3)
where $*$ indicates the convolution operation and $j$ is the index of a feature map in layer $i$ . The weights in $\mathbf {V}$ form a rank 4 tensor.
Note that we use wide convolution in this work: word representations $\mathbf {w}_g$ for $g\le 0$ or $g\ge s+1$ are taken to be zero embeddings. Wide convolution ensures that each word can be detected by all filter weights in $\mathbf {V}$.
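A minimal numpy sketch of Eq. 2 with wide convolution is shown below for a single filter; zero padding of width $l-1$ on both sides implements the boundary handling just described, and the function name is ours.
import numpy as np

def wide_conv_feature(W, v, b, l):
    # W: (d, s) word embeddings; v: (l*d,) filter; returns s + l - 1 feature values
    d, s = W.shape
    pad = np.zeros((d, l - 1))
    padded = np.concatenate([pad, W, pad], axis=1)
    feats = []
    for i in range(s + l - 1):
        c_i = padded[:, i:i + l].T.reshape(-1)   # concatenation of l word vectors
        feats.append(np.tanh(v @ c_i + b))
    return np.array(feats)

d, s, l = 50, 12, 3
p = wide_conv_feature(np.random.randn(d, s), np.random.randn(l * d), 0.0, l)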
In Figure 1 , the first convolution layer deals with an input with $n=5$ feature maps. Its filters have sizes 3 and 5 respectively (i.e., $l=3, 5$ ), and each filter has $j=3$ kernels. This means this convolution layer can detect three kinds of features of phrases with length 3 and 5, respectively.
DCNN in BIBREF4 used one-dimensional convolution: each higher-order feature is produced from values of a single dimension in the lower-layer feature map. Even though that work proposed a folding operation to model the dependencies between adjacent dimensions, this type of dependency modeling is still limited. In contrast, the convolution in the present work is able to model dependencies across dimensions as well as across adjacent words, which obviates the need for a folding step. This change also means our model has substantially fewer parameters than the DCNN, since the output of each convolution layer is smaller by a factor of $d$.
Dynamic k-max Pooling. blunsom2014convolutional pool the $k$ most active features, in contrast to simple max (1-max) pooling BIBREF2. This property makes it possible to connect multiple convolution layers into a deep architecture that extracts high-level abstract features. In this work, we directly use it to extract features from variable-size feature maps. For a given feature map in layer $i$, dynamic k-max pooling extracts the $k_{i}$ top values from each dimension, and the $k_{top}$ top values in the top layer. We set
$$\nonumber k_{i}=\mathrm {max}(k_{top}, \lceil \frac{L-i}{L}s\rceil )$$ (Eq. 5)
where $i\in \lbrace 1,2,\ldots , L\rbrace $ is the order of the convolution layer, from bottom to top in Figure 1; $L$ is the total number of convolution layers; $k_{top}$ is a constant determined empirically; we set it to 4 as in BIBREF4.
As a result, the second convolution layer in Figure 1 has an input with two same-size feature maps, one resulting from filter size 3 and one from filter size 5. The values in the two feature maps are for phrases of different granularity. The motivation for this convolution layer is that a feature reflected by a short phrase may not be trustworthy while the longer phrase containing it is trustworthy, or that a long phrase has no trustworthy feature while its component short phrase is more reliable. This and even higher-order convolution layers can therefore make a trade-off between features of different granularity.
Hidden Layer. On the top of the final k-max pooling, we stack a fully connected layer to learn sentence representation with given dimension (e.g., $d$ ).
Logistic Regression Layer. Finally, sentence representation is forwarded into logistic regression layer for classification.
In brief, our MVCNN model learns from BIBREF4 to use dynamic k-max pooling to stack multiple convolution layers, and gets insight from BIBREF5 to investigate variable-size filters in a convolution layer. Compared to BIBREF4 , MVCNN has rich feature maps as input and as output of each convolution layer. Its convolution operation is not only more flexible to extract features of variable-range phrases, but also able to model dependency among all dimensions of representations. MVCNN extends the network in BIBREF5 by hierarchical convolution architecture and further exploration of multichannel and variable-size feature detectors.
Model Enhancements
This part introduces two training tricks that enhance the performance of MVCNN in practice.
Mutual-Learning of Embedding Versions. One observation in using multiple embedding versions is that they have different vocabulary coverage. An unknown word in an embedding version may be a known word in another version. Thus, there exists a proportion of words that can only be partially initialized by certain versions of word embeddings, which means these words lack the description from other versions.
To alleviate this problem, we design a mutual-learning regime to predict representations of unknown words for each embedding version by learning projections between versions. As a result, all embedding versions have the same vocabulary. This processing ensures that more words in each embedding version receive a good representation, and is expected to give most words occurring in a classification dataset more comprehensive initialization (as opposed to just being randomly initialized).
Let $c$ be the number of embedding versions in consideration, $V_1, V_2, \ldots , V_i, \ldots , V_c$ their vocabularies, $V^*=\cup ^c_{i=1} V_i$ their union, and $V_i^-=V^*\backslash V_i$ ( $i=1, \ldots , c$ ) the vocabulary of unknown words for embedding version $i$ . Our goal is to learn embeddings for the words in $V_i^-$ by knowledge from the other $c-1$ embedding versions.
We use the overlapping vocabulary between $V_i$ and $V_j$ , denoted as $V_{ij}$ , as training set, formalizing a projection $f_{ij}$ from space $V_i$ to space $V_j$ ( $i\ne j; i, j\in \lbrace 1,2,\ldots ,c\rbrace $ ) as follows:
$$\mathbf {\hat{w}}_j=\mathbf {M}_{ij}\mathbf {w}_i$$ (Eq. 6)
where $\mathbf {M}_{ij}\in \mathbb {R}^{d\times d}$, $\mathbf {w}_i\in \mathbb {R}^d$ denotes the representation of word $w$ in space $V_i$ and $\mathbf {\hat{w}}_j$ is the projected (or learned) representation of word $w$ in space $V_j$. The squared error between $\mathbf {w}_j$ and $\mathbf {\hat{w}}_j$ is the training loss to minimize. We write $\mathbf {\hat{w}}_j=f_{ij}(\mathbf {w}_i)$ as shorthand for Eq. 6. In total, $c(c-1)/2$ projections $f_{ij}$ are trained, each on the vocabulary intersection $V_{ij}$.
Let $w$ be a word that is unknown in $V_i$ but known in versions $V_1, V_2, \ldots , V_k$. To compute a representation of $w$ in $V_i$, we take the $k$ projections $f_{1i}(\mathbf {w}_1), f_{2i}(\mathbf {w}_2), \ldots , f_{ki}(\mathbf {w}_k)$ from the spaces of $V_1, V_2, \ldots , V_k$ into $V_i$ and treat their element-wise average as the representation of $w$ in $V_i$. The intuition is that each projection produces an approximation of the true representation of $w$ in $V_i$, and the average of several such approximations should be closer to that true representation than any single one.
As discussed in Section "Model Description" , we found that for the binary sentiment classification dataset, many words were unknown in at least one embedding version. But of these words, a total of 5022 words did have coverage in another embedding version and so will benefit from mutual-learning. In the experiments, we will show that this is a very effective method to learn representations for unknown words that increases system performance if learned representations are used for initialization.
Pretraining. Sentence classification systems are usually implemented as supervised training regimes where training loss is between true label distribution and predicted label distribution. In this work, we use pretraining on the unlabeled data of each task and show that it can increase the performance of classification systems.
Figure 1 shows our pretraining setup. The “sentence representation” – the output of “Fully connected” hidden layer – is used to predict the component words (“on” in the figure) in the sentence (instead of predicting the sentence label Y/N as in supervised learning). Concretely, the sentence representation is averaged with representations of some surrounding words (“the”, “cat”, “sat”, “the”, “mat”, “,” in the figure) to predict the middle word (“on”).
Given sentence representation $\mathbf {s}\in \mathbb {R}^d$ and the initialized representations of its $2t$ context words ($t$ left words and $t$ right words): $\mathbf {w}_{i-t}$, $\ldots $, $\mathbf {w}_{i-1}$, $\mathbf {w}_{i+1}$, $\ldots $, $\mathbf {w}_{i+t}$, we average the total $2t+1$ vectors element-wise, depicted as the “Average” operation in Figure 1. Then, this resulting vector is treated as a predicted representation of the middle word and is used to find the true middle word by means of noise-contrastive estimation (NCE) BIBREF18. For each true example, 10 noise words are sampled.
Note that in pretraining, there are three places where each word needs initialization. (i) Each word in the sentence is initialized in the “Multichannel input” layer to the whole network. (ii) Each context word is initialized as input to the average layer (“Average” in the figure). (iii) Each target word is initialized as the output of the “NCE” layer (“on” in the figure). In this work, we use multichannel initialization for case (i) and random initialization for cases (ii) and (iii). Only fine-tuned multichannel representations (case (i)) are kept for subsequent supervised training.
The rationale for this pretraining is similar to auto-encoder: for an object composed of smaller-granular elements, the representations of the whole object and its components can learn each other. The CNN architecture learns sentence features layer by layer, then those features are justified by all constituent words.
During pretraining, all the model parameters, including the multichannel input, convolution parameters and fully connected layer, will be updated until they are mature enough to extract sentence features. Subsequently, the same sets of parameters will be fine-tuned for supervised classification tasks.
In sum, this pretraining is designed to produce good initial values for both model parameters and word embeddings. It is especially helpful for pretraining the embeddings of unknown words.
Experiments
We test the network on four classification tasks. We begin by specifying aspects of the implementation and the training of the network. We then report the results of the experiments.
Hyperparameters and Training
In each of the experiments, the top of the network is a logistic regression that predicts the probability distribution over classes given the input sentence. The network is trained to minimize the cross-entropy of the predicted and true distributions; the objective includes an $L_2$ regularization term over the parameters. The set of parameters comprises the word embeddings, all filter weights and the weights in fully connected layers. A dropout operation BIBREF19 is applied before the logistic regression layer. The network is trained by back-propagation in mini-batches and the gradient-based optimization is performed using the AdaGrad update rule BIBREF20.
In all data sets, the initial learning rate is 0.01, dropout probability is 0.8, $L_2$ weight is $5\cdot 10^{-3}$ , batch size is 50. In each convolution layer, filter sizes are {3, 5, 7, 9} and each filter has five kernels (independent of filter size).
Datasets and Experimental Setup
Stanford Sentiment Treebank BIBREF21. This small-scale dataset includes two tasks predicting the sentiment of movie reviews. The output variable is binary in one experiment and can have five possible outcomes in the other: {negative, somewhat negative, neutral, somewhat positive, positive}. In the binary case, we use the given split of 6920 training, 872 development and 1821 test sentences. Likewise, in the fine-grained case, we use the standard 8544/1101/2210 split. socher2013recursive used the Stanford Parser BIBREF22 to parse each sentence into subphrases. The subphrases were then labeled by human annotators in the same way as the sentences were labeled. Labeled phrases that occur as subparts of the training sentences are treated as independent training instances, as in BIBREF23, BIBREF4.
Sentiment140 BIBREF24 . This is a large-scale dataset of tweets about sentiment classification, where a tweet is automatically labeled as positive or negative depending on the emoticon that occurs in it. The training set consists of 1.6 million tweets with emoticon-based labels and the test set of about 400 hand-annotated tweets. We preprocess the tweets minimally as follows. 1) The equivalence class symbol “url” (resp. “username”) replaces all URLs (resp. all words that start with the @ symbol, e.g., @thomasss). 2) A sequence of $k>2$ repetitions of a letter $c$ (e.g., “cooooooool”) is replaced by two occurrences of $c$ (e.g., “cool”). 3) All tokens are lowercased.
Subj. Subjectivity classification dataset released by BIBREF25 has 5000 subjective sentences and 5000 objective sentences. We report the result of 10-fold cross validation as baseline systems did.
In this work, we use five embedding versions, as shown in Table 1, to initialize words. Four of them are directly downloaded from the Internet. (i) HLBL. Hierarchical log-bilinear model presented by mnih2009scalable and released by turian2010word; size: 246,122 word embeddings; training corpus: RCV1 corpus, one year of Reuters English newswire from August 1996 to August 1997. (ii) Huang. huang2012improving incorporated global context to deal with challenges raised by words with multiple meanings; size: 100,232 word embeddings; training corpus: April 2010 snapshot of Wikipedia. (iii) GloVe. Size: 1,193,514 word embeddings; training corpus: a Twitter corpus of 2B tweets with 27B tokens. (iv) SENNA. Size: 130,000 word embeddings; training corpus: Wikipedia. Note that we use their 50-dimensional embeddings. (v) Word2Vec. It has no 50-dimensional embeddings available online. We use the released code to train skip-gram on the English Gigaword Corpus BIBREF26 with this setup: window size 5, negative sampling, sampling rate $10^{-3}$, threads 12. It is worth emphasizing that the above embedding sets are derived from different corpora with different algorithms. This diversity is the very property that we want to exploit to improve system performance.
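The Word2Vec setup can be reproduced roughly with gensim; the snippet below is a stand-in for the released C code (assuming gensim >= 4.0), with a placeholder corpus in place of tokenized Gigaword sentences and gensim's default negative-sample count, which the paper does not specify.
from gensim.models import Word2Vec

corpus = [["a", "tokenized", "sentence"], ["another", "one"]]  # placeholder for Gigaword
model = Word2Vec(sentences=corpus, vector_size=50, window=5, sg=1,
                 negative=5, sample=1e-3, workers=12, min_count=1)
model.wv.save_word2vec_format("gigaword_skipgram_50d.txt")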
Table 2 shows the number of unknown words in each task when using corresponding embedding version to initialize (rows “HLBL”, “Huang”, “Glove”, “SENNA”, “W2V”) and the number of words fully initialized by five embedding versions (“Full hit” row), the number of words partially initialized (“Partial hit” row) and the number of words that cannot be initialized by any of the embedding versions (“No hit” row).
About 30% of words in each task have partially initialized embeddings and our mutual-learning is able to initialize the missing embeddings through projections. Pretraining is expected to learn good representations for all words, but pretraining is especially important for words without initialization (“no hit”); a particularly clear example for this is the Senti140 task: 236,484 of 387,877 words or 61% are in the “no hit” category.
Table 3 compares results on test of MVCNN and its variants with other baselines in the four sentence classification tasks. Row 34, “MVCNN (overall)”, shows performance of the best configuration of MVCNN, optimized on dev. This version uses five versions of word embeddings, four filter sizes (3, 5, 7, 9), both mutual-learning and pretraining, three convolution layers for Senti140 task and two convolution layers for the other tasks. Overall, our system gets the best results, beating all baselines.
The table contains five blocks from top to bottom. Each block investigates one specific configurational aspect of the system. All results in the five blocks are with respect to row 34, “MVCNN (overall)”; e.g., row 19 shows what happens when HLBL is removed from row 34, row 28 shows what happens when mutual learning is removed from row 34 etc.
The block “baselines” (1–18) lists some systems representative of previous work on the corresponding datasets, including the state-of-the-art systems (marked as italic). The block “versions” (19–23) shows the results of our system when one of the embedding versions was not used during training. We want to explore to what extent different embedding versions contribute to performance. The block “filters” (24–27) gives the results when an individual filter width is discarded. It also tells us how much a filter of a specific size contributes. The block “tricks” (28–29) shows the system performance when no mutual-learning or no pretraining is used. The block “layers” (30–33) demonstrates how the system performs when it has different numbers of convolution layers.
From the “layers” block, we can see that our system performs best with two layers of convolution for the Stanford Sentiment Treebank and Subjectivity Classification tasks (row 31), but with three layers of convolution for Sentiment140 (row 32). This is probably because Sentiment140 is a much larger dataset; in such a case deeper neural networks are beneficial.
The block “tricks” demonstrates the effect of mutual-learning and pretraining. Apparently, pretraining has a bigger impact on performance than mutual-learning. We speculate that it is because pretraining can influence more words and all learned word embeddings are tuned on the dataset after pretraining.
The block “filters” indicates the contribution of each filter size. The system benefits from filters of each size. Sizes 5 and 7 are most important for high performance, especially 7 (rows 25 and 26).
In the block “versions”, we see that each embedding version is crucial for good performance: performance drops in every single case. Though it is not easy to compare different embedding versions fairly in NLP tasks, especially when those embeddings were trained on different corpora of different sizes using different algorithms, our results are potentially instructive for researchers making decisions about which embeddings to use for their own tasks.
Conclusion
This work presented MVCNN, a novel CNN architecture for sentence classification. It combines multichannel initialization – diverse versions of pretrained word embeddings are used – and variable-size filters – features of multigranular phrases are extracted with variable-size convolution filters. We demonstrated that multichannel initialization and variable-size filters enhance system performance on sentiment classification and subjectivity classification tasks.
Future Work
As pointed out by the reviewers, the success of the multichannel approach is likely due to a combination of several quite different effects.
First, there is the effect of the embedding learning algorithm. These algorithms differ in many aspects, including in sensitivity to word order (e.g., SENNA: yes, word2vec: no), in objective function and in their treatment of ambiguity (explicitly modeled only by huang2012improving).
Second, there is the effect of the corpus. We would expect the size and genre of the corpus to have a big effect even though we did not analyze this effect in this paper.
Third, complementarity of word embeddings is likely to be more useful for some tasks than for others. Sentiment is a good application for complementary word embeddings because solving this task requires drawing on heterogeneous sources of information, including syntax, semantics and genre as well as the core polarity of a word. Other tasks like part of speech (POS) tagging may benefit less from heterogeneity since the benefit of embeddings in POS often comes down to making a correct choice between two alternatives – a single embedding version may be sufficient for this.
We plan to pursue these questions in future work.
Acknowledgments
Thanks to CIS members and anonymous reviewers for constructive comments. This work was supported by Baidu (through a Baidu scholarship awarded to Wenpeng Yin) and by Deutsche Forschungsgemeinschaft (grant DFG SCHU 2246/8-2, SPP 1335). | 0.8 points on Binary; 0.7 points on Fine-Grained; 0.6 points on Senti140; 0.7 points on Subj |
9cba2ee1f8e1560e48b3099d0d8cf6c854ddea2e | 9cba2ee1f8e1560e48b3099d0d8cf6c854ddea2e_0 | Q: What are the effects of extracting features of multigranular phrases?
Text: Introduction
Different sentence classification tasks are crucial for many Natural Language Processing (NLP) applications. Natural language sentences have complicated structures, both sequential and hierarchical, that are essential for understanding them. In addition, how to decode and compose the features of component units, including single words and variable-size phrases, is central to the sentence classification problem.
In recent years, deep learning models have achieved remarkable results in computer vision BIBREF0 , speech recognition BIBREF1 and NLP BIBREF2 . A problem largely specific to NLP is how to detect features of linguistic units, how to conduct composition over variable-size sequences and how to use them for NLP tasks BIBREF3 , BIBREF4 , BIBREF5 . socher2011dynamic proposed recursive neural networks to form phrases based on parsing trees. This approach depends on the availability of a well performing parser; for many languages and domains, especially noisy domains, reliable parsing is difficult. Hence, convolution neural networks (CNN) are getting increasing attention, for they are able to model long-range dependencies in sentences via hierarchical structures BIBREF6 , BIBREF5 , BIBREF7 . Current CNN systems usually implement a convolution layer with fixed-size filters (i.e., feature detectors), in which the concrete filter size is a hyperparameter. They essentially split a sentence into multiple sub-sentences by a sliding window, then determine the sentence label by using the dominant label across all sub-sentences. The underlying assumption is that the sub-sentence with that granularity is potentially good enough to represent the whole sentence. However, it is hard to find the granularity of a “good sub-sentence” that works well across sentences. This motivates us to implement variable-size filters in a convolution layer in order to extract features of multigranular phrases.
Breakthroughs of deep learning in NLP are also based on learning distributed word representations – also called “word embeddings” – by neural language models BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Word embeddings are derived by projecting words from a sparse, 1-of- $V$ encoding ( $V$ : vocabulary size) onto a lower dimensional and dense vector space via hidden layers and can be interpreted as feature extractors that encode semantic and syntactic features of words.
Many papers study the comparative performance of different versions of word embeddings, usually learned by different neural network (NN) architectures. For example, chen2013expressive compared HLBL BIBREF9 , SENNA BIBREF2 , Turian BIBREF13 and Huang BIBREF14 , showing great variance in quality and characteristics of the semantics captured by the tested embedding versions. hill2014not showed that embeddings learned by neural machine translation models outperform three representative monolingual embedding versions: skip-gram BIBREF15 , GloVe BIBREF16 and C&W BIBREF3 in some cases. These prior studies motivate us to explore combining multiple versions of word embeddings, treating each of them as a distinct description of words. Our expectation is that the combination of these embedding versions, trained by different NNs on different corpora, should contain more information than each version individually. We want to leverage this diversity of different embedding versions to extract higher quality sentence features and thereby improve sentence classification performance.
The letters “M” and “V” in the name “MVCNN” of our architecture denote the multichannel and variable-size convolution filters, respectively. “Multichannel” employs language from computer vision where a color image has red, green and blue channels. Here, a channel is a description by an embedding version.
For many sentence classification tasks, only relatively small training sets are available. MVCNN has a large number of parameters, so overfitting is a danger when it is trained on small training sets. We address this problem by pretraining MVCNN on unlabeled data. These pretrained weights can then be fine-tuned for the specific classification task.
In sum, we attribute the success of MVCNN to: (i) designing variable-size convolution filters to extract variable-range features of sentences and (ii) exploring the combination of multiple public embedding versions to initialize words in sentences. We also employ two “tricks” to further enhance system performance: mutual learning and pretraining.
In remaining parts, Section "Related Work" presents related work. Section "Model Description" gives details of our classification model. Section "Model Enhancements" introduces two tricks that enhance system performance: mutual-learning and pretraining. Section "Experiments" reports experimental results. Section "Conclusion" concludes this work.
Related Work
Much prior work has exploited deep neural networks to model sentences.
blacoe2012comparison represented a sentence by element-wise addition, multiplication, or recursive autoencoder over embeddings of component single words. yin2014exploration extended this approach by composing on words and phrases instead of only single words.
collobert2008unified and yu2014deep used one layer of convolution over phrases detected by a sliding window on a target sentence, then used max- or average-pooling to form a sentence representation.
blunsom2014convolutional stacked multiple layers of one-dimensional convolution by dynamic k-max pooling to model sentences. We also adopt dynamic k-max pooling while our convolution layer has variable-size filters.
kimEMNLP2014 also studied multichannel representation and variable-size filters. Differently, their multichannel relies on a single version of pretrained embeddings (i.e., pretrained Word2Vec embeddings) with two copies: one is kept stable and the other one is fine-tuned by backpropagation. We develop this insight by incorporating diverse embedding versions. Additionally, their idea of variable-size filters is further developed.
le2014distributed initialized the representation of a sentence as a parameter vector, treating it as a global feature and combining this vector with the representations of context words to do word prediction. Finally, this fine-tuned vector is used as representation of this sentence. Apparently, this method can only produce generic sentence representations which encode no task-specific features.
Our work is also inspired by studies that compared the performance of different word embedding versions or investigated the combination of them. For example, turian2010word compared Brown clusters, C&W embeddings and HLBL embeddings in NER and chunking tasks. They found that Brown clusters and word embeddings both can improve the accuracy of supervised NLP systems; and demonstrated empirically that combining different word representations is beneficial. luo2014pre adapted CBOW BIBREF12 to train word embeddings on different datasets: free text documents from Wikipedia, search click-through data and user query data, showing that combining them gets stronger results than using individual word embeddings in web search ranking and word similarity task. However, these two papers either learned word representations on the same corpus BIBREF13 or enhanced the embedding quality by extending training corpora, not learning algorithms BIBREF17 . In our work, there is no limit to the type of embedding versions we can use and they leverage not only the diversity of corpora, but also the different principles of learning algorithms.
Model Description
We now describe the architecture of our model MVCNN, illustrated in Figure 1 .
Multichannel Input. The input of MVCNN includes multichannel feature maps of a considered sentence, each of which is a matrix initialized by a different embedding version. Let $s$ be sentence length, $d$ the dimension of word embeddings and $c$ the total number of different embedding versions (i.e., channels). Hence, the whole initialized input is a three-dimensional array of size $c\times d\times s$. Figure 1 depicts a sentence with $s=12$ words. Each word is initialized by $c=5$ embeddings, each coming from a different channel. In implementation, sentences in a mini-batch are padded to the same length, and unknown words for the corresponding channel are randomly initialized or can acquire a good initialization from the mutual-learning phase described in the next section.
Multichannel initialization brings two advantages: 1) a frequent word can have $c$ representations in the beginning (instead of only one), which means it has more available information to leverage; 2) a rare word missed in some embedding versions can be “made up” by others (we call it “partially known word”). Therefore, this kind of initialization is able to make use of information about partially known words, without having to employ full random initialization or removal of unknown words. The vocabulary of the binary sentiment prediction task described in experimental part contains 5232 words unknown in HLBL embeddings, 4273 in Huang embeddings, 3299 in GloVe embeddings, 4136 in SENNA embeddings and 2257 in Word2Vec embeddings. But only 1824 words find no embedding from any channel! Hence, multichannel initialization can considerably reduce the number of unknown words.
Convolution Layer (Conv). For convenience, we first introduce how this work uses a convolution layer on one input feature map to generate one higher-level feature map. Given a sentence of length $s$: $w_1, w_2, \ldots , w_s$; $\mathbf {w}_i\in \mathbb {R}^{d}$ denotes the embedding of word $w_i$; a convolution layer uses sliding filters to extract local features of that sentence. The filter width $l$ is a parameter. We first concatenate the initialized embeddings of $l$ consecutive words ($\mathbf {w}_{i-l+1}, \ldots , \mathbf {w}_i$) as $\mathbf {c}_i\in \mathbb {R}^{ld}$ $(1\le i <s+l)$, then generate the feature value of this phrase as $\textbf {p}_i$ (the whole vector $\mathbf {p}$ contains all the local features) using a tanh activation function and a linear projection vector $\mathbf {v}$ as:
$$\mathbf {p}_i=\mathrm {tanh}(\mathbf {v}^\mathrm {T}\mathbf {c}_i+b)$$ (Eq. 2)
More generally, convolution operation can deal with multiple input feature maps and can be stacked to yield feature maps of increasing layers. In each layer, there are usually multiple filters of the same size, but with different weights BIBREF4 . We refer to a filter with a specific set of weights as a kernel. The goal is often to train a model in which different kernels detect different kinds of features of a local region. However, this traditional way can not detect the features of regions of different granularity. Hence we keep the property of multi-kernel while extending it to variable-size in the same layer.
As in CNN for object recognition, to increase the number of kernels of a certain layer, multiple feature maps may be computed in parallel at the same layer. Further, to increase the size diversity of kernels in the same layer, more feature maps containing various-range dependency features can be learned. We denote a feature map of the $i^{\mathrm {th}}$ layer by $\mathbf {F}_i$ , and assume totally $n$ feature maps exist in layer $i-1$ : $\mathbf {F}_{i-1}^1, \ldots , \mathbf {F}_{i-1}^n$ . Considering a specific filter size $l$ in layer $i$ , each feature map $\mathbf {F}_{i,l}^j$ is computed by convolving a distinct set of filters of size $l$ , arranged in a matrix $\mathbf {V}_{i,l}^{j,k}$ , with each feature map $\mathbf {F}^k_{i-1}$ and summing the results:
$$\mathbf {F}_{i,l}^j=\sum ^n_{k=1}\mathbf {V}_{i,l}^{j,k}*\mathbf {F}^k_{i-1}$$ (Eq. 3)
where $*$ indicates the convolution operation and $j$ is the index of a feature map in layer $i$ . The weights in $\mathbf {V}$ form a rank 4 tensor.
Note that we use wide convolution in this work: it means word representations $\mathbf {w}_g$ for $g\le 0$ or $g\ge s+1$ are actually zero embeddings. Wide convolution enables that each word can be detected by all filter weights in $\mathbf {V}$ .
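A hedged NumPy sketch of Eq. 3 with wide convolution (ours, not the paper's code; we assume each filter spans the full height of its input feature map, and we omit any nonlinearity):

```python
import numpy as np

def wide_conv_layer(F_prev, V):
    """Sketch of Eq. 3: V[j][k] is the filter of shape (h, l) that output map j
    applies to input map F_prev[k]; the n results are summed into output map j.
    Wide convolution: inputs are zero-padded with l-1 columns on each side,
    so every output map has width w + l - 1.
    """
    h, w = F_prev[0].shape
    out = []
    for kernels in V:                                   # one entry per output map j
        l = kernels[0].shape[1]
        acc = np.zeros(w + l - 1)
        for F_k, V_jk in zip(F_prev, kernels):          # sum over input maps k
            padded = np.hstack([np.zeros((h, l - 1)), F_k, np.zeros((h, l - 1))])
            for pos in range(w + l - 1):
                acc[pos] += np.sum(V_jk * padded[:, pos:pos + l])
        out.append(acc)
    return out

# 5 input maps (d=50, s=12); 3 kernels of width 3 and 3 kernels of width 5
F_prev = [np.random.randn(50, 12) for _ in range(5)]
V3 = [[np.random.randn(50, 3) for _ in range(5)] for _ in range(3)]
V5 = [[np.random.randn(50, 5) for _ in range(5)] for _ in range(3)]
maps = wide_conv_layer(F_prev, V3) + wide_conv_layer(F_prev, V5)
print(len(maps), maps[0].shape, maps[3].shape)          # 6 (14,) (16,)
```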
In Figure 1 , the first convolution layer deals with an input with $n=5$ feature maps. Its filters have sizes 3 and 5 respectively (i.e., $l=3, 5$ ), and each filter has $j=3$ kernels. This means this convolution layer can detect three kinds of features of phrases with length 3 and 5, respectively.
The DCNN in BIBREF4 used one-dimensional convolution: each higher-order feature is produced from values of a single dimension in the lower-layer feature map. Even though that work proposed a folding operation to model the dependencies between adjacent dimensions, this type of dependency modeling is still limited. In contrast, the convolution in the present work is able to model dependencies across dimensions as well as across adjacent words, which obviates the need for a folding step. This change also means our model has substantially fewer parameters than the DCNN, since the output of each convolution layer is smaller by a factor of $d$ .
Dynamic k-max Pooling. blunsom2014convolutional pool the $k$ most active features compared with simple max (1-max) pooling BIBREF2 . This property enables it to connect multiple convolution layers to form a deep architecture to extract high-level abstract features. In this work, we directly use it to extract features for variable-size feature maps. For a given feature map in layer $i$ , dynamic k-max pooling extracts $k_{i}$ top values from each dimension and $k_{top}$ top values in the top layer. We set
$$\nonumber k_{i}=\mathrm {max}(k_{top}, \lceil \frac{L-i}{L}s\rceil )$$ (Eq. 5)
where $i\in \lbrace 1,2,\ldots , L\rbrace $ is the order of the convolution layer from bottom to top in Figure 1 ; $L$ is the total number of convolution layers; $k_{top}$ is a constant determined empirically; we set it to 4 as in BIBREF4 .
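The pooling rule of Eq. 5 can be sketched as follows (ours); the selected values keep their original left-to-right order.

```python
import numpy as np

def dynamic_kmax_pool(feature_map, layer_i, num_layers, s, k_top=4):
    """Eq. 5 sketch: keep the k_i largest values of each dimension (row),
    with k_i = max(k_top, ceil((L - i) / L * s)), preserving their order.
    feature_map: array of shape (num_dims, width).
    """
    k_i = max(k_top, int(np.ceil((num_layers - layer_i) / num_layers * s)))
    k_i = min(k_i, feature_map.shape[1])          # cannot keep more than exists
    pooled = []
    for row in feature_map:
        idx = np.argsort(row)[-k_i:]              # indices of the k_i largest values
        pooled.append(row[np.sort(idx)])          # restore left-to-right order
    return np.array(pooled)

fm = np.random.randn(3, 14)                       # 3 dimensions, width 14
print(dynamic_kmax_pool(fm, layer_i=1, num_layers=2, s=12).shape)  # (3, 6)
```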
As a result, the second convolution layer in Figure 1 has an input with two same-size feature maps, one resulting from filter size 3 and one from filter size 5. The values in the two feature maps are for phrases of different granularity. The motivation for this convolution layer is that a feature reflected by a short phrase may not be trustworthy while the longer phrase containing it is trustworthy, or the long phrase may have no trustworthy feature while its component short phrase is more reliable. This and even higher-order convolution layers can therefore make a trade-off between features of different granularity.
Hidden Layer. On the top of the final k-max pooling, we stack a fully connected layer to learn sentence representation with given dimension (e.g., $d$ ).
Logistic Regression Layer. Finally, sentence representation is forwarded into logistic regression layer for classification.
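A sketch of the top of the network (ours; the tanh nonlinearity in the hidden layer and the softmax form of the logistic regression layer are assumptions):

```python
import numpy as np

def top_layers(pooled_features, W_h, b_h, W_o, b_o):
    """Fully connected layer + logistic regression: the flattened k-max-pooled
    features are mapped to a d-dimensional sentence representation, which is
    then mapped to class probabilities.
    """
    sent = np.tanh(W_h @ pooled_features + b_h)      # sentence representation
    scores = W_o @ sent + b_o
    exp = np.exp(scores - scores.max())
    return sent, exp / exp.sum()

feat = np.random.randn(24)                           # e.g. flattened pooled maps
sent, probs = top_layers(feat, np.random.randn(50, 24), np.zeros(50),
                         np.random.randn(2, 50), np.zeros(2))
print(sent.shape, probs.sum())                       # (50,) ~1.0
```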
In brief, our MVCNN model learns from BIBREF4 to use dynamic k-max pooling to stack multiple convolution layers, and gets insight from BIBREF5 to investigate variable-size filters in a convolution layer. Compared to BIBREF4 , MVCNN has rich feature maps as input and as output of each convolution layer. Its convolution operation is not only more flexible to extract features of variable-range phrases, but also able to model dependency among all dimensions of representations. MVCNN extends the network in BIBREF5 by hierarchical convolution architecture and further exploration of multichannel and variable-size feature detectors.
Model Enhancements
This part introduces two training tricks that enhance the performance of MVCNN in practice.
Mutual-Learning of Embedding Versions. One observation in using multiple embedding versions is that they have different vocabulary coverage. An unknown word in an embedding version may be a known word in another version. Thus, there exists a proportion of words that can only be partially initialized by certain versions of word embeddings, which means these words lack the description from other versions.
To alleviate this problem, we design a mutual-learning regime to predict representations of unknown words for each embedding version by learning projections between versions. As a result, all embedding versions have the same vocabulary. This processing ensures that more words in each embedding version receive a good representation, and is expected to give most words occurring in a classification dataset more comprehensive initialization (as opposed to just being randomly initialized).
Let $c$ be the number of embedding versions in consideration, $V_1, V_2, \ldots , V_i, \ldots , V_c$ their vocabularies, $V^*=\cup ^c_{i=1} V_i$ their union, and $V_i^-=V^*\backslash V_i$ ( $i=1, \ldots , c$ ) the vocabulary of unknown words for embedding version $i$ . Our goal is to learn embeddings for the words in $V_i^-$ by knowledge from the other $c-1$ embedding versions.
We use the overlapping vocabulary between $V_i$ and $V_j$ , denoted as $V_{ij}$ , as training set, formalizing a projection $f_{ij}$ from space $V_i$ to space $V_j$ ( $i\ne j; i, j\in \lbrace 1,2,\ldots ,c\rbrace $ ) as follows:
$$\mathbf {\hat{w}}_j=\mathbf {M}_{ij}\mathbf {w}_i$$ (Eq. 6)
where $\mathbf {M}_{ij}\in \mathbb {R}^{d\times d}$ , $\mathbf {w}_i\in \mathbb {R}^d$ denotes the representation of word $w$ in space $V_i$ and $\mathbf {\hat{w}}_j$ is the projected (or learned) representation of word $w$ in space $V_j$ . Squared error between $\mathbf {w}_j$ and $\mathbf {\hat{w}}_j$ is the training loss to minimize. We use $\mathbf {\hat{w}}_j=f_{ij}(\mathbf {w}_i)$ to reformat Equation 6. In total, $c(c-1)/2$ projections $f_{ij}$ are trained, each on the corresponding vocabulary intersection $V_{ij}$ . Let $w$ be a word that is unknown in $V_i$ but known in $V_1, V_2, \ldots , V_k$ ( $k$ of the other versions). To compute a representation of $w$ in $V_i$ , we first compute the $k$ projections $f_{1i}(\mathbf {w}_1), f_{2i}(\mathbf {w}_2), \ldots , f_{ki}(\mathbf {w}_k)$ from the source spaces $V_1, V_2, \ldots , V_k$ into $V_i$ . Then the element-wise average of $f_{1i}(\mathbf {w}_1), f_{2i}(\mathbf {w}_2), \ldots , f_{ki}(\mathbf {w}_k)$ is treated as the representation of $w$ in $V_i$ ; averaging over several projections is expected to give a more reliable estimate of the representation of $w$ in $V_i$ than any single projection.
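A minimal sketch of mutual-learning (ours): for clarity it fits one least-squares matrix per ordered pair of versions, whereas the paper trains $c(c-1)/2$ projections by minimizing the squared error; the toy vocabularies below are assumptions.

```python
import numpy as np

def learn_projection(E_src, E_tgt, shared_words):
    """Fit a d x d matrix M with M @ w_src ~ w_tgt over the shared vocabulary
    (a least-squares stand-in for minimizing the squared error of Eq. 6)."""
    X = np.stack([E_src[w] for w in shared_words])         # (n, d) source vectors
    Y = np.stack([E_tgt[w] for w in shared_words])         # (n, d) target vectors
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)              # solves X @ A ~ Y
    return A.T                                             # then (A.T) @ x ~ y

def fill_unknown_words(tables):
    """For every version i and every word unknown in it but known elsewhere,
    average the projections from all versions that do know the word."""
    vocab_union = set().union(*[set(t) for t in tables])
    c = len(tables)
    proj = {(i, j): learn_projection(tables[i], tables[j],
                                     sorted(set(tables[i]) & set(tables[j])))
            for i in range(c) for j in range(c) if i != j}
    for i, table in enumerate(tables):
        for w in vocab_union - set(table):
            sources = [proj[(j, i)] @ tables[j][w]
                       for j in range(c) if j != i and w in tables[j]]
            if sources:
                table[w] = np.mean(sources, axis=0)
    return tables

# toy usage: "dog" is unknown in version A, "mat" is unknown in version B
rng = np.random.RandomState(0)
base = {w: rng.randn(4) for w in "the cat sat on mat dog".split()}
tab_a = {w: 2.0 * v for w, v in base.items() if w != "dog"}
tab_b = {w: v + 1.0 for w, v in base.items() if w != "mat"}
tab_a, tab_b = fill_unknown_words([tab_a, tab_b])
print(sorted(tab_a) == sorted(tab_b))                      # True: vocabularies now match
```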
As discussed in Section "Model Description" , we found that for the binary sentiment classification dataset, many words were unknown in at least one embedding version. But of these words, a total of 5022 words did have coverage in another embedding version and so will benefit from mutual-learning. In the experiments, we will show that this is a very effective method to learn representations for unknown words that increases system performance if learned representations are used for initialization.
Pretraining. Sentence classification systems are usually implemented as supervised training regimes where training loss is between true label distribution and predicted label distribution. In this work, we use pretraining on the unlabeled data of each task and show that it can increase the performance of classification systems.
Figure 1 shows our pretraining setup. The “sentence representation” – the output of “Fully connected” hidden layer – is used to predict the component words (“on” in the figure) in the sentence (instead of predicting the sentence label Y/N as in supervised learning). Concretely, the sentence representation is averaged with representations of some surrounding words (“the”, “cat”, “sat”, “the”, “mat”, “,” in the figure) to predict the middle word (“on”).
Given sentence representation $\mathbf {s}\in \mathbb {R}^d$ and initialized representations of $2t$ context words ( $t$ left words and $t$ right words): $\mathbf {w}_{i-t}$ , $\ldots $ , $\mathbf {w}_{i-1}$ , $\mathbf {w}_{i+1}$ , $\ldots $ , $\mathbf {w}_{i+t}$ ( $\mathbf {w}_j\in \mathbb {R}^d$ ), we average the total $2t+1$ vectors element-wise, depicted as “Average” operation in Figure 1 . Then, this resulting vector is treated as a predicted representation of the middle word and is used to find the true middle word by means of noise-contrastive estimation (NCE) BIBREF18 . For each true example, 10 noise words are sampled.
Note that in pretraining, there are three places where each word needs initialization. (i) Each word in the sentence is initialized in the “Multichannel input” layer to the whole network. (ii) Each context word is initialized as input to the average layer (“Average” in the figure). (iii) Each target word is initialized as the output of the “NCE” layer (“on” in the figure). In this work, we use multichannel initialization for case (i) and random initialization for cases (ii) and (iii). Only fine-tuned multichannel representations (case (i)) are kept for subsequent supervised training.
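One pretraining example can be sketched as follows (ours); it substitutes a simplified negative-sampling loss for full NCE, which would additionally weight scores by the noise distribution.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_step(sent_vec, context_vecs, target_vec, noise_vecs):
    """Average the sentence representation with the 2t context-word vectors
    ("Average" in Figure 1), then score the true middle word against sampled
    noise words (10 per true example in the paper)."""
    pred = np.mean(np.vstack([sent_vec] + context_vecs), axis=0)
    loss = -np.log(sigmoid(pred @ target_vec))               # pull the true word closer
    for nv in noise_vecs:
        loss -= np.log(sigmoid(-pred @ nv))                  # push noise words away
    return loss

d, t = 50, 3
loss = pretrain_step(np.random.randn(d),
                     [np.random.randn(d) for _ in range(2 * t)],
                     np.random.randn(d),
                     [np.random.randn(d) for _ in range(10)])
print(float(loss))
```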
The rationale for this pretraining is similar to auto-encoder: for an object composed of smaller-granular elements, the representations of the whole object and its components can learn each other. The CNN architecture learns sentence features layer by layer, then those features are justified by all constituent words.
During pretraining, all the model parameters, including the multichannel input, the convolution parameters and the fully connected layer, are updated until they are mature enough to extract the sentence features. Subsequently, the same sets of parameters are fine-tuned for supervised classification tasks.
In sum, this pretraining is designed to produce good initial values for both model parameters and word embeddings. It is especially helpful for pretraining the embeddings of unknown words.
Experiments
We test the network on four classification tasks. We begin by specifying aspects of the implementation and the training of the network. We then report the results of the experiments.
Hyperparameters and Training
In each of the experiments, the top of the network is a logistic regression that predicts the probability distribution over classes given the input sentence. The network is trained to minimize the cross-entropy of predicted and true distributions; the objective includes an $L_2$ regularization term over the parameters. The set of parameters comprises the word embeddings, all filter weights and the weights in the fully connected layers. A dropout operation BIBREF19 is applied before the logistic regression layer. The network is trained by back-propagation in mini-batches and the gradient-based optimization is performed using the AdaGrad update rule BIBREF20 .
In all data sets, the initial learning rate is 0.01, dropout probability is 0.8, $L_2$ weight is $5\cdot 10^{-3}$ , batch size is 50. In each convolution layer, filter sizes are {3, 5, 7, 9} and each filter has five kernels (independent of filter size).
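For reference, the quoted hyperparameters and a minimal AdaGrad update (ours; the epsilon constant and folding the $L_2$ term into the gradient are implementation assumptions):

```python
import numpy as np

HYPERPARAMS = {                 # values quoted from the paper
    "learning_rate": 0.01,
    "dropout": 0.8,
    "l2_weight": 5e-3,
    "batch_size": 50,
    "filter_sizes": (3, 5, 7, 9),
    "kernels_per_filter_size": 5,
}

def adagrad_update(param, grad, cache, lr=0.01, l2=5e-3, eps=1e-6):
    """One AdaGrad step with the L2 penalty folded into the gradient."""
    grad = grad + l2 * param
    cache += grad ** 2
    param -= lr * grad / (np.sqrt(cache) + eps)
    return param, cache

w, cache = np.random.randn(10), np.zeros(10)
w, cache = adagrad_update(w, np.random.randn(10), cache)
print(w.shape)  # (10,)
```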
Datasets and Experimental Setup
Stanford Sentiment Treebank BIBREF21 . This small-scale dataset includes two tasks predicting the sentiment of movie reviews. The output variable is binary in one experiment and can have five possible outcomes in the other: {negative, somewhat negative, neutral, somewhat positive, positive}. In the binary case, we use the given split of 6920 training, 872 development and 1821 test sentences. Likewise, in the fine-grained case, we use the standard 8544/1101/2210 split. socher2013recursive used the Stanford Parser BIBREF22 to parse each sentence into subphrases. The subphrases were then labeled by human annotators in the same way as the sentences were labeled. Labeled phrases that occur as subparts of the training sentences are treated as independent training instances as in BIBREF23 , BIBREF4 .
Sentiment140 BIBREF24 . This is a large-scale tweet dataset for sentiment classification, where a tweet is automatically labeled as positive or negative depending on the emoticon that occurs in it. The training set consists of 1.6 million tweets with emoticon-based labels and the test set of about 400 hand-annotated tweets. We preprocess the tweets minimally as follows. 1) The equivalence class symbol “url” (resp. “username”) replaces all URLs (resp. all words that start with the @ symbol, e.g., @thomasss). 2) A sequence of $k>2$ repetitions of a letter $c$ (e.g., “cooooooool”) is replaced by two occurrences of $c$ (e.g., “cool”). 3) All tokens are lowercased.
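A sketch of this normalization (ours; the exact regular expressions are assumptions, and the repetition rule is applied to any character here, not only letters):

```python
import re

def preprocess_tweet(text):
    text = re.sub(r"https?://\S+|www\.\S+", "url", text)   # 1) URLs -> "url"
    text = re.sub(r"@\w+", "username", text)                # 1) @words -> "username"
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)              # 2) >2 repetitions -> 2
    return text.lower()                                      # 3) lowercase

print(preprocess_tweet("Coooool!!! check http://t.co/xyz @thomasss"))
# -> "cool!! check url username"
```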
Subj. Subjectivity classification dataset released by BIBREF25 has 5000 subjective sentences and 5000 objective sentences. We report the result of 10-fold cross validation as baseline systems did.
In this work, we use five embedding versions, as shown in Table 1 , to initialize words. Four of them are directly downloaded from the Internet. (i) HLBL. Hierarchical log-bilinear model presented by mnih2009scalable and released by turian2010word; size: 246,122 word embeddings; training corpus: RCV1 corpus, one year of Reuters English newswire from August 1996 to August 1997. (ii) Huang. huang2012improving incorporated global context to deal with challenges raised by words with multiple meanings; size: 100,232 word embeddings; training corpus: April 2010 snapshot of Wikipedia. (iii) GloVe. Size: 1,193,514 word embeddings; training corpus: a Twitter corpus of 2B tweets with 27B tokens. (iv) SENNA. Size: 130,000 word embeddings; training corpus: Wikipedia. Note that we use their 50-dimensional embeddings. (v) Word2Vec. No 50-dimensional Word2Vec embeddings are available online. We use the released code to train skip-gram on the English Gigaword Corpus BIBREF26 with the following setup: window size 5, negative sampling, sampling rate $10^{-3}$ , 12 threads. It is worth emphasizing that the above embedding sets are derived from different corpora with different algorithms. This is exactly the property that we want to exploit to improve system performance.
Table 2 shows the number of unknown words in each task when using corresponding embedding version to initialize (rows “HLBL”, “Huang”, “Glove”, “SENNA”, “W2V”) and the number of words fully initialized by five embedding versions (“Full hit” row), the number of words partially initialized (“Partial hit” row) and the number of words that cannot be initialized by any of the embedding versions (“No hit” row).
About 30% of words in each task have partially initialized embeddings and our mutual-learning is able to initialize the missing embeddings through projections. Pretraining is expected to learn good representations for all words, but pretraining is especially important for words without initialization (“no hit”); a particularly clear example for this is the Senti140 task: 236,484 of 387,877 words or 61% are in the “no hit” category.
Table 3 compares results on test of MVCNN and its variants with other baselines in the four sentence classification tasks. Row 34, “MVCNN (overall)”, shows performance of the best configuration of MVCNN, optimized on dev. This version uses five versions of word embeddings, four filter sizes (3, 5, 7, 9), both mutual-learning and pretraining, three convolution layers for Senti140 task and two convolution layers for the other tasks. Overall, our system gets the best results, beating all baselines.
The table contains five blocks from top to bottom. Each block investigates one specific configurational aspect of the system. All results in the five blocks are with respect to row 34, “MVCNN (overall)”; e.g., row 19 shows what happens when HLBL is removed from row 34, row 28 shows what happens when mutual learning is removed from row 34 etc.
The block “baselines” (1–18) lists some systems representative of previous work on the corresponding datasets, including the state-of-the-art systems (marked in italics). The block “versions” (19–23) shows the results of our system when one of the embedding versions was not used during training. We want to explore to what extent different embedding versions contribute to performance. The block “filters” (24–27) gives the results when an individual filter width is discarded. It also tells us how much a filter of a specific size influences performance. The block “tricks” (28–29) shows the system performance when no mutual-learning or no pretraining is used. The block “layers” (30–33) demonstrates how the system performs when it has different numbers of convolution layers.
From the “layers” block, we can see that our system performs best with two layers of convolution in the Stanford Sentiment Treebank and Subjectivity Classification tasks (row 31), but with three layers of convolution in Sentiment140 (row 32). This is probably because Sentiment140 is a much larger dataset; in such a case deeper neural networks are beneficial.
The block “tricks” demonstrates the effect of mutual-learning and pretraining. Apparently, pretraining has a bigger impact on performance than mutual-learning. We speculate that it is because pretraining can influence more words and all learned word embeddings are tuned on the dataset after pretraining.
The block “filters” indicates the contribution of each filter size. The system benefits from filters of each size. Sizes 5 and 7 are most important for high performance, especially 7 (rows 25 and 26).
In the block “versions”, we see that each embedding version is crucial for good performance: performance drops in every single case. Though it is not easy to fairly compare different embedding versions in NLP tasks, especially when those embeddings were trained on different corpora of different sizes using different algorithms, our results are potentially instructive for researchers making decisions about which embeddings to use for their own tasks.
Conclusion
This work presented MVCNN, a novel CNN architecture for sentence classification. It combines multichannel initialization – diverse versions of pretrained word embeddings are used – and variable-size filters – features of multigranular phrases are extracted with variable-size convolution filters. We demonstrated that multichannel initialization and variable-size filters enhance system performance on sentiment classification and subjectivity classification tasks.
Future Work
As pointed out by the reviewers the success of the multichannel approach is likely due to a combination of several quite different effects.
First, there is the effect of the embedding learning algorithm. These algorithms differ in many aspects, including in sensitivity to word order (e.g., SENNA: yes, word2vec: no), in objective function and in their treatment of ambiguity (explicitly modeled only by huang2012improving).
Second, there is the effect of the corpus. We would expect the size and genre of the corpus to have a big effect even though we did not analyze this effect in this paper.
Third, complementarity of word embeddings is likely to be more useful for some tasks than for others. Sentiment is a good application for complementary word embeddings because solving this task requires drawing on heterogeneous sources of information, including syntax, semantics and genre as well as the core polarity of a word. Other tasks like part of speech (POS) tagging may benefit less from heterogeneity since the benefit of embeddings in POS often comes down to making a correct choice between two alternatives – a single embedding version may be sufficient for this.
We plan to pursue these questions in future work.
Acknowledgments
Thanks to CIS members and anonymous reviewers for constructive comments. This work was supported by Baidu (through a Baidu scholarship awarded to Wenpeng Yin) and by Deutsche Forschungsgemeinschaft (grant DFG SCHU 2246/8-2, SPP 1335). | The system benefits from filters of each size., features of multigranular phrases are extracted with variable-size convolution filters. |
7975c3e1f61344e3da3b38bb12e1ac6dcb153a18 | 7975c3e1f61344e3da3b38bb12e1ac6dcb153a18_0 | Q: What are the effects of diverse versions of pertained word embeddings?
Text: Introduction
Different sentence classification tasks are crucial for many Natural Language Processing (NLP) applications. Natural language sentences have complicated structures, both sequential and hierarchical, that are essential for understanding them. In addition, how to decode and compose the features of component units, including single words and variable-size phrases, is central to the sentence classification problem.
In recent years, deep learning models have achieved remarkable results in computer vision BIBREF0 , speech recognition BIBREF1 and NLP BIBREF2 . A problem largely specific to NLP is how to detect features of linguistic units, how to conduct composition over variable-size sequences and how to use them for NLP tasks BIBREF3 , BIBREF4 , BIBREF5 . socher2011dynamic proposed recursive neural networks to form phrases based on parsing trees. This approach depends on the availability of a well performing parser; for many languages and domains, especially noisy domains, reliable parsing is difficult. Hence, convolution neural networks (CNN) are getting increasing attention, for they are able to model long-range dependencies in sentences via hierarchical structures BIBREF6 , BIBREF5 , BIBREF7 . Current CNN systems usually implement a convolution layer with fixed-size filters (i.e., feature detectors), in which the concrete filter size is a hyperparameter. They essentially split a sentence into multiple sub-sentences by a sliding window, then determine the sentence label by using the dominant label across all sub-sentences. The underlying assumption is that the sub-sentence with that granularity is potentially good enough to represent the whole sentence. However, it is hard to find the granularity of a “good sub-sentence” that works well across sentences. This motivates us to implement variable-size filters in a convolution layer in order to extract features of multigranular phrases.
Breakthroughs of deep learning in NLP are also based on learning distributed word representations – also called “word embeddings” – by neural language models BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Word embeddings are derived by projecting words from a sparse, 1-of- $V$ encoding ( $V$ : vocabulary size) onto a lower dimensional and dense vector space via hidden layers and can be interpreted as feature extractors that encode semantic and syntactic features of words.
Many papers study the comparative performance of different versions of word embeddings, usually learned by different neural network (NN) architectures. For example, chen2013expressive compared HLBL BIBREF9 , SENNA BIBREF2 , Turian BIBREF13 and Huang BIBREF14 , showing great variance in quality and characteristics of the semantics captured by the tested embedding versions. hill2014not showed that embeddings learned by neural machine translation models outperform three representative monolingual embedding versions: skip-gram BIBREF15 , GloVe BIBREF16 and C&W BIBREF3 in some cases. These prior studies motivate us to explore combining multiple versions of word embeddings, treating each of them as a distinct description of words. Our expectation is that the combination of these embedding versions, trained by different NNs on different corpora, should contain more information than each version individually. We want to leverage this diversity of different embedding versions to extract higher quality sentence features and thereby improve sentence classification performance.
The letters “M” and “V” in the name “MVCNN” of our architecture denote the multichannel and variable-size convolution filters, respectively. “Multichannel” employs language from computer vision where a color image has red, green and blue channels. Here, a channel is a description by an embedding version.
For many sentence classification tasks, only relatively small training sets are available. MVCNN has a large number of parameters, so that overfitting is a danger when they are trained on small training sets. We address this problem by pretraining MVCNN on unlabeled data. These pretrained weights can then be fine-tuned for the specific classification task.
In sum, we attribute the success of MVCNN to: (i) designing variable-size convolution filters to extract variable-range features of sentences and (ii) exploring the combination of multiple public embedding versions to initialize words in sentences. We also employ two “tricks” to further enhance system performance: mutual learning and pretraining.
In remaining parts, Section "Related Work" presents related work. Section "Model Description" gives details of our classification model. Section "Model Enhancements" introduces two tricks that enhance system performance: mutual-learning and pretraining. Section "Experiments" reports experimental results. Section "Conclusion" concludes this work.
Related Work
Much prior work has exploited deep neural networks to model sentences.
blacoe2012comparison represented a sentence by element-wise addition, multiplication, or recursive autoencoder over embeddings of component single words. yin2014exploration extended this approach by composing on words and phrases instead of only single words.
collobert2008unified and yu2014deep used one layer of convolution over phrases detected by a sliding window on a target sentence, then used max- or average-pooling to form a sentence representation.
blunsom2014convolutional stacked multiple layers of one-dimensional convolution by dynamic k-max pooling to model sentences. We also adopt dynamic k-max pooling while our convolution layer has variable-size filters.
kimEMNLP2014 also studied multichannel representation and variable-size filters. Differently, their multichannel relies on a single version of pretrained embeddings (i.e., pretrained Word2Vec embeddings) with two copies: one is kept stable and the other one is fine-tuned by backpropagation. We develop this insight by incorporating diverse embedding versions. Additionally, their idea of variable-size filters is further developed.
le2014distributed initialized the representation of a sentence as a parameter vector, treating it as a global feature and combining this vector with the representations of context words to do word prediction. Finally, this fine-tuned vector is used as representation of this sentence. Apparently, this method can only produce generic sentence representations which encode no task-specific features.
Our work is also inspired by studies that compared the performance of different word embedding versions or investigated the combination of them. For example, turian2010word compared Brown clusters, C&W embeddings and HLBL embeddings in NER and chunking tasks. They found that Brown clusters and word embeddings both can improve the accuracy of supervised NLP systems; and demonstrated empirically that combining different word representations is beneficial. luo2014pre adapted CBOW BIBREF12 to train word embeddings on different datasets: free text documents from Wikipedia, search click-through data and user query data, showing that combining them gets stronger results than using individual word embeddings in web search ranking and word similarity task. However, these two papers either learned word representations on the same corpus BIBREF13 or enhanced the embedding quality by extending training corpora, not learning algorithms BIBREF17 . In our work, there is no limit to the type of embedding versions we can use and they leverage not only the diversity of corpora, but also the different principles of learning algorithms.
Model Description
We now describe the architecture of our model MVCNN, illustrated in Figure 1 .
Multichannel Input. The input of MVCNN includes multichannel feature maps of a considered sentence, each of which is a matrix initialized by a different embedding version. Let $s$ be the sentence length, $d$ the dimension of word embeddings and $c$ the total number of different embedding versions (i.e., channels). Hence, the whole initialized input is a three-dimensional array of size $c\times d\times s$ . Figure 1 depicts a sentence with $s=12$ words. Each word is initialized by $c=5$ embeddings, each coming from a different channel. In the implementation, sentences in a mini-batch are padded to the same length, and unknown words for the corresponding channel are randomly initialized or can acquire a good initialization from the mutual-learning phase described in the next section.
Multichannel initialization brings two advantages: 1) a frequent word can have $c$ representations in the beginning (instead of only one), which means it has more available information to leverage; 2) a rare word missed in some embedding versions can be “made up” by others (we call it a “partially known word”). Therefore, this kind of initialization is able to make use of information about partially known words, without having to employ full random initialization or removal of unknown words. The vocabulary of the binary sentiment prediction task described in the experimental part contains 5232 words unknown in HLBL embeddings, 4273 in Huang embeddings, 3299 in GloVe embeddings, 4136 in SENNA embeddings and 2257 in Word2Vec embeddings. But only 1824 words have no embedding from any channel! Hence, multichannel initialization can considerably reduce the number of unknown words.
Convolution Layer (Conv). For convenience, we first introduce how this work uses a convolution layer on one input feature map to generate one higher-level feature map. Given a sentence of length $s$ : $w_1, w_2, \ldots , w_s$ ; $\mathbf {w}_i\in \mathbb {R}^{d}$ denotes the embedding of word $w_i$ ; a convolution layer uses sliding filters to extract local features of that sentence. The filter width $l$ is a parameter. We first concatenate the initialized embeddings of $l$ consecutive words ( $\mathbf {w}_{i-l+1}, \ldots , \mathbf {w}_i$ ) as $\mathbf {c}_i\in \mathbb {R}^{ld}$ $(1\le i <s+l)$ , then generate the feature value of this phrase as $\textbf {p}_i$ (the whole vector $\mathbf {p}\in \mathbb {R}^{s+l-1}$ contains all the local features) using a tanh activation function and a linear projection vector $\mathbf {v}\in \mathbb {R}^{ld}$ as:
$$\mathbf {p}_i=\mathrm {tanh}(\mathbf {v}^\mathrm {T}\mathbf {c}_i+b)$$ (Eq. 2)
More generally, the convolution operation can deal with multiple input feature maps and can be stacked to yield feature maps in increasingly higher layers. In each layer, there are usually multiple filters of the same size, but with different weights BIBREF4 . We refer to a filter with a specific set of weights as a kernel. The goal is often to train a model in which different kernels detect different kinds of features of a local region. However, this traditional setup cannot detect features of regions of different granularity. Hence, we keep the multi-kernel property while extending the kernels to variable sizes within the same layer.
As in CNN for object recognition, to increase the number of kernels of a certain layer, multiple feature maps may be computed in parallel at the same layer. Further, to increase the size diversity of kernels in the same layer, more feature maps containing various-range dependency features can be learned. We denote a feature map of the $i^{\mathrm {th}}$ layer by $\mathbf {F}_i$ , and assume totally $n$ feature maps exist in layer $i-1$ : $\mathbf {F}_{i-1}^1, \ldots , \mathbf {F}_{i-1}^n$ . Considering a specific filter size $l$ in layer $i$ , each feature map $\mathbf {F}_{i,l}^j$ is computed by convolving a distinct set of filters of size $l$ , arranged in a matrix $\mathbf {V}_{i,l}^{j,k}$ , with each feature map $\mathbf {F}^k_{i-1}$ and summing the results:
$$\mathbf {F}_{i,l}^j=\sum ^n_{k=1}\mathbf {V}_{i,l}^{j,k}*\mathbf {F}^k_{i-1}$$ (Eq. 3)
where $*$ indicates the convolution operation and $j$ is the index of a feature map in layer $i$ . The weights in $\mathbf {V}$ form a rank 4 tensor.
Note that we use wide convolution in this work: it means word representations $\mathbf {w}_g$ for $g\le 0$ or $g\ge s+1$ are actually zero embeddings. Wide convolution enables that each word can be detected by all filter weights in $\mathbf {V}$ .
In Figure 1 , the first convolution layer deals with an input with $n=5$ feature maps. Its filters have sizes 3 and 5 respectively (i.e., $l=3, 5$ ), and each filter has $j=3$ kernels. This means this convolution layer can detect three kinds of features of phrases with length 3 and 5, respectively.
The DCNN in BIBREF4 used one-dimensional convolution: each higher-order feature is produced from values of a single dimension in the lower-layer feature map. Even though that work proposed a folding operation to model the dependencies between adjacent dimensions, this type of dependency modeling is still limited. In contrast, the convolution in the present work is able to model dependencies across dimensions as well as across adjacent words, which obviates the need for a folding step. This change also means our model has substantially fewer parameters than the DCNN, since the output of each convolution layer is smaller by a factor of $d$ .
Dynamic k-max Pooling. blunsom2014convolutional pool the $k$ most active features compared with simple max (1-max) pooling BIBREF2 . This property enables it to connect multiple convolution layers to form a deep architecture to extract high-level abstract features. In this work, we directly use it to extract features for variable-size feature maps. For a given feature map in layer $i$ , dynamic k-max pooling extracts $k_{i}$ top values from each dimension and $k_{top}$ top values in the top layer. We set
$$\nonumber k_{i}=\mathrm {max}(k_{top}, \lceil \frac{L-i}{L}s\rceil )$$ (Eq. 5)
where $i\in \lbrace 1,2,\ldots , L\rbrace $ is the order of the convolution layer from bottom to top in Figure 1 ; $L$ is the total number of convolution layers; $k_{top}$ is a constant determined empirically; we set it to 4 as in BIBREF4 .
As a result, the second convolution layer in Figure 1 has an input with two same-size feature maps, one resulting from filter size 3 and one from filter size 5. The values in the two feature maps are for phrases of different granularity. The motivation for this convolution layer is that a feature reflected by a short phrase may not be trustworthy while the longer phrase containing it is trustworthy, or the long phrase may have no trustworthy feature while its component short phrase is more reliable. This and even higher-order convolution layers can therefore make a trade-off between features of different granularity.
Hidden Layer. On the top of the final k-max pooling, we stack a fully connected layer to learn sentence representation with given dimension (e.g., $d$ ).
Logistic Regression Layer. Finally, sentence representation is forwarded into logistic regression layer for classification.
In brief, our MVCNN model learns from BIBREF4 to use dynamic k-max pooling to stack multiple convolution layers, and gets insight from BIBREF5 to investigate variable-size filters in a convolution layer. Compared to BIBREF4 , MVCNN has rich feature maps as input and as output of each convolution layer. Its convolution operation is not only more flexible to extract features of variable-range phrases, but also able to model dependency among all dimensions of representations. MVCNN extends the network in BIBREF5 by hierarchical convolution architecture and further exploration of multichannel and variable-size feature detectors.
Model Enhancements
This part introduces two training tricks that enhance the performance of MVCNN in practice.
Mutual-Learning of Embedding Versions. One observation in using multiple embedding versions is that they have different vocabulary coverage. An unknown word in an embedding version may be a known word in another version. Thus, there exists a proportion of words that can only be partially initialized by certain versions of word embeddings, which means these words lack the description from other versions.
To alleviate this problem, we design a mutual-learning regime to predict representations of unknown words for each embedding version by learning projections between versions. As a result, all embedding versions have the same vocabulary. This processing ensures that more words in each embedding version receive a good representation, and is expected to give most words occurring in a classification dataset more comprehensive initialization (as opposed to just being randomly initialized).
Let $c$ be the number of embedding versions in consideration, $V_1, V_2, \ldots , V_i, \ldots , V_c$ their vocabularies, $V^*=\cup ^c_{i=1} V_i$ their union, and $V_i^-=V^*\backslash V_i$ ( $i=1, \ldots , c$ ) the vocabulary of unknown words for embedding version $i$ . Our goal is to learn embeddings for the words in $V_i^-$ by knowledge from the other $c-1$ embedding versions.
We use the overlapping vocabulary between $V_i$ and $V_j$ , denoted as $V_{ij}$ , as training set, formalizing a projection $f_{ij}$ from space $V_i$ to space $V_j$ ( $i\ne j; i, j\in \lbrace 1,2,\ldots ,c\rbrace $ ) as follows:
$$\mathbf {\hat{w}}_j=\mathbf {M}_{ij}\mathbf {w}_i$$ (Eq. 6)
where $\mathbf {M}_{ij}\in \mathbb {R}^{d\times d}$ , $\mathbf {w}_i\in \mathbb {R}^d$ denotes the representation of word $w$ in space $V_i$ and $\mathbf {\hat{w}}_j$ is the projected (or learned) representation of word $w$ in space $V_j$ . Squared error between $\mathbf {w}_j$ and $\mathbf {\hat{w}}_j$ is the training loss to minimize. We use $\mathbf {\hat{w}}_j=f_{ij}(\mathbf {w}_i)$ to reformat Equation 6. In total, $c(c-1)/2$ projections $f_{ij}$ are trained, each on the corresponding vocabulary intersection $V_{ij}$ . Let $w$ be a word that is unknown in $V_i$ but known in $V_1, V_2, \ldots , V_k$ ( $k$ of the other versions). To compute a representation of $w$ in $V_i$ , we first compute the $k$ projections $f_{1i}(\mathbf {w}_1), f_{2i}(\mathbf {w}_2), \ldots , f_{ki}(\mathbf {w}_k)$ from the source spaces $V_1, V_2, \ldots , V_k$ into $V_i$ . Then the element-wise average of $f_{1i}(\mathbf {w}_1), f_{2i}(\mathbf {w}_2), \ldots , f_{ki}(\mathbf {w}_k)$ is treated as the representation of $w$ in $V_i$ ; averaging over several projections is expected to give a more reliable estimate of the representation of $w$ in $V_i$ than any single projection.
As discussed in Section "Model Description" , we found that for the binary sentiment classification dataset, many words were unknown in at least one embedding version. But of these words, a total of 5022 words did have coverage in another embedding version and so will benefit from mutual-learning. In the experiments, we will show that this is a very effective method to learn representations for unknown words that increases system performance if learned representations are used for initialization.
Pretraining. Sentence classification systems are usually implemented as supervised training regimes where training loss is between true label distribution and predicted label distribution. In this work, we use pretraining on the unlabeled data of each task and show that it can increase the performance of classification systems.
Figure 1 shows our pretraining setup. The “sentence representation” – the output of “Fully connected” hidden layer – is used to predict the component words (“on” in the figure) in the sentence (instead of predicting the sentence label Y/N as in supervised learning). Concretely, the sentence representation is averaged with representations of some surrounding words (“the”, “cat”, “sat”, “the”, “mat”, “,” in the figure) to predict the middle word (“on”).
Given sentence representation $\mathbf {s}\in \mathbb {R}^d$ and initialized representations of $2t$ context words ( $t$ left words and $t$ right words): $\mathbf {w}_{i-t}$ , $\ldots $ , $\mathbf {w}_{i-1}$ , $\mathbf {w}_{i+1}$ , $\ldots $ , $\mathbf {w}_{i+t}$ ( $\mathbf {w}_j\in \mathbb {R}^d$ ), we average the total $2t+1$ vectors element-wise, depicted as “Average” operation in Figure 1 . Then, this resulting vector is treated as a predicted representation of the middle word and is used to find the true middle word by means of noise-contrastive estimation (NCE) BIBREF18 . For each true example, 10 noise words are sampled.
Note that in pretraining, there are three places where each word needs initialization. (i) Each word in the sentence is initialized in the “Multichannel input” layer to the whole network. (ii) Each context word is initialized as input to the average layer (“Average” in the figure). (iii) Each target word is initialized as the output of the “NCE” layer (“on” in the figure). In this work, we use multichannel initialization for case (i) and random initialization for cases (ii) and (iii). Only fine-tuned multichannel representations (case (i)) are kept for subsequent supervised training.
The rationale for this pretraining is similar to auto-encoder: for an object composed of smaller-granular elements, the representations of the whole object and its components can learn each other. The CNN architecture learns sentence features layer by layer, then those features are justified by all constituent words.
During pretraining, all the model parameters, including the multichannel input, the convolution parameters and the fully connected layer, are updated until they are mature enough to extract the sentence features. Subsequently, the same sets of parameters are fine-tuned for supervised classification tasks.
In sum, this pretraining is designed to produce good initial values for both model parameters and word embeddings. It is especially helpful for pretraining the embeddings of unknown words.
Experiments
We test the network on four classification tasks. We begin by specifying aspects of the implementation and the training of the network. We then report the results of the experiments.
Hyperparameters and Training
In each of the experiments, the top of the network is a logistic regression that predicts the probability distribution over classes given the input sentence. The network is trained to minimize the cross-entropy of predicted and true distributions; the objective includes an $L_2$ regularization term over the parameters. The set of parameters comprises the word embeddings, all filter weights and the weights in the fully connected layers. A dropout operation BIBREF19 is applied before the logistic regression layer. The network is trained by back-propagation in mini-batches and the gradient-based optimization is performed using the AdaGrad update rule BIBREF20 .
In all data sets, the initial learning rate is 0.01, dropout probability is 0.8, $L_2$ weight is $5\cdot 10^{-3}$ , batch size is 50. In each convolution layer, filter sizes are {3, 5, 7, 9} and each filter has five kernels (independent of filter size).
Datasets and Experimental Setup
Stanford Sentiment Treebank BIBREF21 . This small-scale dataset includes two tasks predicting the sentiment of movie reviews. The output variable is binary in one experiment and can have five possible outcomes in the other: {negative, somewhat negative, neutral, somewhat positive, positive}. In the binary case, we use the given split of 6920 training, 872 development and 1821 test sentences. Likewise, in the fine-grained case, we use the standard 8544/1101/2210 split. socher2013recursive used the Stanford Parser BIBREF22 to parse each sentence into subphrases. The subphrases were then labeled by human annotators in the same way as the sentences were labeled. Labeled phrases that occur as subparts of the training sentences are treated as independent training instances as in BIBREF23 , BIBREF4 .
Sentiment140 BIBREF24 . This is a large-scale tweet dataset for sentiment classification, where a tweet is automatically labeled as positive or negative depending on the emoticon that occurs in it. The training set consists of 1.6 million tweets with emoticon-based labels and the test set of about 400 hand-annotated tweets. We preprocess the tweets minimally as follows. 1) The equivalence class symbol “url” (resp. “username”) replaces all URLs (resp. all words that start with the @ symbol, e.g., @thomasss). 2) A sequence of $k>2$ repetitions of a letter $c$ (e.g., “cooooooool”) is replaced by two occurrences of $c$ (e.g., “cool”). 3) All tokens are lowercased.
Subj. Subjectivity classification dataset released by BIBREF25 has 5000 subjective sentences and 5000 objective sentences. We report the result of 10-fold cross validation as baseline systems did.
In this work, we use five embedding versions, as shown in Table 1 , to initialize words. Four of them are directly downloaded from the Internet. (i) HLBL. Hierarchical log-bilinear model presented by mnih2009scalable and released by turian2010word; size: 246,122 word embeddings; training corpus: RCV1 corpus, one year of Reuters English newswire from August 1996 to August 1997. (ii) Huang. huang2012improving incorporated global context to deal with challenges raised by words with multiple meanings; size: 100,232 word embeddings; training corpus: April 2010 snapshot of Wikipedia. (iii) GloVe. Size: 1,193,514 word embeddings; training corpus: a Twitter corpus of 2B tweets with 27B tokens. (iv) SENNA. Size: 130,000 word embeddings; training corpus: Wikipedia. Note that we use their 50-dimensional embeddings. (v) Word2Vec. No 50-dimensional Word2Vec embeddings are available online. We use the released code to train skip-gram on the English Gigaword Corpus BIBREF26 with the following setup: window size 5, negative sampling, sampling rate $10^{-3}$ , 12 threads. It is worth emphasizing that the above embedding sets are derived from different corpora with different algorithms. This is exactly the property that we want to exploit to improve system performance.
Table 2 shows the number of unknown words in each task when using corresponding embedding version to initialize (rows “HLBL”, “Huang”, “Glove”, “SENNA”, “W2V”) and the number of words fully initialized by five embedding versions (“Full hit” row), the number of words partially initialized (“Partial hit” row) and the number of words that cannot be initialized by any of the embedding versions (“No hit” row).
About 30% of words in each task have partially initialized embeddings and our mutual-learning is able to initialize the missing embeddings through projections. Pretraining is expected to learn good representations for all words, but pretraining is especially important for words without initialization (“no hit”); a particularly clear example for this is the Senti140 task: 236,484 of 387,877 words or 61% are in the “no hit” category.
Table 3 compares results on test of MVCNN and its variants with other baselines in the four sentence classification tasks. Row 34, “MVCNN (overall)”, shows performance of the best configuration of MVCNN, optimized on dev. This version uses five versions of word embeddings, four filter sizes (3, 5, 7, 9), both mutual-learning and pretraining, three convolution layers for Senti140 task and two convolution layers for the other tasks. Overall, our system gets the best results, beating all baselines.
The table contains five blocks from top to bottom. Each block investigates one specific configurational aspect of the system. All results in the five blocks are with respect to row 34, “MVCNN (overall)”; e.g., row 19 shows what happens when HLBL is removed from row 34, row 28 shows what happens when mutual learning is removed from row 34 etc.
The block “baselines” (1–18) lists some systems representative of previous work on the corresponding datasets, including the state-of-the-art systems (marked in italics). The block “versions” (19–23) shows the results of our system when one of the embedding versions was not used during training. We want to explore to what extent different embedding versions contribute to performance. The block “filters” (24–27) gives the results when an individual filter width is discarded. It also tells us how much a filter of a specific size influences performance. The block “tricks” (28–29) shows the system performance when no mutual-learning or no pretraining is used. The block “layers” (30–33) demonstrates how the system performs when it has different numbers of convolution layers.
From the “layers” block, we can see that our system performs best with two layers of convolution in the Stanford Sentiment Treebank and Subjectivity Classification tasks (row 31), but with three layers of convolution in Sentiment140 (row 32). This is probably because Sentiment140 is a much larger dataset; in such a case deeper neural networks are beneficial.
The block “tricks” demonstrates the effect of mutual-learning and pretraining. Apparently, pretraining has a bigger impact on performance than mutual-learning. We speculate that it is because pretraining can influence more words and all learned word embeddings are tuned on the dataset after pretraining.
The block “filters” indicates the contribution of each filter size. The system benefits from filters of each size. Sizes 5 and 7 are most important for high performance, especially 7 (rows 25 and 26).
In the block “versions”, we see that each embedding version is crucial for good performance: performance drops in every single case. Though it is not easy to fairly compare different embedding versions in NLP tasks, especially when those embeddings were trained on different corpora of different sizes using different algorithms, our results are potentially instructive for researchers making decisions about which embeddings to use for their own tasks.
Conclusion
This work presented MVCNN, a novel CNN architecture for sentence classification. It combines multichannel initialization – diverse versions of pretrained word embeddings are used – and variable-size filters – features of multigranular phrases are extracted with variable-size convolution filters. We demonstrated that multichannel initialization and variable-size filters enhance system performance on sentiment classification and subjectivity classification tasks.
Future Work
As pointed out by the reviewers the success of the multichannel approach is likely due to a combination of several quite different effects.
First, there is the effect of the embedding learning algorithm. These algorithms differ in many aspects, including in sensitivity to word order (e.g., SENNA: yes, word2vec: no), in objective function and in their treatment of ambiguity (explicitly modeled only by huang2012improving).
Second, there is the effect of the corpus. We would expect the size and genre of the corpus to have a big effect even though we did not analyze this effect in this paper.
Third, complementarity of word embeddings is likely to be more useful for some tasks than for others. Sentiment is a good application for complementary word embeddings because solving this task requires drawing on heterogeneous sources of information, including syntax, semantics and genre as well as the core polarity of a word. Other tasks like part of speech (POS) tagging may benefit less from heterogeneity since the benefit of embeddings in POS often comes down to making a correct choice between two alternatives – a single embedding version may be sufficient for this.
We plan to pursue these questions in future work.
Acknowledgments
Thanks to CIS members and anonymous reviewers for constructive comments. This work was supported by Baidu (through a Baidu scholarship awarded to Wenpeng Yin) and by Deutsche Forschungsgemeinschaft (grant DFG SCHU 2246/8-2, SPP 1335). | each embedding version is crucial for good performance |
eddb18109495976123e10f9c6946a256a55074bd | eddb18109495976123e10f9c6946a256a55074bd_0 | Q: How is MVCNN compared to CNN?
Text: Introduction
Different sentence classification tasks are crucial for many Natural Language Processing (NLP) applications. Natural language sentences have complicated structures, both sequential and hierarchical, that are essential for understanding them. In addition, how to decode and compose the features of component units, including single words and variable-size phrases, is central to the sentence classification problem.
In recent years, deep learning models have achieved remarkable results in computer vision BIBREF0 , speech recognition BIBREF1 and NLP BIBREF2 . A problem largely specific to NLP is how to detect features of linguistic units, how to conduct composition over variable-size sequences and how to use them for NLP tasks BIBREF3 , BIBREF4 , BIBREF5 . socher2011dynamic proposed recursive neural networks to form phrases based on parsing trees. This approach depends on the availability of a well performing parser; for many languages and domains, especially noisy domains, reliable parsing is difficult. Hence, convolution neural networks (CNN) are getting increasing attention, for they are able to model long-range dependencies in sentences via hierarchical structures BIBREF6 , BIBREF5 , BIBREF7 . Current CNN systems usually implement a convolution layer with fixed-size filters (i.e., feature detectors), in which the concrete filter size is a hyperparameter. They essentially split a sentence into multiple sub-sentences by a sliding window, then determine the sentence label by using the dominant label across all sub-sentences. The underlying assumption is that the sub-sentence with that granularity is potentially good enough to represent the whole sentence. However, it is hard to find the granularity of a “good sub-sentence” that works well across sentences. This motivates us to implement variable-size filters in a convolution layer in order to extract features of multigranular phrases.
Breakthroughs of deep learning in NLP are also based on learning distributed word representations – also called “word embeddings” – by neural language models BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . Word embeddings are derived by projecting words from a sparse, 1-of- $V$ encoding ( $V$ : vocabulary size) onto a lower dimensional and dense vector space via hidden layers and can be interpreted as feature extractors that encode semantic and syntactic features of words.
Many papers study the comparative performance of different versions of word embeddings, usually learned by different neural network (NN) architectures. For example, chen2013expressive compared HLBL BIBREF9 , SENNA BIBREF2 , Turian BIBREF13 and Huang BIBREF14 , showing great variance in quality and characteristics of the semantics captured by the tested embedding versions. hill2014not showed that embeddings learned by neural machine translation models outperform three representative monolingual embedding versions: skip-gram BIBREF15 , GloVe BIBREF16 and C&W BIBREF3 in some cases. These prior studies motivate us to explore combining multiple versions of word embeddings, treating each of them as a distinct description of words. Our expectation is that the combination of these embedding versions, trained by different NNs on different corpora, should contain more information than each version individually. We want to leverage this diversity of different embedding versions to extract higher quality sentence features and thereby improve sentence classification performance.
The letters “M” and “V” in the name “MVCNN” of our architecture denote the multichannel and variable-size convolution filters, respectively. “Multichannel” employs language from computer vision where a color image has red, green and blue channels. Here, a channel is a description by an embedding version.
For many sentence classification tasks, only relatively small training sets are available. MVCNN has a large number of parameters, so that overfitting is a danger when they are trained on small training sets. We address this problem by pretraining MVCNN on unlabeled data. These pretrained weights can then be fine-tuned for the specific classification task.
In sum, we attribute the success of MVCNN to: (i) designing variable-size convolution filters to extract variable-range features of sentences and (ii) exploring the combination of multiple public embedding versions to initialize words in sentences. We also employ two “tricks” to further enhance system performance: mutual learning and pretraining.
In remaining parts, Section "Related Work" presents related work. Section "Model Description" gives details of our classification model. Section "Model Enhancements" introduces two tricks that enhance system performance: mutual-learning and pretraining. Section "Experiments" reports experimental results. Section "Conclusion" concludes this work.
Related Work
Much prior work has exploited deep neural networks to model sentences.
blacoe2012comparison represented a sentence by element-wise addition, multiplication, or recursive autoencoder over embeddings of component single words. yin2014exploration extended this approach by composing on words and phrases instead of only single words.
collobert2008unified and yu2014deep used one layer of convolution over phrases detected by a sliding window on a target sentence, then used max- or average-pooling to form a sentence representation.
blunsom2014convolutional stacked multiple layers of one-dimensional convolution by dynamic k-max pooling to model sentences. We also adopt dynamic k-max pooling while our convolution layer has variable-size filters.
kimEMNLP2014 also studied multichannel representation and variable-size filters. In contrast, their multichannel approach relies on a single version of pretrained embeddings (i.e., pretrained Word2Vec embeddings) with two copies: one is kept stable and the other is fine-tuned by backpropagation. We develop this insight by incorporating diverse embedding versions. Additionally, we further develop their idea of variable-size filters.
le2014distributed initialized the representation of a sentence as a parameter vector, treating it as a global feature and combining this vector with the representations of context words to do word prediction. Finally, this fine-tuned vector is used as representation of this sentence. Apparently, this method can only produce generic sentence representations which encode no task-specific features.
Our work is also inspired by studies that compared the performance of different word embedding versions or investigated the combination of them. For example, turian2010word compared Brown clusters, C&W embeddings and HLBL embeddings in NER and chunking tasks. They found that Brown clusters and word embeddings both can improve the accuracy of supervised NLP systems; and demonstrated empirically that combining different word representations is beneficial. luo2014pre adapted CBOW BIBREF12 to train word embeddings on different datasets: free text documents from Wikipedia, search click-through data and user query data, showing that combining them gets stronger results than using individual word embeddings in web search ranking and word similarity task. However, these two papers either learned word representations on the same corpus BIBREF13 or enhanced the embedding quality by extending training corpora, not learning algorithms BIBREF17 . In our work, there is no limit to the type of embedding versions we can use and they leverage not only the diversity of corpora, but also the different principles of learning algorithms.
Model Description
We now describe the architecture of our model MVCNN, illustrated in Figure 1 .
Multichannel Input. The input of MVCNN includes multichannel feature maps of a considered sentence, each of which is a matrix initialized by a different embedding version. Let $s$ be the sentence length, $d$ the dimension of word embeddings and $c$ the total number of different embedding versions (i.e., channels). Hence, the whole initialized input is a three-dimensional array of size $c\times d\times s$ . Figure 1 depicts a sentence with $s=12$ words. Each word is initialized by $c=5$ embeddings, each coming from a different channel. In the implementation, sentences in a mini-batch are padded to the same length, and unknown words for the corresponding channel are randomly initialized or can acquire a good initialization from the mutual-learning phase described in the next section.
Multichannel initialization brings two advantages: 1) a frequent word can have $c$ representations in the beginning (instead of only one), which means it has more available information to leverage; 2) a rare word missed in some embedding versions can be “made up” by others (we call it “partially known word”). Therefore, this kind of initialization is able to make use of information about partially known words, without having to employ full random initialization or removal of unknown words. The vocabulary of the binary sentiment prediction task described in experimental part contains 5232 words unknown in HLBL embeddings, 4273 in Huang embeddings, 3299 in GloVe embeddings, 4136 in SENNA embeddings and 2257 in Word2Vec embeddings. But only 1824 words find no embedding from any channel! Hence, multichannel initialization can considerably reduce the number of unknown words.
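To make the "full hit / partial hit / no hit" bookkeeping concrete, the following sketch counts the coverage of a task vocabulary across several embedding versions. The toy vocabularies stand in for the real HLBL/Huang/GloVe/SENNA/Word2Vec tables and are illustrative only.

```python
# Minimal sketch: classify task-vocabulary words as fully initialized,
# partially initialized ("partial hit") or not initialized ("no hit")
# given several embedding versions. Toy vocabularies, for illustration only.

task_vocab = {"movie", "great", "awful", "cooool", "plot", "thomasss"}

# Each channel would normally map word -> vector; for coverage counting
# only the sets of known words matter.
channels = {
    "HLBL":  {"movie", "great", "plot"},
    "GloVe": {"movie", "great", "awful", "cooool"},
    "SENNA": {"movie", "plot"},
}

full_hit, partial_hit, no_hit = [], [], []
for word in sorted(task_vocab):
    hits = sum(word in vocab for vocab in channels.values())
    if hits == len(channels):
        full_hit.append(word)       # known in every embedding version
    elif hits > 0:
        partial_hit.append(word)    # benefits from multichannel initialization
    else:
        no_hit.append(word)         # must be randomly initialized (or mutual-learned)

print("full hit:", full_hit)
print("partial hit:", partial_hit)
print("no hit:", no_hit)
```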
Convolution Layer (Conv). For convenience, we first introduce how this work uses a convolution layer on one input feature map to generate one higher-level feature map. Given a sentence of length $s$ : $w_1, w_2, \ldots , w_s$ ; $\mathbf {w}_i\in \mathbb {R}^{d}$ denotes the embedding of word $w_i$ ; a convolution layer uses sliding filters to extract local features of that sentence. The filter width $l$ is a parameter. We first concatenate the initialized embeddings of $l$ consecutive words ( $\mathbf {w}_{i-l+1}, \ldots , \mathbf {w}_i$ ) as $\mathbf {c}_i\in \mathbb {R}^{ld}$ $(1\le i <s+l)$ , then generate the feature value of this phrase as $p_i$ (the whole vector $\mathbf {p}\in \mathbb {R}^{s+l-1}$ contains all the local features) using a tanh activation function and a linear projection vector $\mathbf {v}\in \mathbb {R}^{ld}$ as:
$$\mathbf {p}_i=\mathrm {tanh}(\mathbf {v}^\mathrm {T}\mathbf {c}_i+b)$$ (Eq. 2)
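As a concrete illustration of Eq. (2), a minimal NumPy sketch of wide convolution over a single input feature map is given below; the dimensions and random weights are placeholders, not the trained model.

```python
import numpy as np

# Sketch of Eq. (2): wide convolution over one input feature map.
# A sentence of s words, each a d-dimensional embedding; filter width l.
np.random.seed(0)
s, d, l = 12, 50, 3
W = np.random.randn(s, d)                 # word embeddings w_1..w_s (rows)

# Wide convolution: pad with l-1 zero embeddings on both sides so that
# every word is seen by every filter position (1 <= i < s + l).
padded = np.vstack([np.zeros((l - 1, d)), W, np.zeros((l - 1, d))])

v = np.random.randn(l * d)                # linear projection vector
b = 0.0
p = np.array([
    np.tanh(v @ padded[i:i + l].reshape(-1) + b)   # p_i = tanh(v^T c_i + b)
    for i in range(s + l - 1)
])
print(p.shape)                            # (s + l - 1,) local feature values
```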
More generally, the convolution operation can deal with multiple input feature maps and can be stacked to yield feature maps of increasing layers. In each layer, there are usually multiple filters of the same size, but with different weights BIBREF4 . We refer to a filter with a specific set of weights as a kernel. The goal is often to train a model in which different kernels detect different kinds of features of a local region. However, this traditional approach cannot detect features of regions of different granularity. Hence, we keep the multi-kernel property while extending it to variable-size kernels in the same layer.
As in CNN for object recognition, to increase the number of kernels of a certain layer, multiple feature maps may be computed in parallel at the same layer. Further, to increase the size diversity of kernels in the same layer, more feature maps containing various-range dependency features can be learned. We denote a feature map of the $i^{\mathrm {th}}$ layer by $\mathbf {F}_i$ , and assume that a total of $n$ feature maps exist in layer $i-1$ : $\mathbf {F}_{i-1}^1, \ldots , \mathbf {F}_{i-1}^n$ . Considering a specific filter size $l$ in layer $i$ , each feature map $\mathbf {F}_{i,l}^j$ is computed by convolving a distinct set of filters of size $l$ , arranged in a matrix $\mathbf {V}_{i,l}^{j,k}$ , with each feature map $\mathbf {F}_{i-1}^k$ and summing the results:
$$\mathbf {F}_{i,l}^j=\sum ^n_{k=1}\mathbf {V}_{i,l}^{j,k}*\mathbf {F}^k_{i-1}$$ (Eq. 3)
where $*$ indicates the convolution operation and $j$ is the index of a feature map in layer $i$ . The weights in $\mathbf {V}$ form a rank 4 tensor.
Note that we use wide convolution in this work: it means word representations $\mathbf {w}_g$ for $g\le 0$ or $g\ge s+1$ are actually zero embeddings. Wide convolution enables that each word can be detected by all filter weights in $\mathbf {V}$ .
In Figure 1 , the first convolution layer deals with an input with $n=5$ feature maps. Its filters have sizes 3 and 5 respectively (i.e., $l=3, 5$ ), and each filter has $j=3$ kernels. This means this convolution layer can detect three kinds of features of phrases with length 3 and 5, respectively.
DCNN in BIBREF4 used one-dimensional convolution: each higher-order feature is produced from values of a single dimension in the lower-layer feature map. Even though that work proposed a folding operation to model the dependencies between adjacent dimensions, this type of dependency modeling is still limited. In contrast, the convolution in the present work is able to model dependency across dimensions as well as across adjacent words, which obviates the need for a folding step. This change also means our model has substantially fewer parameters than the DCNN since the output of each convolution layer is smaller by a factor of $d$ .
Dynamic k-max Pooling. blunsom2014convolutional pool the $k$ most active features, in contrast to simple max (1-max) pooling BIBREF2 . This property makes it possible to connect multiple convolution layers to form a deep architecture that extracts high-level abstract features. In this work, we directly use it to extract features for variable-size feature maps. For a given feature map in layer $i$ , dynamic k-max pooling extracts the $k_{i}$ top values from each dimension and the $k_{top}$ top values in the top layer. We set
$$\nonumber k_{i}=\mathrm {max}(k_{top}, \lceil \frac{L-i}{L}s\rceil )$$ (Eq. 5)
where $i\in \lbrace 1,2,\ldots , L\rbrace $ is the order of the convolution layer from bottom to top in Figure 1 ; $L$ is the total number of convolution layers; $k_{top}$ is a constant determined empirically; we set it to 4 as in BIBREF4 .
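A minimal sketch of dynamic k-max pooling as defined in Eq. (5) is shown below; the toy feature map and layer counts are illustrative.

```python
import math
import numpy as np

def dynamic_k(i, L, s, k_top=4):
    """k for the i-th convolution layer (Eq. 5)."""
    return max(k_top, math.ceil((L - i) / L * s))

def k_max_pool(feature_map, k):
    """Keep the k largest values in each row (dimension),
    preserving their left-to-right order -- a sketch of k-max pooling."""
    pooled = []
    for row in feature_map:
        idx = np.sort(np.argsort(row)[-k:])   # positions of the k largest values, in order
        pooled.append(row[idx])
    return np.array(pooled)

np.random.seed(0)
s, d, L = 12, 5, 2                      # sentence length, dimensions, number of conv layers
fmap = np.random.randn(d, s + 2)        # a toy feature map after wide convolution
k1 = dynamic_k(i=1, L=L, s=s)           # k for the first layer
print(k1, k_max_pool(fmap, k1).shape)
```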
As a result, the second convolution layer in Figure 1 has an input with two same-size feature maps, one resulting from filter size 3, one from filter size 5. The values in the two feature maps are for phrases of different granularity. The motivation for this convolution layer is that a feature reflected by a short phrase may not be trustworthy while the longer phrase containing it is, or, conversely, a long phrase may have no trustworthy feature while its component short phrase is more reliable. This and even higher-order convolution layers can therefore trade off between features of different granularity.
Hidden Layer. On the top of the final k-max pooling, we stack a fully connected layer to learn sentence representation with given dimension (e.g., $d$ ).
Logistic Regression Layer. Finally, sentence representation is forwarded into logistic regression layer for classification.
In brief, our MVCNN model learns from BIBREF4 to use dynamic k-max pooling to stack multiple convolution layers, and gets insight from BIBREF5 to investigate variable-size filters in a convolution layer. Compared to BIBREF4 , MVCNN has rich feature maps as input and as output of each convolution layer. Its convolution operation is not only more flexible to extract features of variable-range phrases, but also able to model dependency among all dimensions of representations. MVCNN extends the network in BIBREF5 by hierarchical convolution architecture and further exploration of multichannel and variable-size feature detectors.
Model Enhancements
This part introduces two training tricks that enhance the performance of MVCNN in practice.
Mutual-Learning of Embedding Versions. One observation in using multiple embedding versions is that they have different vocabulary coverage. An unknown word in an embedding version may be a known word in another version. Thus, there exists a proportion of words that can only be partially initialized by certain versions of word embeddings, which means these words lack the description from other versions.
To alleviate this problem, we design a mutual-learning regime to predict representations of unknown words for each embedding version by learning projections between versions. As a result, all embedding versions have the same vocabulary. This processing ensures that more words in each embedding version receive a good representation, and is expected to give most words occurring in a classification dataset more comprehensive initialization (as opposed to just being randomly initialized).
Let $c$ be the number of embedding versions in consideration, $V_1, V_2, \ldots , V_i, \ldots , V_c$ their vocabularies, $V^*=\cup ^c_{i=1} V_i$ their union, and $V_i^-=V^*\backslash V_i$ ( $i=1, \ldots , c$ ) the vocabulary of unknown words for embedding version $i$ . Our goal is to learn embeddings for the words in $V_i^-$ by knowledge from the other $c-1$ embedding versions.
We use the overlapping vocabulary between $V_i$ and $V_j$ , denoted as $V_{ij}$ , as training set, formalizing a projection $f_{ij}$ from space $V_i$ to space $V_j$ ( $i\ne j; i, j\in \lbrace 1,2,\ldots ,c\rbrace $ ) as follows:
$$\mathbf {\hat{w}}_j=\mathbf {M}_{ij}\mathbf {w}_i$$ (Eq. 6)
where $\mathbf {M}_{ij}\in \mathbb {R}^{d\times d}$ , $\mathbf {w}_i\in \mathbb {R}^d$ denotes the representation of word $w$ in space $V_i$ and $\mathbf {\hat{w}}_j$ is the projected (or learned) representation of word $w$ in space $V_j$ . Squared error between $\mathbf {w}_j$ and $\mathbf {\hat{w}}_j$ is the training loss to minimize. We use $\mathbf {\hat{w}}_j=f_{ij}(\mathbf {w}_i)$ to reformat Equation (6). In total, $c(c-1)/2$ projections $f_{ij}$ are trained, each on the corresponding vocabulary intersection $V_{ij}$ . Let $w$ be a word that is unknown in $V_i$ but known in versions $V_1, V_2, \ldots , V_k$ . To compute a representation of $w$ in space $V_i$ , we project its $k$ known representations into $V_i$ , obtaining $f_{1i}(\mathbf {w}_1), f_{2i}(\mathbf {w}_2), \ldots , f_{ki}(\mathbf {w}_k)$ , and take their element-wise average as the representation of $w$ in $V_i$ .
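The following sketch illustrates mutual-learning with two toy embedding versions; a closed-form least-squares solve stands in for the gradient-based training of $\mathbf {M}_{ij}$ , and the toy vectors are not the real embedding tables.

```python
import numpy as np

# Sketch of mutual-learning with two toy embedding versions (d = 4).
np.random.seed(0)
d = 4
V1 = {w: np.random.randn(d) for w in ["cat", "dog", "car", "tree", "red"]}
V2 = {w: np.random.randn(d) for w in ["cat", "dog", "car", "tree", "blue"]}

# Train f_12 on the overlapping vocabulary V_12: minimize ||w_1 M - w_2||^2.
overlap = sorted(V1.keys() & V2.keys())
X = np.stack([V1[w] for w in overlap])          # source vectors (rows)
Y = np.stack([V2[w] for w in overlap])          # target vectors (rows)
M12, *_ = np.linalg.lstsq(X, Y, rcond=None)     # w_2_hat = w_1 @ M12

# "red" is unknown in V2; predict its V2 representation by projection.
# With more than two versions, the projections from all versions that
# know the word would be averaged element-wise.
red_in_V2 = V1["red"] @ M12
print(red_in_V2)
```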
As discussed in Section "Model Description" , we found that for the binary sentiment classification dataset, many words were unknown in at least one embedding version. But of these words, a total of 5022 words did have coverage in another embedding version and so will benefit from mutual-learning. In the experiments, we will show that this is a very effective method to learn representations for unknown words that increases system performance if learned representations are used for initialization.
Pretraining. Sentence classification systems are usually implemented as supervised training regimes where training loss is between true label distribution and predicted label distribution. In this work, we use pretraining on the unlabeled data of each task and show that it can increase the performance of classification systems.
Figure 1 shows our pretraining setup. The “sentence representation” – the output of “Fully connected” hidden layer – is used to predict the component words (“on” in the figure) in the sentence (instead of predicting the sentence label Y/N as in supervised learning). Concretely, the sentence representation is averaged with representations of some surrounding words (“the”, “cat”, “sat”, “the”, “mat”, “,” in the figure) to predict the middle word (“on”).
Given sentence representation $\mathbf {s}\in \mathbb {R}^d$ and initialized representations of $2t$ context words ( $t$ left words and $t$ right words): $\mathbf {w}_{i-t}$ , $\ldots $ , $\mathbf {w}_{i-1}$ , $\mathbf {w}_{i+1}$ , $\ldots $ , $\mathbf {w}_{i+t}$ ; $\mathbf {w}_g\in \mathbb {R}^d$ , we average the total $2t+1$ vectors element-wise, depicted as the “Average” operation in Figure 1 . Then, this resulting vector is treated as a predicted representation of the middle word and is used to find the true middle word by means of noise-contrastive estimation (NCE) BIBREF18 . For each true example, 10 noise words are sampled.
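A simplified sketch of this pretraining step is given below: the sentence representation is averaged with the context-word vectors and the result is scored against the true middle word and sampled noise words with a logistic loss. All vectors are toy placeholders, and a full NCE implementation would also include the correction term for the noise distribution.

```python
import numpy as np

# Simplified sketch of the pretraining objective: average the sentence
# representation with the 2t context-word vectors, then score the true
# middle word against sampled noise words (NCE-style logistic loss).
np.random.seed(0)
d, vocab_size, t, n_noise = 50, 1000, 3, 10

sent_repr = np.random.randn(d)                      # output of the fully connected layer
context = np.random.randn(2 * t, d)                 # t left + t right context words
target_emb = np.random.randn(vocab_size, d)         # output ("NCE") word embeddings

pred = np.vstack([sent_repr, context]).mean(axis=0) # element-wise average of 2t + 1 vectors

true_word = 42
noise_words = np.random.randint(0, vocab_size, size=n_noise)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Push the true word's score up, the noise words' scores down.
loss = -np.log(sigmoid(target_emb[true_word] @ pred))
loss += -np.log(1.0 - sigmoid(target_emb[noise_words] @ pred)).sum()
print(float(loss))
```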
Note that in pretraining, there are three places where each word needs initialization. (i) Each word in the sentence is initialized in the “Multichannel input” layer to the whole network. (ii) Each context word is initialized as input to the average layer (“Average” in the figure). (iii) Each target word is initialized as the output of the “NCE” layer (“on” in the figure). In this work, we use multichannel initialization for case (i) and random initialization for cases (ii) and (iii). Only fine-tuned multichannel representations (case (i)) are kept for subsequent supervised training.
The rationale for this pretraining is similar to that of an autoencoder: for an object composed of smaller-granular elements, the representations of the whole object and its components can learn from each other. The CNN architecture learns sentence features layer by layer, and those features are then justified by all constituent words.
During pretraining, all the model parameters, including the multichannel input, convolution parameters and fully connected layer, are updated until they are mature enough to extract the sentence features. Subsequently, the same sets of parameters are fine-tuned for supervised classification tasks.
In sum, this pretraining is designed to produce good initial values for both model parameters and word embeddings. It is especially helpful for pretraining the embeddings of unknown words.
Experiments
We test the network on four classification tasks. We begin by specifying aspects of the implementation and the training of the network. We then report the results of the experiments.
Hyperparameters and Training
In each of the experiments, the top of the network is a logistic regression that predicts the probability distribution over classes given the input sentence. The network is trained to minimize cross-entropy of predicted and true distributions; the objective includes an $L_2$ regularization term over the parameters. The set of parameters comprises the word embeddings, all filter weights and the weights in fully connected layers. A dropout operation BIBREF19 is applied before the logistic regression layer. The network is trained by back-propagation in mini-batches and the gradient-based optimization is performed using the AdaGrad update rule BIBREF20 .
In all data sets, the initial learning rate is 0.01, dropout probability is 0.8, $L_2$ weight is $5\cdot 10^{-3}$ , batch size is 50. In each convolution layer, filter sizes are {3, 5, 7, 9} and each filter has five kernels (independent of filter size).
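For reference, a minimal sketch of the AdaGrad update rule is shown below; the learning rate 0.01 matches the setting above, while the epsilon constant and the toy gradients are illustrative.

```python
import numpy as np

# Sketch of the AdaGrad update rule: each parameter gets a learning rate
# scaled by the inverse square root of its accumulated squared gradients.
def adagrad_update(params, grads, cache, lr=0.01, eps=1e-8):
    cache += grads ** 2
    params -= lr * grads / (np.sqrt(cache) + eps)
    return params, cache

np.random.seed(0)
params = np.random.randn(5)
cache = np.zeros_like(params)
for _ in range(3):                      # a few toy steps with random "gradients"
    grads = np.random.randn(5)
    params, cache = adagrad_update(params, grads, cache)
print(params)
```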
Datasets and Experimental Setup
Stanford Sentiment Treebank BIBREF21 . This small-scale dataset includes two tasks predicting the sentiment of movie reviews. The output variable is binary in one experiment and can have five possible outcomes in the other: {negative, somewhat negative, neutral, somewhat positive, positive}. In the binary case, we use the given split of 6920 training, 872 development and 1821 test sentences. Likewise, in the fine-grained case, we use the standard 8544/1101/2210 split. socher2013recursive used the Stanford Parser BIBREF22 to parse each sentence into subphrases. The subphrases were then labeled by human annotators in the same way as the sentences were labeled. Labeled phrases that occur as subparts of the training sentences are treated as independent training instances as in BIBREF23 , BIBREF4 .
Sentiment140 BIBREF24 . This is a large-scale dataset of tweets about sentiment classification, where a tweet is automatically labeled as positive or negative depending on the emoticon that occurs in it. The training set consists of 1.6 million tweets with emoticon-based labels and the test set of about 400 hand-annotated tweets. We preprocess the tweets minimally as follows. 1) The equivalence class symbol “url” (resp. “username”) replaces all URLs (resp. all words that start with the @ symbol, e.g., @thomasss). 2) A sequence of $k>2$ repetitions of a letter $c$ (e.g., “cooooooool”) is replaced by two occurrences of $c$ (e.g., “cool”). 3) All tokens are lowercased.
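A sketch of these three preprocessing rules is given below; the exact regular expressions are our assumptions, not the original preprocessing code.

```python
import re

def preprocess_tweet(text: str) -> str:
    """Sketch of the three Sentiment140 preprocessing rules described above.
    The regular expressions are assumptions, not the authors' code."""
    text = re.sub(r"https?://\S+|www\.\S+", "url", text)    # 1) replace URLs
    text = re.sub(r"@\w+", "username", text)                # 1) replace @-mentions
    text = re.sub(r"(\w)\1{2,}", r"\1\1", text)             # 2) "cooooooool" -> "cool"
    return text.lower()                                      # 3) lowercase

print(preprocess_tweet("This is soooooo cooooool @thomasss http://t.co/xyz"))
# -> "this is soo cool username url"
```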
Subj. Subjectivity classification dataset released by BIBREF25 has 5000 subjective sentences and 5000 objective sentences. We report the result of 10-fold cross validation as baseline systems did.
In this work, we use five embedding versions, as shown in Table 1 , to initialize words. Four of them are directly downloaded from the Internet. (i) HLBL. Hierarchical log-bilinear model presented by mnih2009scalable and released by turian2010word; size: 246,122 word embeddings; training corpus: RCV1 corpus, one year of Reuters English newswire from August 1996 to August 1997. (ii) Huang. huang2012improving incorporated global context to deal with challenges raised by words with multiple meanings; size: 100,232 word embeddings; training corpus: April 2010 snapshot of Wikipedia. (iii) GloVe. Size: 1,193,514 word embeddings; training corpus: a Twitter corpus of 2B tweets with 27B tokens. (iv) SENNA. Size: 130,000 word embeddings; training corpus: Wikipedia. Note that we use their 50-dimensional embeddings. (v) Word2Vec. It has no 50-dimensional embeddings available online. We use the released code to train skip-gram on the English Gigaword Corpus BIBREF26 with the following setup: window size 5, negative sampling, sampling rate $10^{-3}$ , threads 12. It is worth emphasizing that the above embedding sets are derived from different corpora with different algorithms. This is the very property that we want to make use of to improve system performance.
Table 2 shows the number of unknown words in each task when using corresponding embedding version to initialize (rows “HLBL”, “Huang”, “Glove”, “SENNA”, “W2V”) and the number of words fully initialized by five embedding versions (“Full hit” row), the number of words partially initialized (“Partial hit” row) and the number of words that cannot be initialized by any of the embedding versions (“No hit” row).
About 30% of words in each task have partially initialized embeddings and our mutual-learning is able to initialize the missing embeddings through projections. Pretraining is expected to learn good representations for all words, but pretraining is especially important for words without initialization (“no hit”); a particularly clear example for this is the Senti140 task: 236,484 of 387,877 words or 61% are in the “no hit” category.
Table 3 compares results on test of MVCNN and its variants with other baselines in the four sentence classification tasks. Row 34, “MVCNN (overall)”, shows performance of the best configuration of MVCNN, optimized on dev. This version uses five versions of word embeddings, four filter sizes (3, 5, 7, 9), both mutual-learning and pretraining, three convolution layers for Senti140 task and two convolution layers for the other tasks. Overall, our system gets the best results, beating all baselines.
The table contains five blocks from top to bottom. Each block investigates one specific configurational aspect of the system. All results in the five blocks are with respect to row 34, “MVCNN (overall)”; e.g., row 19 shows what happens when HLBL is removed from row 34, row 28 shows what happens when mutual learning is removed from row 34 etc.
The block “baselines” (1–18) lists some systems representative of previous work on the corresponding datasets, including the state-of-the-art systems (marked in italics). The block “versions” (19–23) shows the results of our system when one of the embedding versions was not used during training. We want to explore to what extent different embedding versions contribute to performance. The block “filters” (24–27) gives the results when an individual filter width is discarded. It also tells us how much a filter of a specific size contributes. The block “tricks” (28–29) shows the system performance when no mutual-learning or no pretraining is used. The block “layers” (30–33) demonstrates how the system performs when it has different numbers of convolution layers.
From the “layers” block, we can see that our system performs best with two layers of convolution in the Stanford Sentiment Treebank and Subjectivity Classification tasks (row 31), but with three layers of convolution in Sentiment140 (row 32). This is probably due to Sentiment140 being a much larger dataset; in such a case deeper neural networks are beneficial.
The block “tricks” demonstrates the effect of mutual-learning and pretraining. Apparently, pretraining has a bigger impact on performance than mutual-learning. We speculate that it is because pretraining can influence more words and all learned word embeddings are tuned on the dataset after pretraining.
The block “filters” indicates the contribution of each filter size. The system benefits from filters of each size. Sizes 5 and 7 are most important for high performance, especially 7 (rows 25 and 26).
In the block “versions”, we see that each embedding version is crucial for good performance: performance drops in every single case. Though it is not easy to fairly compare different embedding versions in NLP tasks, especially when those embeddings were trained on different corpora of different sizes using different algorithms, our results are potentially instructive for researchers deciding which embeddings to use for their own tasks.
Conclusion
This work presented MVCNN, a novel CNN architecture for sentence classification. It combines multichannel initialization – diverse versions of pretrained word embeddings are used – and variable-size filters – features of multigranular phrases are extracted with variable-size convolution filters. We demonstrated that multichannel initialization and variable-size filters enhance system performance on sentiment classification and subjectivity classification tasks.
Future Work
As pointed out by the reviewers the success of the multichannel approach is likely due to a combination of several quite different effects.
First, there is the effect of the embedding learning algorithm. These algorithms differ in many aspects, including in sensitivity to word order (e.g., SENNA: yes, word2vec: no), in objective function and in their treatment of ambiguity (explicitly modeled only by huang2012improving).
Second, there is the effect of the corpus. We would expect the size and genre of the corpus to have a big effect even though we did not analyze this effect in this paper.
Third, complementarity of word embeddings is likely to be more useful for some tasks than for others. Sentiment is a good application for complementary word embeddings because solving this task requires drawing on heterogeneous sources of information, including syntax, semantics and genre as well as the core polarity of a word. Other tasks like part of speech (POS) tagging may benefit less from heterogeneity since the benefit of embeddings in POS often comes down to making a correct choice between two alternatives – a single embedding version may be sufficient for this.
We plan to pursue these questions in future work.
Acknowledgments
Thanks to CIS members and anonymous reviewers for constructive comments. This work was supported by Baidu (through a Baidu scholarship awarded to Wenpeng Yin) and by Deutsche Forschungsgemeinschaft (grant DFG SCHU 2246/8-2, SPP 1335). | MVCNN, a novel CNN architecture for sentence classification. It combines multichannel initialization – diverse versions of pretrained word embeddings are used – and variable-size filters – features of multigranular phrases are extracted with variable-size convolution filters. |
ea6764a362bac95fb99969e9f8c773a61afd8f39 | ea6764a362bac95fb99969e9f8c773a61afd8f39_0 | Q: What is the highest accuracy score achieved?
Text: Introduction
The challenge in Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), is to correctly decide whether a sentence (referred to as a premise) entails or contradicts or is neutral in respect to another sentence (a hypothesis). This classification task requires various natural language comprehension skills. In this paper, we are focused on the following natural language generation task based on NLI. Given the premise the goal is to generate a stream of hypotheses that comply with the label (entailment, contradiction or neutral). In addition to reading capabilities this task also requires language generation capabilities.
The Stanford Natural Language Inference (SNLI) Corpus BIBREF0 is an NLI dataset that contains over half a million examples. The size of the dataset is sufficient to train powerful neural networks. Several successful classification neural networks have already been proposed BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . In this paper, we utilize SNLI to train generative neural networks. Each example in the dataset consists of two human-written sentences, a premise and a hypothesis, and a corresponding label that describes the relationship between them. A few examples are presented in Table TABREF1 .
The proposed generative networks are trained to generate a hypothesis given a premise and a label, which allows us to construct new, unseen examples. Some generative models are built to generate a single optimal response given the input. Such models have been applied to machine translation BIBREF5 , image caption generation BIBREF6 , or dialogue systems BIBREF7 . Another type of generative model is the autoencoder, which generates a stream of random samples from the original distribution. For instance, autoencoders have been used to generate text BIBREF8 , BIBREF9 , and images BIBREF10 . In our setting we combine both approaches to generate a stream of random responses (hypotheses) that comply with the input (premise, label).
But what is a good stream of hypotheses? We argue that a good stream contains diverse, comprehensible, accurate and non-trivial hypotheses. A hypothesis is comprehensible if it is grammatical and semantically makes sense. It is accurate if it clearly expresses the relationship (signified by the label) with the premise. Finally, it is non-trivial if it is not trivial to determine the relationship (label) between the hypothesis and premise. For instance, given a premise ”A man drives a red car” and label entailment, the hypothesis ”A man drives a car” is more trivial than ”A person is sitting in a red vehicle”.
The next question is how to automatically measure the quality of generated hypotheses. One way is to use metrics that are standard in text generation tasks, for instance ROUGE BIBREF11 , BLEU BIBREF12 , METEOR BIBREF13 . These metrics estimate the similarity between the generated text and the original reference text. In our task they can be used by comparing the generated and reference hypotheses with the same premise and label. The main issue with these metrics is that they penalize diversity, since they penalize generated hypotheses that are dissimilar to the reference hypothesis. An alternative is to use an NLI classifier to test whether the input label is correct for the generated hypothesis with respect to the premise. A perfect classifier would not penalize diverse hypotheses and would reward accurate and (arguably to some degree) comprehensible hypotheses. However, it would not reward non-trivial hypotheses.
Non-trivial examples are essential in a dataset for training a capable machine learning model. Furthermore, we make the following hypothesis.
A good dataset for training a NLI classifier consists of a variety of accurate, non-trivial and comprehensible examples.
Based on this hypothesis, we propose the following approach for evaluation of generative models, which is also presented in Figure FIGREF2 . First, the generative model is trained on the original training dataset. Then, the premise and label from an example in the original dataset are taken as the input to the generative model to generate a new random hypothesis. The generated hypothesis is combined with the premise and the label to form a new unseen example. This is done for every example in the original dataset to construct a new dataset. Next, a classifier is trained on the new dataset. Finally, the classifier is evaluated on the original test set. The accuracy of the classifier is the proposed quality metric for the generative model. It can be compared to the accuracy of the classifier trained on the original training set and tested on the original test set.
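The procedure can be summarized by the following sketch; the helper functions are trivial stand-ins for the generative model, the hypothesis sampler, the classifier training and the test-set evaluation described in later sections, included only so the control flow is runnable.

```python
import random

# Sketch of the proposed evaluation procedure. The four helpers are
# randomly behaving placeholders; in the paper they correspond to the
# generative model, hypothesis sampling, classifier training and testing.
random.seed(0)
LABELS = ["entailment", "contradiction", "neutral"]

def train_generative(train_set):                 # stand-in for e.g. EmbedDecoder training
    return {"trained_on": len(train_set)}

def generate_hypothesis(model, premise, label):  # stand-in for sampling a hypothesis
    return f"generated hypothesis for '{premise}' ({label})"

def train_classifier(dataset):                   # stand-in for the NLI classifier
    return {"trained_on": len(dataset)}

def evaluate(classifier, test_set):              # stand-in for test-set accuracy
    return sum(random.random() < 0.8 for _ in test_set) / len(test_set)

def evaluate_generative_model(train_set, test_set):
    gen_model = train_generative(train_set)
    # Regenerate the training set: keep each premise and label,
    # replace the hypothesis with a freshly generated one.
    new_train = [
        {"premise": ex["premise"], "label": ex["label"],
         "hypothesis": generate_hypothesis(gen_model, ex["premise"], ex["label"])}
        for ex in train_set
    ]
    classifier = train_classifier(new_train)
    # Accuracy on the ORIGINAL test set is the quality metric for the generative model.
    return evaluate(classifier, test_set)

toy = [{"premise": "A man drives a red car.", "label": random.choice(LABELS),
        "hypothesis": "A man drives a car."} for _ in range(10)]
print(evaluate_generative_model(toy, toy))
```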
The generative models learn solely from the original training set to regenerate the dataset. Thus, the model learns the distribution of the original dataset. Furthermore, the generated dataset is just a random sample from the estimated distribution. To determine how well did the generative model learn the distribution, we observe how close does the accuracy of the classifier trained on the generated dataset approach the accuracy of classifier trained on the original dataset.
Our flagship generative network EmbedDecoder works in a similar fashion as the encoder-decoder networks, where the encoder is used to transform the input into a low-dimensional latent representation, from which the decoder reconstructs the input. The difference is that EmbedDecoder consists only of the decoder, and the latent representation is learned as an embedding for each training example separately. In our models, the latent representation represents the mapping between the premise and the label on one side and the hypothesis on the other side.
Our main contributions are i) a novel generative neural network, which consist of the decoder that learns a mapping embedding for each training example separately, ii) a procedure for generating NLI datasets automatically, iii) and a novel evaluation metric for NLI generative models – the accuracy of the classifier trained on the generated dataset.
In Section SECREF2 we present the related work. In Section SECREF3 the considered neural networks are presented. Besides the main generative networks, we also present classification and discriminative networks, which are used for evaluation. The results are presented in Section SECREF5 , where the generative models are evaluated and compared. From the experiments we can see that the best dataset was generated by the attention-based model EmbedDecoder. The classifier on this dataset achieved accuracy of INLINEFORM0 , which is INLINEFORM1 less than the accuracy achieved on the original dataset. We also investigate the influence of latent dimensionality on the performance, compare different evaluation metrics, and provide deeper insights of the generated datasets. The conclusion is presented in Section SECREF6 .
Related Work
NLI has been the focal point of Recognizing Textual Entailment (RTE) Challenges, where the goal is to determine if the premise entails the hypothesis or not. The proposed approaches for RTE include bag-of-words matching approach BIBREF14 , matching predicate argument structure approach BIBREF15 and logical inference approach BIBREF16 , BIBREF17 . Another rule-based inference approach was proposed by BIBREF18 . This approach allows generation of new hypotheses by transforming parse trees of the premise while maintaining entailment. BIBREF19 proposes an approach for constructing training datasets by extracting sentences from news articles that tend to be in an entailment relationship.
After the SNLI dataset was released, several neural network approaches for NLI classification emerged BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . The state-of-the-art model BIBREF4 achieves INLINEFORM0 accuracy on the SNLI dataset. A similar generation approach to ours was proposed by BIBREF20 , whose goal is to generate entailment inference chains; only examples with the entailment label are used.
Natural Language Generation (NLG) is the task of generating natural language from a structured form such as a knowledge base or logic form BIBREF21 , BIBREF22 , BIBREF23 . The input in our task is unstructured text (premise) and a label. On the other side of this spectrum, there are tasks that deal solely with unstructured text, like machine translation BIBREF24 , BIBREF25 , BIBREF26 , summarization BIBREF27 , BIBREF28 and conversational dialogue systems BIBREF7 , BIBREF29 . Another recently popular task is generating captions from images BIBREF30 , BIBREF31 .
With the advancement of deep learning, many neural network approaches have been introduced for generating sequences. The Recurrent Neural Network Language Model (RNNLM) BIBREF32 is one of the simplest neural architectures for generating text. The approach was extended by BIBREF5 , which uses an encoder-decoder architecture to generate a sequence from the input sequence. The Hierarchical Recurrent Encoder-Decoder (HRED) architecture BIBREF7 generates sequences from several input sequences. These models offer very little variety in the output sequences; the variety they do have is obtained by modeling the output distribution of the language model. To introduce more variety, models based on the variational autoencoder (VAE) BIBREF33 have been proposed. These models use stochastic random variables as a source of variety. In BIBREF8 a latent variable is used to initialize the RNN that generates sentences, while the variational recurrent neural network (VRNN) BIBREF34 models the dependencies between latent variables across subsequent steps of the RNN. The Latent Variable Hierarchical Recurrent Encoder-Decoder (VHRED) BIBREF35 extends the HRED by incorporating latent variables, which are learned similarly as in the VAE. The latent variables are, like in some of our models, used to represent the mappings between sequences. Conditional variational autoencoders (CVAEs) BIBREF36 were used to generate images from continuous visual attributes. These attributes are conditional information that is fed to the models, like the discrete label is in our models.
As recognized by BIBREF37 , the evaluation metrics of text-generating models fall into three categories: manual evaluation, automatic evaluation metrics, task-based evaluation. In evaluation based on human judgment each generated textual example is inspected manually. The automatic evaluation metrics, like ROUGE, BLEU and METEOR, compare human texts and generated texts. BIBREF38 shows METEOR has the strongest correlation with human judgments in image description evaluation. The last category is task-based evaluation, where the impact of the generated texts on a particular task is measured. This type of evaluation usually involves costly and lengthy human involvement, like measuring the effectiveness of smoking-cessation letters BIBREF39 . On the other hand, the task in our evaluation, the NLI classification, is automatic. In BIBREF40 ranking was used as an automatic task-based evaluation for associating images with captions.
Models
In this section, we present several neural networks used in the experiments. We start with variants of Recurrent Neural Networks, which are essential layers in all our models. Then, we present classification networks, which are needed for the evaluation of the generative neural networks presented in the following section. Next, we present how generative networks are used to generate hypotheses. Finally, we present discriminative networks, which are used for evaluation and analysis of the hypotheses.
The premise INLINEFORM0 and hypothesis INLINEFORM1 are represented with word embeddings INLINEFORM2 and INLINEFORM3 respectively. Each INLINEFORM4 is a INLINEFORM5 -dimensional vector that represents the corresponding word, INLINEFORM6 is the length of premise, and INLINEFORM7 is the length of hypothesis. The labels (entailment, contradiction, neutral) are represented by a 3-dimensional vector INLINEFORM8 if the label is the output of the model, or INLINEFORM9 if the label is the input to the model.
Recurrent Neural Networks
The Recurrent Neural Networks (RNNs) are neural networks suitable for processing sequences. They are the basic building block in all our networks. We use two variants of RNNs – Long short term memory (LSTM) network BIBREF41 and an attention-based extension of LSTM, the mLSTM BIBREF2 . The LSTM tends to learn long-term dependencies better than vanilla RNNs. The input to the LSTM is a sequence of vectors INLINEFORM0 , and the output is a sequence of vectors INLINEFORM1 . At each time point INLINEFORM2 , input gate INLINEFORM3 , forget gate INLINEFORM4 , output gate INLINEFORM5 , cell state INLINEFORM6 and one output vector INLINEFORM7 are calculated. DISPLAYFORM0
where INLINEFORM0 is a sigmoid function, INLINEFORM1 is the element-wise multiplication operator, INLINEFORM2 and INLINEFORM3 are parameter matrices, INLINEFORM4 parameter vectors, INLINEFORM5 is the input vector dimension, and INLINEFORM6 is the output vector dimension. The vectors INLINEFORM7 and INLINEFORM8 are set to zero in the standard setting, however, in some cases in our models, they are set to a value that is the result of previous layers.
The mLSTM is an attention-based model with two input sequences – premise and hypothesis in case of NLI. Each word of the premise is matched against each word of the hypothesis to find the soft alignment between the sentences. The mLSTM is based on LSTM in such a way that it remembers the important matches and forgets the less important. The input to the LSTM inside the mLSTM at each time step is INLINEFORM0 , where INLINEFORM1 is an attention vector that represents the weighted sum of premise sequence, where the weights present the degree to which each token of the premise is aligned with the INLINEFORM2 -th token of the hypothesis INLINEFORM3 , and INLINEFORM4 is the concatenation operator. More details about mLSTM are presented in BIBREF2 .
Classification model
The classification model predicts the label of the example given the premise and the hypothesis. We use the mLSTM-based model proposed by BIBREF2 .
The architecture of the model is presented in Figure FIGREF9 . The embeddings of the premise INLINEFORM0 and hypothesis INLINEFORM1 are the input to the first two LSTMs to obtain the hidden states of the premise INLINEFORM2 and hypothesis INLINEFORM3 . DISPLAYFORM0
All the hidden states in our models are INLINEFORM0 -dimensional unless otherwise noted. The hidden states INLINEFORM1 and INLINEFORM2 are the input to the mLSTM layer. The output of mLSTM are hidden states INLINEFORM3 , although only the last state INLINEFORM4 is further used. A fully connected layer transforms it into a 3-dimensional vector, on top of which softmax function is applied to obtain the probabilities INLINEFORM5 of labels. DISPLAYFORM0
where INLINEFORM0 represents the fully connected layer, whose output size is INLINEFORM1 .
Generative models
The goal of the proposed generative models is to generate a diverse stream of hypotheses given the premise and the label. In this section, we present four variants of generative models: two variants of the EmbedDecoder model presented in Figure FIGREF11 , and two variants of the EncoderDecoder model presented in Figure FIGREF11 .
All models learn a latent representation INLINEFORM0 that represents the mapping between the premise and the label on one side, and the hypothesis on the other side. The EmbedDecoder models learn the latent representation by learning an embedding of the mapping for each training example separately. The embedding for the INLINEFORM1 -th training example INLINEFORM2 is a INLINEFORM3 -dimensional trainable parameter vector. Consequently, INLINEFORM4 is a parameter matrix of all embeddings, where INLINEFORM5 is the number of training examples. On the other hand, in the EncoderDecoder models the latent representation is the output of the encoder.
The EmbedDecoder models are trained to predict the next word of the hypothesis given the previous words of hypothesis, the premise, the label, and the latent representation of the example. DISPLAYFORM0
where INLINEFORM0 represent parameters other than INLINEFORM1 , and INLINEFORM2 is the length of the hypothesis INLINEFORM3 .
The AttEmbedDecoder, presented in Figure FIGREF26 , is attention based variant of EmbedDecoder. The same mLSTM layer is used as in classification model. However, the initial cell state INLINEFORM0 of mLSTM is constructed from the latent vector and the label input. DISPLAYFORM0
For the sake of simplifying the notation, we dropped the superscript INLINEFORM0 from the equations, except in INLINEFORM1 , where we explicitly want to state that the embedding vector is used.
The premise and the hypothesis are first processed by LSTM and then fed into the mLSTM, like in the classification model, however here the hypothesis is shifted. The first word of the hypothesis input is an empty token INLINEFORM0 null INLINEFORM1 , symbolizing the empty input sequence when predicting the first word. The output of the mLSTM is a hidden state INLINEFORM2 , where each INLINEFORM3 represents an output word. To obtain the probabilities for all the words in the vocabulary INLINEFORM4 for the position INLINEFORM5 in the output sequence, INLINEFORM6 is first transformed into a vocabulary-sized vector, then the softmax function is applied. DISPLAYFORM0
where V is the size of the vocabulary. But, due to the large size of the vocabulary, a two-level hierarchical softmax BIBREF42 was used instead of a regular softmax to reduce the number of parameters updated during each training step. DISPLAYFORM0
In the training step, the last output word INLINEFORM0 is set to INLINEFORM1 null INLINEFORM2 , while in the generating step, it is ignored.
In the EmbedDecoder model without attention, BaseEmbedDecoder, the mLSTM is replaced by a regular LSTM. The input to this LSTM is the shifted hypothesis. But, here the premise is provided through the initial cell state INLINEFORM0 . Specifically, last hidden state of the premise is merged with class input and the latent representation, then fed to the LSTM. DISPLAYFORM0
In order to not lose information INLINEFORM0 was picked to be equal to sum of the sizes of INLINEFORM1 , INLINEFORM2 and INLINEFORM3 . Thus, INLINEFORM4 . Since the size of INLINEFORM5 is INLINEFORM6 , the output vectors of the LSTM are also the size of INLINEFORM7 .
We also present two variants of EncoderDecoder models, a regular one BaseEncodeDecoder, and a regularized one VarEncoderDecoder, which is based on Variational Bayesian approach. As presented in Figure FIGREF11 , all the information (premise, hypothesis, label) is available to the encoder, whose output is the latent representation INLINEFORM0 . On the other hand, the decoder is provided with the same premise and label, but the hypothesis is shifted. This forces the encoder to learn to encode only the missing information – the mapping between premise-label pair and the hypothesis. The encoder has a similar structure as the classification model in Figure FIGREF9 . Except that the label is connected to the initial cell state of the mLSTM DISPLAYFORM0
and the output of mLSTM INLINEFORM0 is transformed into latent representation INLINEFORM1 DISPLAYFORM0
The decoder is the same as in EmbedDecoder.
The VarEncoderDecoder model is based on the Variational Autoencoder from BIBREF33 . Instead of using single points for the latent representation as in all previous models, the latent representation in VarEncoderDecoder is represented as a continuous variable INLINEFORM0 . Thus, the mappings are represented as soft elliptical regions in the latent space, instead of single points, which forces the model to fill up the latent space BIBREF8 . Both INLINEFORM1 and INLINEFORM2 are calculated from the output of the encoder using two different fully connected layers. INLINEFORM3
To sample from the distribution the reparametrization trick is applied DISPLAYFORM0
When training, a single sample is generated per example to generate INLINEFORM0 .
As in BIBREF33 , the following regularization term is added to the loss function DISPLAYFORM0
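A minimal sketch of this latent sampling is given below, assuming the standard diagonal-Gaussian form of BIBREF33 for both the reparametrization trick and the KL regularization term, since the exact formulas are not reproduced here.

```python
import numpy as np

# Sketch of the VarEncoderDecoder latent sampling, assuming the standard
# diagonal-Gaussian VAE: the encoder outputs mu and log-variance, z is
# sampled with the reparametrization trick, and the KL divergence to
# N(0, I) is added to the training loss.
np.random.seed(0)
latent_dim = 8
mu = np.random.randn(latent_dim) * 0.1        # encoder output (toy values)
log_var = np.random.randn(latent_dim) * 0.1   # encoder output (toy values)

eps = np.random.randn(latent_dim)             # eps ~ N(0, I)
z = mu + np.exp(0.5 * log_var) * eps          # reparametrization: z = mu + sigma * eps

# KL( N(mu, sigma^2) || N(0, I) ) for a diagonal Gaussian.
kl = -0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var))
print(z.shape, float(kl))
```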
Generating hypotheses
In the generation phase only decoder of a trained generative model is used. It generates a hypothesis given the premise, label, and a randomly selected latent vector INLINEFORM0 . A single word is generated in each step, and it becomes the hypothesis input in the next step. DISPLAYFORM0
We also used beam search to optimize hypothesis generation. Similarly as in BIBREF5 , a small number of hypotheses are generated given a single input, and then the best is selected. In INLINEFORM0 -beam search, in each time step the INLINEFORM1 best partial hypotheses are expanded by all the words in the vocabulary, producing INLINEFORM2 partial hypotheses. Out of these, the INLINEFORM3 best partial hypotheses are selected for the next step according to the joint probability of each partial hypothesis. Thus, when INLINEFORM4 is 1, the procedure is the same as the one presented in Eq EQREF24 . The generation ends when the INLINEFORM5 null INLINEFORM6 symbol is encountered or the maximum hypothesis length is reached. The random latent vector INLINEFORM10 is selected randomly from a normal distribution INLINEFORM11 , where INLINEFORM12 is the standard deviation of INLINEFORM13 .
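A sketch of this beam search is shown below; the next-word scoring function is a toy placeholder for the decoder's softmax over the vocabulary.

```python
import math

# Sketch of k-beam search for hypothesis generation. `next_word_logprobs`
# is a toy placeholder for the decoder's softmax; generation stops at
# "<null>" or at max_len.
VOCAB = ["a", "man", "drives", "car", "<null>"]

def next_word_logprobs(prefix):                     # placeholder decoder
    n = len(prefix)
    return {w: math.log((i + 1 + n) / 20.0) for i, w in enumerate(VOCAB)}

def beam_search(k=3, max_len=6):
    beams = [([], 0.0)]                             # (partial hypothesis, joint log-prob)
    for _ in range(max_len):
        candidates = []
        for words, score in beams:
            if words and words[-1] == "<null>":     # finished hypotheses are kept as-is
                candidates.append((words, score))
                continue
            for w, lp in next_word_logprobs(words).items():
                candidates.append((words + [w], score + lp))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    return beams[0]

print(beam_search())
```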
Discriminative model
The discriminative model is used to measure the distinguishability between the original human-written sentences and the generated ones. A higher error rate of the model means that the generative distribution is similar to the original distribution, which is one of the goals of the generative model. The model is based on Generative Adversarial Nets BIBREF10 , where in a single network the generative part tries to trick the discriminative part by generating images that are similar to the original images, and the discriminative part tries to distinguish between the original and generated images. Due to the discreteness of words (the output of our generative model) it is difficult to connect the discriminative and generative parts in a single differentiable network, so we construct them separately. The generative models have already been defined in Section SECREF10 . Here we define the discriminative model.
The discriminative model INLINEFORM0 takes sequence INLINEFORM1 and process it with LSTM and fully connected layer DISPLAYFORM0
In the training step, one original sequence INLINEFORM0 and one generated sequence INLINEFORM1 are processed by the discriminative model. The optimization function maximizes the following objective DISPLAYFORM0
In the testing step, the discriminative model predicts correctly if DISPLAYFORM0
Dataset Generation
To construct a new dataset, first a generative model is trained on the training set of the original dataset. Then, a new dataset is constructed by generating new hypotheses with the generative model. The premises and labels from the examples of the original dataset are taken as input for the generative model. The new hypotheses replace the training hypotheses in the new dataset.
Next, the classifier, presented in Section SECREF6 , is trained on the generated dataset. The accuracy of the new classifier is the main metric for evaluating the quality of the generated dataset.
Experiment details
All the experiments are performed on the SNLI dataset. There are 549,367 examples in the dataset, divided into training, development and test sets. Both the development and test sets contain around 10,000 examples. Some examples are labeled with '-', which means there was not enough consensus on them. These examples are excluded. Also, to speed up the computation we excluded examples that have a premise longer than 25 words or a hypothesis longer than 15 words. There were still INLINEFORM0 remaining examples. Both premises and hypotheses were padded with INLINEFORM1 null INLINEFORM2 symbols (empty words), so that all premises consisted of 25 words, and all hypotheses consisted of 15 tokens.
We use 50-dimensional word vectors trained with GloVe BIBREF43 . For words without pretrained embeddings, the embeddings are randomly selected from the normal distribution. Word embeddings are not updated during training.
For optimization Adam method BIBREF44 was used with suggested hyperparameters.
Classification models are trained until the loss on the validation set does not improve for three epochs. The model with best validation loss is retained.
Generative models are trained for 20 epochs, since it turned out that none of the stopping criteria were useful. With each generative model a new dataset is created. The new dataset consists of training set, which is generated using examples from the original training set, and a development set, which is generated from the original development set. The beam size for beam search was set to 1. The details of the decision are presented in Section SECREF35 .
Some datasets were constructed by filtering the generated datasets according to various thresholds. Thus, the generated datasets were constructed to contain enough examples so that the filtered datasets had at least as many examples as the original dataset. In the end, all the datasets were trimmed down to the size of the original dataset by selecting samples sequentially from the beginning until the dataset had the right size. Also, the datasets were filtered so that each of the labels was represented equally. All the models, including classification and discriminative models, were trained with hidden dimension INLINEFORM0 set to 150, unless otherwise noted.
Our implementation is accessible at http://github.com/jstarc/nli_generation. It is based on libraries Keras and Theano BIBREF45 .
Results
First, the classification model OrigClass was trained on the original dataset. This model was then used throughout the experiments for filtering the datasets, comparison, etc. Notice that we have assumed OrigClass to be ground truth for the purpose of our experiments. However, the accuracy of this model on the original test set was INLINEFORM0 , which is less than INLINEFORM1 , which was attained by the mLSTM (d=150) model in BIBREF2 . Both models are very similar, including the experimental settings; however, ours was trained and evaluated on a slightly smaller dataset.
Preliminary evaluation
Several AttEmbedDecoder models with various latent dimensions INLINEFORM0 were first trained and then used to generate new datasets. A couple of generated examples are presented in Table TABREF36 .
Figure FIGREF37 shows the accuracies of the generated development datasets evaluated by the OrigClass. The maximum accuracy of INLINEFORM0 was achieved by EmbedDecoder (z=2), and the accuracy is decreasing with the number of dimensions in the latent variable. The analysis for each label shows that the accuracy of contradiction and neutral labels is quite stable, while the accuracy of the entailment examples drops significantly with latent dimensionality. One reason for this is that the hypothesis space of the entailment label is smaller than the spaces of other two labels. Thus, when the dimensionality is higher, more creative examples are generated, and these examples less often comply with the entailment label.
Since none of the generated datasets' accuracies is as high as the accuracy of the OrigClass on the original test set, we used OrigClass to filter the datasets subject to various prediction thresholds. The examples from the generated dataset were classified by OrigClass and if the probability of the label of the example exceeded the threshold INLINEFORM0 , then the example was retained.
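A sketch of this threshold filtering is given below; predict_proba is a hypothetical stand-in for OrigClass and is simulated here with random numbers.

```python
import random

# Sketch of threshold filtering with the original classifier (OrigClass).
# `predict_proba` stands in for the probability OrigClass assigns to the
# example's own label; it is simulated with random numbers.
random.seed(0)

def predict_proba(example):          # stand-in for OrigClass
    return random.random()

def filter_dataset(generated, threshold):
    return [ex for ex in generated if predict_proba(ex) >= threshold]

generated = [{"premise": "p", "hypothesis": f"h{i}", "label": "entailment"}
             for i in range(1000)]
for thr in (0.3, 0.6, 0.9):
    print(thr, len(filter_dataset(generated, thr)))
```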
For each filtered dataset a classifier was trained. Figure FIGREF38 shows the accuracies of these classifiers on the original test set. Filtering out the examples that have incorrect labels (according to the OrigClass) improves the accuracy of the classifier. However, if the threshold is set too high, the accuracy drops, since the dataset contains examples that are too trivial. Figure FIGREF38 , which represents the accuracy of classifiers on their corresponding generated development sets, further shows the trade-off between the accuracy and triviality of the examples. The classifiers trained on datasets with low latent dimension or high filtering threshold have higher accuracies. Notice that the training dataset and test dataset were generated by the same generative model.
The unfiltered datasets have been evaluated with five other metrics besides classification accuracy. The results are presented in Figure FIGREF41 . The whole figure shows the effect of latent dimensionality of the models on different metrics. The main purpose of the figure is not show absolute values for each of the metrics, but to compare the metrics' curves to the curve of our main metric, the accuracy of the classifier.
The first metric – Premise-Hypothesis Distance – represents the average Jaccard distance between the premise and the generated hypothesis. Datasets generated with low latent dimensions have hypotheses more similar to premises, which indicates that the generated hypotheses are more trivial and less diverse than hypothesis generated with higher latent dimensions.
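A sketch of this metric over word sets is shown below; the example sentence pairs are illustrative.

```python
def jaccard_distance(a: str, b: str) -> float:
    """1 - |A ∩ B| / |A ∪ B| over the word sets of two sentences."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(sa & sb) / len(sa | sb)

pairs = [
    ("A man drives a red car", "A man drives a car"),
    ("A man drives a red car", "A person is sitting in a red vehicle"),
]
avg = sum(jaccard_distance(p, h) for p, h in pairs) / len(pairs)
print(round(avg, 3))   # average premise-hypothesis distance over the toy pairs
```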
We also evaluated the models with standard language generation metrics ROUGE-L and METEOR. The metrics are negatively correlated with the accuracy of the classifier. We believe this is because the two metrics reward hypotheses that are similar to their reference (original) hypothesis. However, the classifier is better if trained on more diverse hypotheses.
The next metric is the log-likelihood of hypotheses in the development set. This metric is the negative of the training loss function. The log-likelihood improves with dimensionality since it is easier to fit the hypotheses in the training step having more dimensions. Consequently, the hypothesis in the generating step are more confident – they have lower log-likelihood.
The last metric – discriminative error rate – is calculated with the discriminative model. The model is trained on the hypotheses from the unfiltered generated dataset on one side and the original hypotheses on the other side. Error rate is calculated on the (generated and original) development sets. Higher error rate indicates that it is more difficult for discriminative model to distinguish between the generated and the original hypotheses, which suggests that the original generating distribution and the distribution of the generative model are more similar. The discriminative model detects that low dimensional generative models generate more trivial examples as also indicated by the distance between premise and hypotheses. On the other hand, it also detects the hypotheses of high dimensional models, which more frequently contain grammatic or semantic errors.
There is a positive correlation between the discriminative error rate and the accuracy of the classifier. This observation led us to an experiment where the generated dataset was filtered according to the prediction probability of the discriminative model. Two disjoint filtered datasets were created: one with hypotheses that had a high probability of coming from the original distribution and the other one with a low probability. However, the accuracies of classifiers trained on these datasets were very similar to the accuracy of the classifier on the unfiltered dataset. A similar test was also done with the log-likelihood metric. The examples with higher log-likelihood had similar performance to the ones with lower log-likelihood. This also led us to set the size of the beam to 1. Also, the run time of generating a hypothesis is INLINEFORM0 , where INLINEFORM1 is the beam size. Thus, with lower beam sizes many more hypotheses can be generated.
To accept the hypothesis from Section SECREF1 we have shown that a quality dataset requires accurate examples by showing that filtering the dataset with the original classifier improves the performance (Figure FIGREF38 ). Next, we have shown that non-trivial examples are also required. If the filtering threshold is set too high, these examples are excluded, and the accuracy drops. Also, the more trivial examples are produced by low-dimensional models, which is indicated by lower premise-hypothesis distances, and lower discriminative error rate (Figure FIGREF41 ). Finally, a quality dataset requires more comprehensible examples. The high dimensional models produce less comprehensible hypotheses. They are detected by the discriminative model (see discriminator error rate in Figure FIGREF41 ).
Other models
We also compared AttEmbedDecoder model to all other models. Table TABREF43 presents the results. For all the models the latent dimension INLINEFORM0 is set to 8, as it was previously shown to be one of the best dimensions.
For all the models the number of total parameters is relatively high, however only a portion of parameters get updated each time. The AttEmbedDecoder model was the best model according to our main metric – the accuracy of the classifier trained on the generated dataset.
The hidden dimension INLINEFORM0 of the BaseEmbedDecoder was selected so that the model was comparable to AttEmbedDecoder in terms of the number of parameters INLINEFORM1 . The accuracies of classifiers generated by BaseEmbedDecoder are still lower than the accuracies of classifiers generated by AttEmbedDecoder, which shows that the attention mechanism helps the models.
Table TABREF44 shows the performance of the generated datasets compared to the original one. The best generated dataset was generated by AttEmbedDecoder. The accuracy of its classifier is only 2.7 % lower than the accuracy of the classifier trained on the original human-crafted dataset. The comparison of the best generated dataset to the original dataset shows that the datasets had only INLINEFORM0 of identical examples. The average length of the hypothesis was INLINEFORM1 and INLINEFORM2 in the original dataset and in the generated dataset, respectively. In another experiment the generated dataset and the original dataset were merged to train a new classifier. Thus, the merged dataset contained twice as many examples as the other datasets. The accuracy of this classifier was 82.0%, which is 0.8 % better than the classifier trained solely on the original training set. However, the lowest average loss is achieved by the classifier trained on the original dataset.
Qualitative evaluation
We also did a qualitative evaluation of the generated hypotheses. The hypotheses are mostly grammatically sound. Sometimes the models incorrectly use indefinite articles, for instance ”an phone”, or possessive pronouns, ”a man uses her umbrella”. These errors may be due to the fact that the system must learn the right indefinite article for every word separately. On the other hand, the models sometimes generate hypotheses that showcase more advanced grammatical patterns. For instance, the hypothesis ”The man and woman have a cake for their family” shows that the model can correctly use the plural in a non-trivial setting. Generative neural networks have a tendency to repeat words, which sometimes makes sentences meaningless, like ”A cup is drinking from a cup of coffee”, or even ungrammatical, like ”Several people in a car car”.
As shown previously, the larger the latent dimension, the more creative the generated hypotheses. However, with more creativity semantic errors emerge. Some hypotheses are correct, just unlikely to be written by a human, like ”A shirtless man is holding a guitar with a woman and a woman”. Others present improbable events, like ”The girls were sitting in the park watching tv”, or even impossible events, for instance ”The child is waiting for his wife”. This type of error arises because the models have not learned enough common-sense logic. Finally, there are hypotheses which make no sense. For instance, ”Two women with grassy beach has no tennis equipment”. On the other hand, the models are able to generate some non-trivial hypotheses. From the original premise ”A band performing with a girl singing and a guy next to her singing as well while playing the guitar”, the model has generated some hypotheses that do not contain concepts explicitly found in the premise. For instance, ”People are playing instruments” (entailment), ”The band was entirely silent” (contradiction), or ”The girl is playing at the concert” (neutral).
Regarding the compliance of the hypotheses with the label and premise, we observed that many generated hypotheses do not comply with the label; however, they would be a very good example with a different label. For instance, the generated hypotheses represent entailment instead of contradiction. This also explains why the accuracy of the generated dataset measured by the original classifier is low in Figure FIGREF37 . On the other hand, the models generate examples that are more ambiguous and not as clear as those in the original dataset. These examples are harder to classify even for a human. For instance, the relationship between the premise ”A kid hitting a baseball in a baseball field” and the hypothesis ”The baseball player is trying to get the ball” can be interpreted either as an entailment, if the verb get is interpreted as not to miss, or as a contradiction, if get is interpreted as possess. For a deeper insight into the generated hypotheses, more examples are presented in SECREF7 .
The gap between the discriminative error rates (disc-er) of EncoderDecoder models and EmbedDecoder models in Table TABREF43 is significant. To further investigate, the same experiment was performed again by a human evaluator and the discriminative model. This time on a sample of 200 examples. To recap, both the model and human were asked to select the generated hypothesis given a random original and generated hypothesis without knowing which one is which.
Human evaluation confirms that AttEmbedDecoder hypotheses are more difficult to separate from the original ones than the hypotheses of VaeEncoderDecoder. Table TABREF46 presents the results. The discriminative model discriminates better than the human evaluator. This may be due to the fact that the discriminative model has learned from a large training set, while the human was not shown any training examples. Human evaluation has shown that generated hypotheses are positively recognized if they contain a grammatical or semantic error. But even if the generated hypothesis does not contain these errors, it sometimes reveals itself by not being as sophisticated as the original example. On the other hand, the discriminative model does not always recognize these discrepancies. It relies more on the differences in distributions learned from a large training set. The true number of non-distinguishable examples may be even higher than indicated by the human discriminator error rate, since the human may have correctly guessed some of the examples he could not distinguish.
Conclusion
In this paper, we have proposed several generative neural networks for generating hypotheses using an NLI dataset. To evaluate these models we propose the accuracy of a classifier trained on the generated dataset as the main metric. The best model achieved INLINEFORM0 accuracy, which is only INLINEFORM1 less than the accuracy of the classifier trained on the original human-written dataset, while the best dataset combined with the original dataset achieved the highest accuracy. This model learns a decoder and a mapping embedding for each training example. It outperforms the more standard encoder-decoder networks. Although more parameters need to be trained, fewer are updated on each batch. We have also shown that the attention mechanism improves the model. The analysis has confirmed our hypothesis that a good dataset contains accurate, non-trivial and comprehensible examples. To further examine the quality of the generated hypotheses, they were compared against the original human-written hypotheses. The discriminative evaluation shows that in INLINEFORM2 of cases the human evaluator incorrectly distinguished between the original and the generated hypothesis. The discriminative model was actually better at distinguishing them. We have also compared the accuracy of the classifier to other metrics. The standard text generation metrics ROUGE and METEOR do not indicate whether a generated dataset is good for training a classifier.
To obtain higher accuracies with the generated datasets, they need to be filtered, because the generative models produce examples whose label is not always accurate. Thus, for future work we propose incorporating the classifier into the generative model, in a similar fashion to what was done on images by BIBREF46 . This network could also include the discriminative model to generate examples from a distribution that is more similar to the original training distribution. Finally, constructing a dataset requires a lot of intensive manual work that mainly consists of writing text with some creativity. To extend the original dataset, human users could just validate or correct the generated examples. On top of that, we would like to develop active learning methods to identify the incorrectly generated examples that would most improve the dataset if corrected.
Acknowledgements
This work was supported by the Slovenian Research Agency and the ICT Programme of the EC under XLike (ICT-STREP-288342) and XLime (FP7-ICT-611346).
More Examples
In this section more generated hypotheses are presented. Each example starts with the original example data. Then, several hypotheses generated from the original example with our best model are displayed. | 82.0% |
62c4c8b46982c3fcf5d7c78cd24113635e2d7010 | 62c4c8b46982c3fcf5d7c78cd24113635e2d7010_0 | Q: What is the size range of the datasets?
Text: Introduction
The challenge in Natural Language Inference (NLI), also known as Recognizing Textual Entailment (RTE), is to correctly decide whether a sentence (referred to as a premise) entails, contradicts, or is neutral with respect to another sentence (a hypothesis). This classification task requires various natural language comprehension skills. In this paper, we focus on the following natural language generation task based on NLI. Given the premise, the goal is to generate a stream of hypotheses that comply with the label (entailment, contradiction or neutral). In addition to reading capabilities, this task also requires language generation capabilities.
The Stanford Natural Language Inference (SNLI) Corpus BIBREF0 is an NLI dataset that contains over half a million examples. The size of the dataset is sufficient to train powerful neural networks. Several successful classification neural networks have already been proposed BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . In this paper, we utilize SNLI to train generative neural networks. Each example in the dataset consists of two human-written sentences, a premise and a hypothesis, and a corresponding label that describes the relationship between them. A few examples are presented in Table TABREF1 .
The proposed generative networks are trained to generate a hypothesis given a premise and a label, which allows us to construct new, unseen examples. Some generative models are built to generate a single optimal response given the input. Such models have been applied to machine translation BIBREF5 , image caption generation BIBREF6 , or dialogue systems BIBREF7 . Another type of generative model is the autoencoder, which generates a stream of random samples from the original distribution. For instance, autoencoders have been used to generate text BIBREF8 , BIBREF9 , and images BIBREF10 . In our setting, we combine both approaches to generate a stream of random responses (hypotheses) that comply with the input (premise, label).
But what is a good stream of hypotheses? We argue that a good stream contains diverse, comprehensible, accurate and non-trivial hypotheses. A hypothesis is comprehensible if it is grammatical and semantically makes sense. It is accurate if it clearly expresses the relationship (signified by the label) with the premise. Finally, it is non-trivial if it is not trivial to determine the relationship (label) between the hypothesis and premise. For instance, given a premise ”A man drives a red car” and label entailment, the hypothesis ”A man drives a car” is more trivial than ”A person is sitting in a red vehicle”.
The next question is how to automatically measure the quality of generated hypotheses. One way is to use metrics that are standard in text generation tasks, for instance ROUGE BIBREF11 , BLEU BIBREF12 , METEOR BIBREF13 . These metrics estimate the similarity between the generated text and the original reference text. In our task they can be used by comparing the generated and reference hypotheses with the same premise and label. The main issue of these metrics is that they penalize the diversity since they penalize the generated hypotheses that are dissimilar to the reference hypothesis. An alternative metric is to use a NLI classifier to test the generated hypothesis if the input label is correct in respect to the premise. A perfect classifier would not penalize diverse hypotheses and would reward accurate and (arguably to some degree) comprehensible hypotheses. However, it would not reward non-trivial hypotheses.
Non-trivial examples are essential in a dataset for training a capable machine learning model. Furthermore, we make the following hypothesis.
A good dataset for training a NLI classifier consists of a variety of accurate, non-trivial and comprehensible examples.
Based on this hypothesis, we propose the following approach for evaluation of generative models, which is also presented in Figure FIGREF2 . First, the generative model is trained on the original training dataset. Then, the premise and label from an example in the original dataset are taken as the input to the generative model to generate a new random hypothesis. The generated hypothesis is combined with the premise and the label to form a new unseen example. This is done for every example in the original dataset to construct a new dataset. Next, a classifier is trained on the new dataset. Finally, the classifier is evaluated on the original test set. The accuracy of the classifier is the proposed quality metric for the generative model. It can be compared to the accuracy of the classifier trained on the original training set and tested on the original test set.
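The evaluation procedure can be summarized by the following skeleton, where the four callables are hypothetical placeholders for the components described above rather than a concrete API:

```python
def evaluate_generative_model(train_set, test_set,
                              train_generator, generate_hypothesis,
                              train_classifier, evaluate):
    """All four callables are placeholders for the components described in the text."""
    # 1. Train the generative model on the original training set.
    generator = train_generator(train_set)

    # 2. Regenerate the dataset: keep each premise and label,
    #    replace the hypothesis with a freshly generated one.
    new_train_set = [
        (premise, generate_hypothesis(generator, premise, label), label)
        for premise, _, label in train_set
    ]

    # 3. Train a classifier on the generated dataset.
    classifier = train_classifier(new_train_set)

    # 4. The accuracy on the *original* test set is the quality metric.
    return evaluate(classifier, test_set)
```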
The generative models learn solely from the original training set to regenerate the dataset. Thus, the model learns the distribution of the original dataset. Furthermore, the generated dataset is just a random sample from the estimated distribution. To determine how well the generative model has learned the distribution, we observe how closely the accuracy of the classifier trained on the generated dataset approaches the accuracy of the classifier trained on the original dataset.
Our flagship generative network EmbedDecoder works in a similar fashion as the encoder-decoder networks, where the encoder is used to transform the input into a low-dimensional latent representation, from which the decoder reconstructs the input. The difference is that EmbedDecoder consists only of the decoder, and the latent representation is learned as an embedding for each training example separately. In our models, the latent representation represents the mapping between the premise and the label on one side and the hypothesis on the other side.
Our main contributions are i) a novel generative neural network, which consist of the decoder that learns a mapping embedding for each training example separately, ii) a procedure for generating NLI datasets automatically, iii) and a novel evaluation metric for NLI generative models – the accuracy of the classifier trained on the generated dataset.
In Section SECREF2 we present the related work. In Section SECREF3 the considered neural networks are presented. Besides the main generative networks, we also present classification and discriminative networks, which are used for evaluation. The results are presented in Section SECREF5 , where the generative models are evaluated and compared. From the experiments we can see that the best dataset was generated by the attention-based model EmbedDecoder. The classifier on this dataset achieved accuracy of INLINEFORM0 , which is INLINEFORM1 less than the accuracy achieved on the original dataset. We also investigate the influence of latent dimensionality on the performance, compare different evaluation metrics, and provide deeper insights of the generated datasets. The conclusion is presented in Section SECREF6 .
Related Work
NLI has been the focal point of Recognizing Textual Entailment (RTE) Challenges, where the goal is to determine if the premise entails the hypothesis or not. The proposed approaches for RTE include bag-of-words matching approach BIBREF14 , matching predicate argument structure approach BIBREF15 and logical inference approach BIBREF16 , BIBREF17 . Another rule-based inference approach was proposed by BIBREF18 . This approach allows generation of new hypotheses by transforming parse trees of the premise while maintaining entailment. BIBREF19 proposes an approach for constructing training datasets by extracting sentences from news articles that tend to be in an entailment relationship.
After the SNLI dataset was released, several neural network approaches for NLI classification emerged BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . The state-of-the-art model BIBREF4 achieves INLINEFORM0 accuracy on the SNLI dataset. A similar generation approach to ours was proposed by BIBREF20 . The goal of that work is generating entailment inference chains, where only examples with the entailment label are used.
Natural Language Generation (NLG) is the task of generating natural language from a structured form such as a knowledge base or logic form BIBREF21 , BIBREF22 , BIBREF23 . The input in our task is unstructured text (the premise) and a label. On the other side of this spectrum, there are tasks that deal solely with unstructured text, like machine translation BIBREF24 , BIBREF25 , BIBREF26 , summarization BIBREF27 , BIBREF28 and conversational dialogue systems BIBREF7 , BIBREF29 . Another recently popular task is generating captions from images BIBREF30 , BIBREF31 .
With the advancement of deep learning, many neural network approaches have been introduced for generating sequences. The Recurrent Neural Network Language Model (RNNLM) BIBREF32 is one of the simplest neural architectures for generating text. The approach was extended by BIBREF5 , which uses an encoder-decoder architecture to generate a sequence from the input sequence. The Hierarchical Recurrent Encoder-Decoder (HRED) architecture BIBREF7 generates sequences from several input sequences. These models offer very little variety in the output sequences; the variety they do offer is obtained by sampling from the output distribution of the language model. To introduce more variety, models based on the variational autoencoder (VAE) BIBREF33 have been proposed. These models use stochastic random variables as a source of variety. In BIBREF8 a latent variable is used to initialize the RNN that generates sentences, while the variational recurrent neural network (VRNN) BIBREF34 models the dependencies between latent variables across subsequent steps of the RNN. The Latent Variable Hierarchical Recurrent Encoder-Decoder (VHRED) BIBREF35 extends the HRED by incorporating latent variables, which are learned similarly to the VAE. The latent variables are, like in some of our models, used to represent the mappings between sequences. Conditional variational autoencoders (CVAEs) BIBREF36 were used to generate images from continuous visual attributes. These attributes are conditional information that is fed to the models, like the discrete label is in our models.
As recognized by BIBREF37 , the evaluation metrics of text-generating models fall into three categories: manual evaluation, automatic evaluation metrics, task-based evaluation. In evaluation based on human judgment each generated textual example is inspected manually. The automatic evaluation metrics, like ROUGE, BLEU and METEOR, compare human texts and generated texts. BIBREF38 shows METEOR has the strongest correlation with human judgments in image description evaluation. The last category is task-based evaluation, where the impact of the generated texts on a particular task is measured. This type of evaluation usually involves costly and lengthy human involvement, like measuring the effectiveness of smoking-cessation letters BIBREF39 . On the other hand, the task in our evaluation, the NLI classification, is automatic. In BIBREF40 ranking was used as an automatic task-based evaluation for associating images with captions.
Models
In this section, we present several neural networks used in the experiments. We start with variants of Recurrent Neural Networks, which are essential layers in all our models. Then, we present classification networks, which are needed in the evaluation of the generative neural networks presented in the following section. Next, we present how to use generative networks to generate hypotheses. Finally, we present discriminative networks, which are used for evaluation and analysis of the hypotheses.
The premise INLINEFORM0 and hypothesis INLINEFORM1 are represented with word embeddings INLINEFORM2 and INLINEFORM3 respectively. Each INLINEFORM4 is a INLINEFORM5 -dimensional vector that represents the corresponding word, INLINEFORM6 is the length of premise, and INLINEFORM7 is the length of hypothesis. The labels (entailment, contradiction, neutral) are represented by a 3-dimensional vector INLINEFORM8 if the label is the output of the model, or INLINEFORM9 if the label is the input to the model.
Recurrent Neural Networks
The Recurrent Neural Networks (RNNs) are neural networks suitable for processing sequences. They are the basic building block in all our networks. We use two variants of RNNs – Long short term memory (LSTM) network BIBREF41 and an attention-based extension of LSTM, the mLSTM BIBREF2 . The LSTM tends to learn long-term dependencies better than vanilla RNNs. The input to the LSTM is a sequence of vectors INLINEFORM0 , and the output is a sequence of vectors INLINEFORM1 . At each time point INLINEFORM2 , input gate INLINEFORM3 , forget gate INLINEFORM4 , output gate INLINEFORM5 , cell state INLINEFORM6 and one output vector INLINEFORM7 are calculated. DISPLAYFORM0
where INLINEFORM0 is a sigmoid function, INLINEFORM1 is the element-wise multiplication operator, INLINEFORM2 and INLINEFORM3 are parameter matrices, INLINEFORM4 parameter vectors, INLINEFORM5 is the input vector dimension, and INLINEFORM6 is the output vector dimension. The vectors INLINEFORM7 and INLINEFORM8 are set to zero in the standard setting, however, in some cases in our models, they are set to a value that is the result of previous layers.
The mLSTM is an attention-based model with two input sequences – premise and hypothesis in case of NLI. Each word of the premise is matched against each word of the hypothesis to find the soft alignment between the sentences. The mLSTM is based on LSTM in such a way that it remembers the important matches and forgets the less important. The input to the LSTM inside the mLSTM at each time step is INLINEFORM0 , where INLINEFORM1 is an attention vector that represents the weighted sum of premise sequence, where the weights present the degree to which each token of the premise is aligned with the INLINEFORM2 -th token of the hypothesis INLINEFORM3 , and INLINEFORM4 is the concatenation operator. More details about mLSTM are presented in BIBREF2 .
Classification model
The classification model predicts the label of the example given the premise and the hypothesis. We use the mLSTM-based model proposed by BIBREF2 .
The architecture of the model is presented in Figure FIGREF9 . The embeddings of the premise INLINEFORM0 and hypothesis INLINEFORM1 are the input to the first two LSTMs to obtain the hidden states of the premise INLINEFORM2 and hypothesis INLINEFORM3 . DISPLAYFORM0
All the hidden states in our models are INLINEFORM0 -dimensional unless otherwise noted. The hidden states INLINEFORM1 and INLINEFORM2 are the input to the mLSTM layer. The output of mLSTM are hidden states INLINEFORM3 , although only the last state INLINEFORM4 is further used. A fully connected layer transforms it into a 3-dimensional vector, on top of which softmax function is applied to obtain the probabilities INLINEFORM5 of labels. DISPLAYFORM0
where INLINEFORM0 represents the fully connected layer, whose output size is INLINEFORM1 .
Generative models
The goal of the proposed generative models, is to generate a diverse stream of hypotheses given the premise and the label. In this section, we present four variants of generative models, two variants of EmbedDecoder model presented in Figure FIGREF11 , and two variants of EncoderDecoder model presented in Figure FIGREF11 .
All models learn a latent representation INLINEFORM0 that represents the mapping between the premise and the label on one side, and the hypothesis on the other side. The EmbedDecoder models learn the latent representation by learning an embedding of the mapping for each training example separately. The embedding for the INLINEFORM1 -th training example INLINEFORM2 is a INLINEFORM3 -dimensional trainable parameter vector. Consequently, INLINEFORM4 is a parameter matrix of all embeddings, where INLINEFORM5 is the number of training examples. On the other hand, in the EncoderDecoder models the latent representation is the output of the encoder.
The EmbedDecoder models are trained to predict the next word of the hypothesis given the previous words of hypothesis, the premise, the label, and the latent representation of the example. DISPLAYFORM0
where INLINEFORM0 represent parameters other than INLINEFORM1 , and INLINEFORM2 is the length of the hypothesis INLINEFORM3 .
The AttEmbedDecoder, presented in Figure FIGREF26 , is attention based variant of EmbedDecoder. The same mLSTM layer is used as in classification model. However, the initial cell state INLINEFORM0 of mLSTM is constructed from the latent vector and the label input. DISPLAYFORM0
For the sake of simplifying the notation, we dropped the superscript INLINEFORM0 from the equations, except in INLINEFORM1 , where we explicitly want to state that the embedding vector is used.
The premise and the hypothesis are first processed by LSTM and then fed into the mLSTM, like in the classification model, however here the hypothesis is shifted. The first word of the hypothesis input is an empty token INLINEFORM0 null INLINEFORM1 , symbolizing the empty input sequence when predicting the first word. The output of the mLSTM is a hidden state INLINEFORM2 , where each INLINEFORM3 represents an output word. To obtain the probabilities for all the words in the vocabulary INLINEFORM4 for the position INLINEFORM5 in the output sequence, INLINEFORM6 is first transformed into a vocabulary-sized vector, then the softmax function is applied. DISPLAYFORM0
where V is the size of the vocabulary. But, due to the large size of the vocabulary, a two-level hierarchical softmax BIBREF42 was used instead of a regular softmax to reduce the number of parameters updated during each training step. DISPLAYFORM0
In the training step, the last output word INLINEFORM0 is set to INLINEFORM1 null INLINEFORM2 , while in the generating step, it is ignored.
In the EmbedDecoder model without attention, BaseEmbedDecoder, the mLSTM is replaced by a regular LSTM. The input to this LSTM is the shifted hypothesis. But, here the premise is provided through the initial cell state INLINEFORM0 . Specifically, last hidden state of the premise is merged with class input and the latent representation, then fed to the LSTM. DISPLAYFORM0
In order to not lose information INLINEFORM0 was picked to be equal to sum of the sizes of INLINEFORM1 , INLINEFORM2 and INLINEFORM3 . Thus, INLINEFORM4 . Since the size of INLINEFORM5 is INLINEFORM6 , the output vectors of the LSTM are also the size of INLINEFORM7 .
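A simplified PyTorch re-sketch of this decoder-only idea is given below. It is not the Keras/Theano implementation used in this work: it replaces the two-level hierarchical softmax with a plain output layer, and the dimensions are illustrative.

```python
import torch
import torch.nn as nn

class BaseEmbedDecoderSketch(nn.Module):
    """One trainable latent vector per training example, fed (together with the
    label and the final premise state) into the decoder's initial cell state."""

    def __init__(self, vocab_size, emb_dim, hidden_dim, z_dim, n_train, n_labels=3):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.latent = nn.Embedding(n_train, z_dim)           # Z: one row per training example
        self.premise_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        dec_dim = hidden_dim + n_labels + z_dim               # decoder size = d + 3 + z
        self.decoder = nn.LSTM(emb_dim, dec_dim, batch_first=True)
        self.out = nn.Linear(dec_dim, vocab_size)

    def forward(self, example_idx, premise_ids, label_onehot, shifted_hyp_ids):
        _, (h_prem, _) = self.premise_lstm(self.word_emb(premise_ids))
        z = self.latent(example_idx)                          # (batch, z_dim)
        c0 = torch.cat([h_prem[-1], label_onehot, z], dim=-1).unsqueeze(0)
        h0 = torch.zeros_like(c0)
        dec_out, _ = self.decoder(self.word_emb(shifted_hyp_ids), (h0, c0))
        return self.out(dec_out)                              # logits over the vocabulary
```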
We also present two variants of EncoderDecoder models: a regular one, BaseEncodeDecoder, and a regularized one, VarEncoderDecoder, which is based on the Variational Bayesian approach. As presented in Figure FIGREF11 , all the information (premise, hypothesis, label) is available to the encoder, whose output is the latent representation INLINEFORM0 . On the other hand, the decoder is provided with the same premise and label, but the hypothesis is shifted. This forces the encoder to learn to encode only the missing information – the mapping between the premise-label pair and the hypothesis. The encoder has a similar structure to the classification model in Figure FIGREF9 , except that the label is connected to the initial cell state of the mLSTM DISPLAYFORM0
and the output of mLSTM INLINEFORM0 is transformed into latent representation INLINEFORM1 DISPLAYFORM0
The decoder is the same as in EmbedDecoder.
The VarEncoderDecoder model is based on the Variational Autoencoder from BIBREF33 . Instead of using single points for the latent representation as in all previous models, the latent representation in VarEncoderDecoder is represented as a continuous variable INLINEFORM0 . Thus, the mappings are represented as soft elliptical regions in the latent space, instead of single points, which forces the model to fill up the latent space BIBREF8 . Both INLINEFORM1 and INLINEFORM2 are calculated from the output of the encoder using two different fully connected layers. INLINEFORM3
To sample from the distribution the reparametrization trick is applied DISPLAYFORM0
When training, a single sample is generated per example to generate INLINEFORM0 .
As in BIBREF33 , the following regularization term is added to the loss function DISPLAYFORM0
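A minimal sketch of the reparametrization trick and the added regularization term, assuming the encoder outputs a mean and a log-variance (the exact parametrization in the elided equations may differ):

```python
import torch

def reparameterize(mu, log_var):
    """z = mu + sigma * eps, with eps ~ N(0, I); keeps sampling differentiable."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

def kl_regularizer(mu, log_var):
    """KL(N(mu, sigma^2) || N(0, I)), summed over latent dimensions and averaged
    over the batch -- the standard VAE term added to the training loss."""
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=-1)
    return kl.mean()
```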
Generating hypotheses
In the generation phase only decoder of a trained generative model is used. It generates a hypothesis given the premise, label, and a randomly selected latent vector INLINEFORM0 . A single word is generated in each step, and it becomes the hypothesis input in the next step. DISPLAYFORM0
We also used beam search to optimize hypothesis generation. Similarly as in BIBREF5 , a small number of hypotheses are generated given a single input, then the best is selected. In INLINEFORM0 -beam search, in each time step INLINEFORM1 best partial hypotheses are expanded by all the words in the vocabulary producing INLINEFORM2 partial hypothesis. Out of these INLINEFORM3 best partial hypotheses are selected for the next step according to the joint probability of each partial hypothesis. Thus, when INLINEFORM4 is 1, the procedure is the same as the one presented in Eq EQREF24 . The generation ends when INLINEFORM5 null INLINEFORM6 symbol is encountered or maximum hypothesis length is reached. The random latent vector INLINEFORM10 is selected randomly from a normal distribution INLINEFORM11 , where INLINEFORM12 is the standard deviation of INLINEFORM13 .
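A generic sketch of this k-beam search is shown below; `step_log_probs` is a hypothetical stand-in for the decoder's next-word distribution, conditioned on the fixed premise, label, and latent vector, which are omitted for brevity. With a beam size of 1 it reduces to greedy decoding.

```python
def beam_search(step_log_probs, beam_size, max_len, null_token="<null>"):
    """Generic k-beam search over a next-word model.

    `step_log_probs(prefix)` must return a list of (word, log_prob) pairs for the
    next position given the words generated so far.
    """
    beams = [([], 0.0)]                            # (partial hypothesis, joint log-prob)
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            if prefix and prefix[-1] == null_token:
                candidates.append((prefix, score))  # already finished, keep as-is
                continue
            for word, lp in step_log_probs(prefix):
                candidates.append((prefix + [word], score + lp))
        beams = sorted(candidates, key=lambda x: x[1], reverse=True)[:beam_size]
        if all(p and p[-1] == null_token for p, _ in beams):
            break
    return beams[0][0]
```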
Discriminative model
The discriminative model is used to measure the distinguishability between the original human-written sentences and the generated ones. A higher error rate of the model means that the generative distribution is similar to the original distribution, which is one of the goals of the generative model. The model is based on Generative Adversarial Nets BIBREF10 , where in a single network the generative part tries to trick the discriminative part by generating images that are similar to the original images, and the discriminative part tries to distinguish between the original and generated images. Due to the discreteness of words (the output of our generative model), it is difficult to connect both the discriminative and generative parts in a single differentiable network, thus we construct them separately. The generative models have already been defined in Section SECREF10 . Here we define the discriminative model.
The discriminative model INLINEFORM0 takes a sequence INLINEFORM1 and processes it with an LSTM and a fully connected layer DISPLAYFORM0
In the training step, one original sequence INLINEFORM0 and one generated sequence INLINEFORM1 are processed by the discriminative model. The optimization function maximizes the following objective DISPLAYFORM0
In the testing step, the discriminative model predicts correctly if DISPLAYFORM0
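A sketch of the training loss and the test-time decision; since the decision rule itself is in an elided equation, the assumption here is that a pair counts as correctly discriminated when the original sequence receives the higher score:

```python
import torch

def discriminator_loss(d_original, d_generated):
    """Negative of the objective log D(x_orig) + log(1 - D(x_gen)),
    where d_* are the discriminator's probabilities that a sequence is original."""
    eps = 1e-8
    return -(torch.log(d_original + eps) + torch.log(1.0 - d_generated + eps)).mean()

def discriminator_correct(d_original, d_generated):
    """Assumed test-time rule used for the error rate: correct when the original
    sentence of the pair receives the higher score."""
    return d_original > d_generated
```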
Dataset Generation
To construct a new dataset, first a generative model is trained on the training set of the original dataset. Then, a new dataset is constructed by generating new hypotheses with the generative model. The premises and labels from the examples of the original dataset are taken as the input for the generative model. The new hypotheses replace the training hypotheses in the new dataset.
Next, the classifier, presented in Section SECREF6 , is trained on the generated dataset. The accuracy of the new classifier is the main metric for evaluating the quality of the generated dataset.
Experiment details
All the experiments are performed on the SNLI dataset. There are 549,367 examples in the dataset, divided into training, development and test sets. Both the development and test sets contain around 10,000 examples. Some examples are labeled with '-', which means there was not enough consensus on them. These examples are excluded. Also, to speed up the computation we excluded examples whose premise is longer than 25 words or whose hypothesis is longer than 15 words. There were still INLINEFORM0 remaining examples. Both premises and hypotheses were padded with INLINEFORM1 null INLINEFORM2 symbols (empty words), so that all premises consisted of 25 words, and all hypotheses consisted of 15 tokens.
We use 50-dimensional word vectors trained with GloVe BIBREF43 . For words without pretrained embeddings, the embeddings are randomly selected from the normal distribution. Word embeddings are not updated during training.
For optimization Adam method BIBREF44 was used with suggested hyperparameters.
Classification models are trained until the loss on the validation set does not improve for three epochs. The model with best validation loss is retained.
Generative models are trained for 20 epochs, since it turned out that none of the stopping criteria were useful. With each generative model a new dataset is created. The new dataset consists of training set, which is generated using examples from the original training set, and a development set, which is generated from the original development set. The beam size for beam search was set to 1. The details of the decision are presented in Section SECREF35 .
Some datasets were constructed by filtering the generated datasets according to various thresholds. Thus, the generated datasets were constructed to contain enough examples, so that the filtered datasets had at least as many examples as the original dataset. In the end, all the datasets were trimmed down to the size of the original dataset by selecting the samples sequentially from the beginning until the dataset had the right size. Also, the datasets were filtered so that each of the labels was represented equally. All the models, including classification and discriminative models, were trained with hidden dimension INLINEFORM0 set to 150, unless otherwise noted.
Our implementation is accessible at http://github.com/jstarc/nli_generation. It is based on libraries Keras and Theano BIBREF45 .
Results
First, the classification model OrigClass was trained on the original dataset. This model was then used throughout the experiments for filtering the datasets, comparison, etc. Notice that we have assumed OrigClass to be ground truth for the purpose of our experiments. However, the accuracy of this model on the original test set was INLINEFORM0 , which is less than INLINEFORM1 , which was attained by mLSTM (d=150) model in BIBREF2 . Both models are very similar, including the experimental settings, however ours was trained and evaluated on a slightly smaller dataset.
Preliminary evaluation
Several AttEmbedDecoder models with various latent dimensions INLINEFORM0 were first trained and then used to generate new datasets. A couple of generated examples are presented in Table TABREF36 .
Figure FIGREF37 shows the accuracies of the generated development datasets evaluated by the OrigClass. The maximum accuracy of INLINEFORM0 was achieved by EmbedDecoder (z=2), and the accuracy is decreasing with the number of dimensions in the latent variable. The analysis for each label shows that the accuracy of contradiction and neutral labels is quite stable, while the accuracy of the entailment examples drops significantly with latent dimensionality. One reason for this is that the hypothesis space of the entailment label is smaller than the spaces of other two labels. Thus, when the dimensionality is higher, more creative examples are generated, and these examples less often comply with the entailment label.
Since none of the generated datasets' accuracies is as high as the accuracy of the OrigClass on the original test set, we used OrigClass to filter the datasets subject to various prediction thresholds. The examples from the generated dataset were classified by OrigClass and if the probability of the label of the example exceeded the threshold INLINEFORM0 , then the example was retained.
For each filtered dataset a classifier was trained. Figure FIGREF38 shows the accuracies of these classifiers on the original test set. Filtering out the examples that have incorrect labels (according to the OrigClass) improves the accuracy of the classifier. However, if the threshold is set too high, the accuracy drops, since the dataset contains examples that are too trivial. Figure FIGREF38 , which represents the accuracy of classifiers on their corresponding generated development sets, further shows the trade-off between the accuracy and triviality of the examples. The classifiers trained on datasets with low latent dimension or high filtering threshold have higher accuracies. Notice that the training dataset and test dataset were generated by the same generative model.
The unfiltered datasets have been evaluated with five other metrics besides classification accuracy. The results are presented in Figure FIGREF41 . The whole figure shows the effect of the latent dimensionality of the models on different metrics. The main purpose of the figure is not to show absolute values for each of the metrics, but to compare the metrics' curves to the curve of our main metric, the accuracy of the classifier.
The first metric – Premise-Hypothesis Distance – represents the average Jaccard distance between the premise and the generated hypothesis. Datasets generated with low latent dimensions have hypotheses more similar to the premises, which indicates that the generated hypotheses are more trivial and less diverse than hypotheses generated with higher latent dimensions.
We also evaluated the models with standard language generation metrics ROUGE-L and METEOR. The metrics are negatively correlated with the accuracy of the classifier. We believe this is because the two metrics reward hypotheses that are similar to their reference (original) hypothesis. However, the classifier is better if trained on more diverse hypotheses.
The next metric is the log-likelihood of hypotheses in the development set. This metric is the negative of the training loss function. The log-likelihood improves with dimensionality since it is easier to fit the hypotheses in the training step when more dimensions are available. Consequently, the hypotheses in the generating step are more confident – they have lower log-likelihood.
The last metric – discriminative error rate – is calculated with the discriminative model. The model is trained on the hypotheses from the unfiltered generated dataset on one side and the original hypotheses on the other side. The error rate is calculated on the (generated and original) development sets. A higher error rate indicates that it is more difficult for the discriminative model to distinguish between the generated and the original hypotheses, which suggests that the original generating distribution and the distribution of the generative model are more similar. The discriminative model detects that low-dimensional generative models generate more trivial examples, as also indicated by the distance between premise and hypotheses. On the other hand, it also detects the hypotheses of high-dimensional models, which more frequently contain grammatical or semantic errors.
There is a positive correlation between the discriminative error rate and the accuracy of the classifier. This observation led us to an experiment where the generated dataset was filtered according to the prediction probability of the discriminative model. Two disjoint filtered datasets were created: one with hypotheses that had a high probability of coming from the original distribution and the other one with a low probability. However, the accuracies of classifiers trained on these datasets were very similar to the accuracy of the classifier on the unfiltered dataset. A similar test was also done with the log-likelihood metric. The examples with higher log-likelihood had similar performance to the ones with lower log-likelihood. This also led us to set the size of the beam to 1. Also, the run time of generating a hypothesis is INLINEFORM0 , where INLINEFORM1 is the beam size. Thus, with lower beam sizes many more hypotheses can be generated.
To accept the hypothesis from Section SECREF1 we have shown that a quality dataset requires accurate examples by showing that filtering the dataset with the original classifier improves the performance (Figure FIGREF38 ). Next, we have shown that non-trivial examples are also required. If the filtering threshold is set too high, these examples are excluded, and the accuracy drops. Also, the more trivial examples are produced by low-dimensional models, which is indicated by lower premise-hypothesis distances, and lower discriminative error rate (Figure FIGREF41 ). Finally, a quality dataset requires more comprehensible examples. The high dimensional models produce less comprehensible hypotheses. They are detected by the discriminative model (see discriminator error rate in Figure FIGREF41 ).
Other models
We also compared AttEmbedDecoder model to all other models. Table TABREF43 presents the results. For all the models the latent dimension INLINEFORM0 is set to 8, as it was previously shown to be one of the best dimensions.
For all the models the number of total parameters is relatively high, however only a portion of parameters get updated each time. The AttEmbedDecoder model was the best model according to our main metric – the accuracy of the classifier trained on the generated dataset.
The hidden dimension INLINEFORM0 of the BaseEmbedDecoder was selected so that the model was comparable to AttEmbedDecoder in terms of the number of parameters INLINEFORM1 . The accuracies of classifiers generated by BaseEmbedDecoder are still lower than the accuracies of classifiers generated by AttEmbedDecoder, which shows that the attention mechanism helps the models.
Table TABREF44 shows the performance of the generated datasets compared to the original one. The best generated dataset was generated by AttEmbedDecoder. The accuracy of its classifier is only 2.7 % lower than the accuracy of the classifier trained on the original human-crafted dataset. The comparison of the best generated dataset to the original dataset shows that the datasets had only INLINEFORM0 of identical examples. The average length of the hypothesis was INLINEFORM1 and INLINEFORM2 in the original dataset and in the generated dataset, respectively. In another experiment the generated dataset and the original dataset were merged to train a new classifier. Thus, the merged dataset contained twice as many examples as the other datasets. The accuracy of this classifier was 82.0%, which is 0.8 % better than the classifier trained solely on the original training set. However, the lowest average loss is achieved by the classifier trained on the original dataset.
Qualitative evaluation
We also did a qualitative evaluation of the generated hypotheses. The hypotheses are mostly grammatically sound. Sometimes the models incorrectly use indefinite articles, for instance ”an phone”, or possessive pronouns, ”a man uses her umbrella”. These errors may be due to the fact that the system must learn the right indefinite article for every word separately. On the other hand, the models sometimes generate hypotheses that showcase more advanced grammatical patterns. For instance, the hypothesis ”The man and woman have a cake for their family” shows that the model can correctly use the plural in a non-trivial setting. Generative neural networks have a tendency to repeat words, which sometimes makes sentences meaningless, like ”A cup is drinking from a cup of coffee”, or even ungrammatical, like ”Several people in a car car”.
As shown previously, the larger the latent dimension, the more creative the generated hypotheses. However, with more creativity semantic errors emerge. Some hypotheses are correct, just unlikely to be written by a human, like ”A shirtless man is holding a guitar with a woman and a woman”. Others present improbable events, like ”The girls were sitting in the park watching tv”, or even impossible events, for instance ”The child is waiting for his wife”. This type of error arises because the models have not learned enough common-sense logic. Finally, there are hypotheses which make no sense. For instance, ”Two women with grassy beach has no tennis equipment”. On the other hand, the models are able to generate some non-trivial hypotheses. From the original premise ”A band performing with a girl singing and a guy next to her singing as well while playing the guitar”, the model has generated some hypotheses that do not contain concepts explicitly found in the premise. For instance, ”People are playing instruments” (entailment), ”The band was entirely silent” (contradiction), or ”The girl is playing at the concert” (neutral).
Regarding the compliance of the hypotheses with the label and premise, we observed that many generated hypotheses do not comply with the label; however, they would be a very good example with a different label. For instance, the generated hypotheses represent entailment instead of contradiction. This also explains why the accuracy of the generated dataset measured by the original classifier is low in Figure FIGREF37 . On the other hand, the models generate examples that are more ambiguous and not as clear as those in the original dataset. These examples are harder to classify even for a human. For instance, the relationship between the premise ”A kid hitting a baseball in a baseball field” and the hypothesis ”The baseball player is trying to get the ball” can be interpreted either as an entailment, if the verb get is interpreted as not to miss, or as a contradiction, if get is interpreted as possess. For a deeper insight into the generated hypotheses, more examples are presented in SECREF7 .
The gap between the discriminative error rates (disc-er) of EncoderDecoder models and EmbedDecoder models in Table TABREF43 is significant. To further investigate, the same experiment was performed again by a human evaluator and the discriminative model. This time on a sample of 200 examples. To recap, both the model and human were asked to select the generated hypothesis given a random original and generated hypothesis without knowing which one is which.
Human evaluation confirms that AttEmbedDecoder hypotheses are more difficult to separate from the original ones than the hypotheses of VaeEncoderDecoder. Table TABREF46 presents the results. The discriminative model discriminates better than the human evaluator. This may be due to the fact that the discriminative model has learned from a large training set, while the human was not shown any training examples. Human evaluation has shown that generated hypotheses are positively recognized if they contain a grammatical or semantic error. But even if the generated hypothesis does not contain these errors, it sometimes reveals itself by not being as sophisticated as the original example. On the other hand, the discriminative model does not always recognize these discrepancies. It relies more on the differences in distributions learned from a large training set. The true number of non-distinguishable examples may be even higher than indicated by the human discriminator error rate, since the human may have correctly guessed some of the examples he could not distinguish.
Conclusion
In this paper, we have proposed several generative neural networks for generating hypotheses using an NLI dataset. To evaluate these models we propose the accuracy of a classifier trained on the generated dataset as the main metric. The best model achieved INLINEFORM0 accuracy, which is only INLINEFORM1 less than the accuracy of the classifier trained on the original human-written dataset, while the best dataset combined with the original dataset achieved the highest accuracy. This model learns a decoder and a mapping embedding for each training example. It outperforms the more standard encoder-decoder networks. Although more parameters need to be trained, fewer are updated on each batch. We have also shown that the attention mechanism improves the model. The analysis has confirmed our hypothesis that a good dataset contains accurate, non-trivial and comprehensible examples. To further examine the quality of the generated hypotheses, they were compared against the original human-written hypotheses. The discriminative evaluation shows that in INLINEFORM2 of cases the human evaluator incorrectly distinguished between the original and the generated hypothesis. The discriminative model was actually better at distinguishing them. We have also compared the accuracy of the classifier to other metrics. The standard text generation metrics ROUGE and METEOR do not indicate whether a generated dataset is good for training a classifier.
To obtain higher accuracies with the generated datasets, they need to be filtered, because the generative models produce examples whose label is not always accurate. Thus, for future work we propose incorporating the classifier into the generative model, in a similar fashion to what was done on images by BIBREF46 . This network could also include the discriminative model to generate examples from a distribution that is more similar to the original training distribution. Finally, constructing a dataset requires a lot of intensive manual work that mainly consists of writing text with some creativity. To extend the original dataset, human users could just validate or correct the generated examples. On top of that, we would like to develop active learning methods to identify the incorrectly generated examples that would most improve the dataset if corrected.
Acknowledgements
This work was supported by the Slovenian Research Agency and the ICT Programme of the EC under XLike (ICT-STREP-288342) and XLime (FP7-ICT-611346).
More Examples
In this section more generated hypotheses are presented. Each example starts with the original example data. Then, several hypotheses generated from the original example with our best model are displayed. | Unanswerable |
e9cfe3f15735e2b0d5c59a54c9940ed1d00401a2 | e9cfe3f15735e2b0d5c59a54c9940ed1d00401a2_0 | Q: Does the paper report F1-scores for the age and language variety tasks?
Text: Introduction
The proliferation of social media has made it possible to collect user data in unprecedented ways. These data can come in the form of usage and behavior (e.g., who likes what on Facebook), network (e.g., who follows a given user on Instagram), and content (e.g., what people post to Twitter). The availability of such data has made it possible to make discoveries about individuals and communities, mobilizing social and psychological research and employing natural language processing methods. In this work, we focus on predicting social media user age, dialect, and gender based on posted language. More specifically, we use a total of 100 tweets from each manually-labeled user to predict each of these attributes. Our dataset comes from the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focus on building simple models using pre-trained bidirectional encoders from transformers (BERT) BIBREF1 under various data conditions. Our results show (1) the utility of augmenting training data, and (2) the benefit of using majority votes from our simple classifiers.
In the rest of the paper, we introduce the dataset, followed by our experimental conditions and results. We then provide a literature review and conclude.
Data
For the purpose of our experiments, we use data released by the APDA shared task organizers. The dataset is divided into train and test sets by the organizers. The training set is distributed with labels for the three tasks of age, dialect, and gender. Following the standard shared task setup, the test set is distributed without labels and participants are expected to submit their predictions on test. The shared task predictions are expected by the organizers at the level of users. The distribution has 100 tweets for each user, and so each tweet is distributed with a corresponding user id. As such, in total, the distributed training data has 2,250 users, contributing a total of 225,000 tweets. The official task test set contains 72,000 tweets posted by 720 users. For our experiments, we split the training data released by the organizers into a 90% TRAIN set (202,500 tweets from 2,025 users) and a 10% DEV set (22,500 tweets from 225 users). The age task labels come from the tagset {under-25, between-25 and 34, above-35}. For dialects, the data are labeled with 15 classes, from the set {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. The gender task involves binary labels from the set {male, female}.
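The text does not state how the 90/10 split was drawn; a simple user-level random split such as the following sketch would keep all 100 tweets of a user on the same side:

```python
import random

def split_by_user(tweets, dev_fraction=0.1, seed=0):
    """Split labeled tweets into TRAIN/DEV at the user level.
    `tweets` is assumed to be a list of dicts with at least a 'user_id' key."""
    users = sorted({t["user_id"] for t in tweets})
    random.Random(seed).shuffle(users)
    dev_users = set(users[: int(len(users) * dev_fraction)])
    train = [t for t in tweets if t["user_id"] not in dev_users]
    dev = [t for t in tweets if t["user_id"] in dev_users]
    return train, dev
```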
Experiments
As explained earlier, the shared task is set up at the user level, where the age, dialect, and gender of each user are the required predictions. In our experiments, we first model the task at the tweet level and then port these predictions to the user level. For our core modelling, we fine-tune BERT on the shared task data. We also introduce an additional in-house dataset labeled with dialect and gender tags to the task, as we will explain below. As a baseline, we use a small gated recurrent units (GRU) model. We now introduce our tweet-level models.
Experiments ::: Tweet-Level Models ::: Baseline GRU.
Our baseline is a GRU network for each of the three tasks. We use the same network architecture across the 3 tasks. Each network contains a single unidirectional GRU layer with 500 units and an output linear layer. The network is trained end-to-end. Our input embedding layer is initialized with a standard normal distribution, with $\mu =0$ and $\sigma =1$, i.e., $W \sim N(0,1)$. We use a maximum sequence length of 50 tokens, and choose an arbitrary vocabulary size of 100,000 types, where we use the 100,000 most frequent words in TRAIN. To avoid over-fitting, we use dropout BIBREF2 with a rate of 0.5 on the hidden layer. For training, we use the Adam BIBREF3 optimizer with a fixed learning rate of $1e-3$. We employ batch training with a batch size of 32 for this model. We train the network for 15 epochs and save the model at the end of each epoch, choosing the model that achieves the highest accuracy on DEV as our best model. We present our best results on DEV in Table TABREF7. We report all our results using accuracy. Our best model obtains 42.48% for age, 37.50% for dialect, and 57.81% for gender. All models obtain their best results after 2 epochs.
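A PyTorch sketch of this baseline follows; the embedding dimensionality is not stated in the text, so the value below is an assumption, and the number of output classes is 3, 15, or 2 depending on the task:

```python
import torch
import torch.nn as nn

class GRUBaseline(nn.Module):
    def __init__(self, vocab_size=100_000, emb_dim=300, hidden=500, n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        nn.init.normal_(self.emb.weight, mean=0.0, std=1.0)   # W ~ N(0, 1)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)  # single unidirectional layer
        self.drop = nn.Dropout(0.5)                            # dropout on the hidden layer
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):                              # (batch, <=50) padded token ids
        _, h = self.gru(self.emb(token_ids))
        return self.out(self.drop(h[-1]))                      # logits per class

model = GRUBaseline()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)      # fixed learning rate
```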
Experiments ::: Tweet-Level Models ::: BERT.
For each task, we fine-tune the BERT-Base Multilingual Cased model released by the authors BIBREF1. The model was pre-trained on the Wikipedias of 104 languages (including Arabic) with 12 layers, 768 hidden units each, and 12 attention heads, for a total of 110M parameters. The vocabulary of the model consists of 119,547 shared WordPieces. We fine-tune the model with a maximum sequence length of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 15 epochs. We use the same network architecture and parameters across the 3 tasks. As Table TABREF7 shows, compared with the GRU, BERT is 3.16% better for age, 4.85% better for dialect, and 2.45% better for gender.
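For illustration, the sketch below shows how a comparable fine-tuning setup could be written with the Hugging Face transformers library; this is a stand-in rather than the authors' implementation (the original BERT release used Google's TensorFlow code), and the number of labels, helper names, and batching are assumptions.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=15)      # e.g., 15 dialect classes

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def encode(tweets, labels):
    """Tokenize one batch of tweets, truncating/padding to 50 WordPieces."""
    batch = tokenizer(tweets, truncation=True, max_length=50,
                      padding="max_length", return_tensors="pt")
    batch["labels"] = torch.tensor(labels)
    return batch

def train_step(tweets, labels):                          # one batch of 32 tweets
    model.train()
    optimizer.zero_grad()
    out = model(**encode(tweets, labels))                # recent versions return an object with .loss
    out.loss.backward()
    optimizer.step()
    return out.loss.item()
```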
Experiments ::: Tweet-Level Models ::: Data Augmentation.
To further improve the performance of our models, we introduce in-house labeled data that we use to fine-tune BERT. For the gender classification task, we manually label an in-house dataset of 1,100 users with gender tags, including 550 female and 550 male users. We obtain 162,829 tweets by crawling the timelines of these 1,100 users. We combine this new gender dataset with the gender TRAIN data (from the shared task) to obtain an extended dataset, to which we refer as EXTENDED_Gender. For the dialect identification task, we randomly sample 20,000 tweets for each class from an in-house dataset gold-labeled with the same 15 classes as the shared task. In this way, we obtain 298,929 tweets (Sudan only has 18,929 tweets). We combine this new dialect data with the shared task dialect TRAIN data to form EXTENDED_Dialect. For both the dialect and gender tasks, we fine-tune BERT on EXTENDED_Dialect and EXTENDED_Gender independently and report performance on DEV. We refer to this iteration of experiments as BERT_EXT. As Table TABREF7 shows, BERT_EXT is 2.18% better than BERT for dialect and 0.75% better than BERT for gender.
Experiments ::: User-Level Models
Our aforementioned models make predictions at the tweet level, rather than directly detecting the labels of a user. Hence, we follow the work of Zhang & Abdul-Mageed BIBREF4 to identify user-level labels. For each of the three tasks, we use tweet-level predicted labels (and associated softmax values) as a proxy for user-level labels. For each predicted label, we use the softmax value as a threshold for including only the most confidently predicted tweets. Since in some cases softmax values can be low, we try all threshold values between 0.00 and 0.99 and take a softmax-based majority class as the user-level predicted label, tuning the threshold on our DEV set. Using this method, we acquire the following results at the user level: BERT models obtain an accuracy of 55.56% for age, 96.00% for dialect, and 80.00% for gender. BERT_EXT models achieve 95.56% accuracy for dialect and 84.00% accuracy for gender.
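The tweet-to-user aggregation described above can be sketched as follows; the data structures and the fallback for users with no tweet above the threshold are assumptions added for illustration.

```python
from collections import Counter, defaultdict

def user_level_labels(tweet_preds, threshold):
    """tweet_preds: list of (user_id, predicted_label, softmax_score).
    Keep only tweets whose softmax score reaches `threshold`, then take the
    per-user majority label (falling back to all of a user's tweets if none
    pass the threshold -- a fallback assumed here, not stated in the paper)."""
    all_labels, kept = defaultdict(list), defaultdict(list)
    for user_id, label, score in tweet_preds:
        all_labels[user_id].append(label)
        if score >= threshold:
            kept[user_id].append(label)
    return {u: Counter(kept[u] or all_labels[u]).most_common(1)[0][0]
            for u in all_labels}

def tune_threshold(dev_preds, dev_gold):
    """Try thresholds 0.00-0.99 in steps of 0.01 and keep the one with the
    best user-level accuracy on DEV; dev_gold maps user_id -> true label."""
    def accuracy(threshold):
        preds = user_level_labels(dev_preds, threshold)
        return sum(preds[u] == dev_gold[u] for u in dev_gold) / len(dev_gold)
    return max((t / 100 for t in range(100)), key=accuracy)
```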
Experiments ::: APDA@FIRE2019 submission
First submission. For the shared task submission, we use the predictions of BERT_EXT as our first submission for gender and dialect, but only BERT for age (since we have no BERT_EXT models for age, as explained earlier). In each case, we acquire results at the tweet level first, then port the labels to the user level as explained in the previous section. For our second and third submitted models, we also follow this method of going from the tweet to the user level. Second submission. We combine our DEV data with our EXTENDED_Dialect and EXTENDED_Gender data, for dialect and gender respectively, and train our second submissions for the two tasks. For the age second submission, we concatenate the DEV data to TRAIN and fine-tune the BERT model. We refer to the settings for our second submission models collectively as BERT_EXT+DEV.
Third submission. Finally, for our third submission, we use a majority vote of (1) first submission, (2) second submission, and (3) predictions from our user-level BERT model. These majority class models (i.e., our third submission) achieve best results on the official test data. We acquire 54.72% accuracy for age, 81.67% accuracy for gender, 93.75% accuracy for dialect, and 40.97% joint accuracy.
Conclusion
In this work, we described our submitted models to the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focused on detecting age, dialect, and gender using BERT models under various data conditions, showing the utility of additional, in-house data on the task. We also showed that a majority vote of our models trained under different conditions outperforms single models on the official evaluation. In the future, we will investigate automatically extending training data for these tasks as well as better representation learning methods.
Acknowledgement
We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences Research Council of Canada (SSHRC), and Compute Canada (www.computecanada.ca). | No |
52ed2eb6f4d1f74ebdc4dcddcae201786d4c0463 | 52ed2eb6f4d1f74ebdc4dcddcae201786d4c0463_0 | Q: Are the models compared to some baseline models?
Text: Introduction
The proliferation of social media has made it possible to collect user data in unprecedented ways. These data can come in the form of usage and behavior (e.g., who likes what on Facebook), network (e.g., who follows a given user on Instagram), and content (e.g., what people post to Twitter). The availability of such data has made it possible to make discoveries about individuals and communities, mobilizing social and psychological research and employing natural language processing methods. In this work, we focus on predicting social media user age, dialect, and gender based on posted language. More specifically, we use a total of 100 tweets from each manually-labeled user to predict each of these attributes. Our dataset comes from the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focus on building simple models using pre-trained Bidirectional Encoder Representations from Transformers (BERT) BIBREF1 under various data conditions. Our results show (1) the utility of augmenting training data, and (2) the benefit of using majority votes from our simple classifiers.
In the rest of the paper, we introduce the dataset, followed by our experimental conditions and results. We then provide a literature review and conclude.
Data
For the purpose of our experiments, we use data released by the APDA shared task organizers. The dataset is divided into train and test by the organizers. The training set is distributed with labels for the three tasks of age, dialect, and gender. Following the standard shared task setup, the test set is distributed without labels and participants were expected to submit their predictions on the test set. The shared task predictions are expected by the organizers at the level of users. The distribution has 100 tweets for each user, and so each tweet is distributed with a corresponding user id. As such, in total, the distributed training data has 2,250 users, contributing a total of 225,000 tweets. The official task test set contains 72,000 tweets posted by 720 users. For our experiments, we split the training data released by the organizers into a 90% TRAIN set (202,500 tweets from 2,025 users) and a 10% DEV set (22,500 tweets from 225 users). The age task labels come from the tagset {under-25, between-25 and 34, above-35}. For dialects, the data are labeled with 15 classes, from the set {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. The gender task involves binary labels from the set {male, female}.
Experiments
As explained earlier, the shared task is set up at the user level, where the age, dialect, and gender of each user are the required predictions. In our experiments, we first model the task at the tweet level and then port these predictions to the user level. For our core modelling, we fine-tune BERT on the shared task data. We also introduce to the task an additional in-house dataset labeled with dialect and gender tags, as we will explain below. As a baseline, we use a small gated recurrent unit (GRU) model. We now introduce our tweet-level models.
Experiments ::: Tweet-Level Models ::: Baseline GRU.
Our baseline is a GRU network for each of the three tasks. We use the same network architecture across the 3 tasks. Each network contains a single unidirectional GRU layer with 500 units and a linear output layer. The network is trained end-to-end. Our input embedding layer is initialized with a standard normal distribution, with $\mu =0$ and $\sigma =1$, i.e., $W \sim N(0,1)$. We use a maximum sequence length of 50 tokens, and choose an arbitrary vocabulary size of 100,000 types, where we use the 100,000 most frequent words in TRAIN. To avoid over-fitting, we use dropout BIBREF2 with a rate of 0.5 on the hidden layer. For training, we use the Adam BIBREF3 optimizer with a fixed learning rate of $1e-3$. We employ batch training with a batch size of 32 for this model. We train the network for 15 epochs and save the model at the end of each epoch, choosing the model that achieves the highest accuracy on DEV as our best model. We present our best results on DEV in Table TABREF7. We report all our results using accuracy. Our best model obtains 42.48% for age, 37.50% for dialect, and 57.81% for gender. All models obtain their best results after 2 epochs.
Experiments ::: Tweet-Level Models ::: BERT.
For each task, we fine-tune the BERT-Base Multilingual Cased model released by the authors BIBREF1. The model was pre-trained on the Wikipedias of 104 languages (including Arabic) with 12 layers, 768 hidden units each, and 12 attention heads, for a total of 110M parameters. The vocabulary of the model consists of 119,547 shared WordPieces. We fine-tune the model with a maximum sequence length of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 15 epochs. We use the same network architecture and parameters across the 3 tasks. As Table TABREF7 shows, compared with the GRU, BERT is 3.16% better for age, 4.85% better for dialect, and 2.45% better for gender.
Experiments ::: Tweet-Level Models ::: Data Augmentation.
To further improve the performance of our models, we introduce in-house labeled data that we use to fine-tune BERT. For the gender classification task, we manually label an in-house dataset of 1,100 users with gender tags, including 550 female and 550 male users. We obtain 162,829 tweets by crawling the timelines of these 1,100 users. We combine this new gender dataset with the gender TRAIN data (from the shared task) to obtain an extended dataset, to which we refer as EXTENDED_Gender. For the dialect identification task, we randomly sample 20,000 tweets for each class from an in-house dataset gold-labeled with the same 15 classes as the shared task. In this way, we obtain 298,929 tweets (Sudan only has 18,929 tweets). We combine this new dialect data with the shared task dialect TRAIN data to form EXTENDED_Dialect. For both the dialect and gender tasks, we fine-tune BERT on EXTENDED_Dialect and EXTENDED_Gender independently and report performance on DEV. We refer to this iteration of experiments as BERT_EXT. As Table TABREF7 shows, BERT_EXT is 2.18% better than BERT for dialect and 0.75% better than BERT for gender.
Experiments ::: User-Level Models
Our aforementioned models make predictions at the tweet level, rather than directly detecting the labels of a user. Hence, we follow the work of Zhang & Abdul-Mageed BIBREF4 to identify user-level labels. For each of the three tasks, we use tweet-level predicted labels (and associated softmax values) as a proxy for user-level labels. For each predicted label, we use the softmax value as a threshold for including only the most confidently predicted tweets. Since in some cases softmax values can be low, we try all threshold values between 0.00 and 0.99 and take a softmax-based majority class as the user-level predicted label, tuning the threshold on our DEV set. Using this method, we acquire the following results at the user level: BERT models obtain an accuracy of 55.56% for age, 96.00% for dialect, and 80.00% for gender. BERT_EXT models achieve 95.56% accuracy for dialect and 84.00% accuracy for gender.
Experiments ::: APDA@FIRE2019 submission
First submission. For the shared task submission, we use the predictions of BERT_EXT as our first submission for gender and dialect, but only BERT for age (since we have no BERT_EXT models for age, as explained earlier). In each case, we acquire results at the tweet level first, then port the labels to the user level as explained in the previous section. For our second and third submitted models, we also follow this method of going from the tweet to the user level. Second submission. We combine our DEV data with our EXTENDED_Dialect and EXTENDED_Gender data, for dialect and gender respectively, and train our second submissions for the two tasks. For the age second submission, we concatenate the DEV data to TRAIN and fine-tune the BERT model. We refer to the settings for our second submission models collectively as BERT_EXT+DEV.
Third submission. Finally, for our third submission, we use a majority vote of (1) first submission, (2) second submission, and (3) predictions from our user-level BERT model. These majority class models (i.e., our third submission) achieve best results on the official test data. We acquire 54.72% accuracy for age, 81.67% accuracy for gender, 93.75% accuracy for dialect, and 40.97% joint accuracy.
Conclusion
In this work, we described our submitted models to the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focused on detecting age, dialect, and gender using BERT models under various data conditions, showing the utility of additional, in-house data on the task. We also showed that a majority vote of our models trained under different conditions outperforms single models on the official evaluation. In the future, we will investigate automatically extending training data for these tasks as well as better representation learning methods.
Acknowledgement
We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences Research Council of Canada (SSHRC), and Compute Canada (www.computecanada.ca). | Yes |
2c576072e494ab5598667cd6b40bc97fdd7d92d7 | 2c576072e494ab5598667cd6b40bc97fdd7d92d7_0 | Q: What are the in-house data employed?
Text: Introduction
The proliferation of social media has made it possible to collect user data in unprecedented ways. These data can come in the form of usage and behavior (e.g., who likes what on Facebook), network (e.g., who follows a given user on Instagram), and content (e.g., what people post to Twitter). The availability of such data has made it possible to make discoveries about individuals and communities, mobilizing social and psychological research and employing natural language processing methods. In this work, we focus on predicting social media user age, dialect, and gender based on posted language. More specifically, we use a total of 100 tweets from each manually-labeled user to predict each of these attributes. Our dataset comes from the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focus on building simple models using pre-trained Bidirectional Encoder Representations from Transformers (BERT) BIBREF1 under various data conditions. Our results show (1) the utility of augmenting training data, and (2) the benefit of using majority votes from our simple classifiers.
In the rest of the paper, we introduce the dataset, followed by our experimental conditions and results. We then provide a literature review and conclude.
Data
For the purpose of our experiments, we use data released by the APDA shared task organizers. The dataset is divided into train and test by the organizers. The training set is distributed with labels for the three tasks of age, dialect, and gender. Following the standard shared task setup, the test set is distributed without labels and participants were expected to submit their predictions on the test set. The shared task predictions are expected by the organizers at the level of users. The distribution has 100 tweets for each user, and so each tweet is distributed with a corresponding user id. As such, in total, the distributed training data has 2,250 users, contributing a total of 225,000 tweets. The official task test set contains 72,000 tweets posted by 720 users. For our experiments, we split the training data released by the organizers into a 90% TRAIN set (202,500 tweets from 2,025 users) and a 10% DEV set (22,500 tweets from 225 users). The age task labels come from the tagset {under-25, between-25 and 34, above-35}. For dialects, the data are labeled with 15 classes, from the set {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. The gender task involves binary labels from the set {male, female}.
Experiments
As explained earlier, the shared task is set up at the user level, where the age, dialect, and gender of each user are the required predictions. In our experiments, we first model the task at the tweet level and then port these predictions to the user level. For our core modelling, we fine-tune BERT on the shared task data. We also introduce to the task an additional in-house dataset labeled with dialect and gender tags, as we will explain below. As a baseline, we use a small gated recurrent unit (GRU) model. We now introduce our tweet-level models.
Experiments ::: Tweet-Level Models ::: Baseline GRU.
Our baseline is a GRU network for each of the three tasks. We use the same network architecture across the 3 tasks. Each network contains a single unidirectional GRU layer with 500 units and a linear output layer. The network is trained end-to-end. Our input embedding layer is initialized with a standard normal distribution, with $\mu =0$ and $\sigma =1$, i.e., $W \sim N(0,1)$. We use a maximum sequence length of 50 tokens, and choose an arbitrary vocabulary size of 100,000 types, where we use the 100,000 most frequent words in TRAIN. To avoid over-fitting, we use dropout BIBREF2 with a rate of 0.5 on the hidden layer. For training, we use the Adam BIBREF3 optimizer with a fixed learning rate of $1e-3$. We employ batch training with a batch size of 32 for this model. We train the network for 15 epochs and save the model at the end of each epoch, choosing the model that achieves the highest accuracy on DEV as our best model. We present our best results on DEV in Table TABREF7. We report all our results using accuracy. Our best model obtains 42.48% for age, 37.50% for dialect, and 57.81% for gender. All models obtain their best results after 2 epochs.
Experiments ::: Tweet-Level Models ::: BERT.
For each task, we fine-tune the BERT-Base Multilingual Cased model released by the authors BIBREF1. The model was pre-trained on the Wikipedias of 104 languages (including Arabic) with 12 layers, 768 hidden units each, and 12 attention heads, for a total of 110M parameters. The vocabulary of the model consists of 119,547 shared WordPieces. We fine-tune the model with a maximum sequence length of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 15 epochs. We use the same network architecture and parameters across the 3 tasks. As Table TABREF7 shows, compared with the GRU, BERT is 3.16% better for age, 4.85% better for dialect, and 2.45% better for gender.
Experiments ::: Tweet-Level Models ::: Data Augmentation.
To further improve the performance of our models, we introduce in-house labeled data that we use to fine-tune BERT. For the gender classification task, we manually label an in-house dataset of 1,100 users with gender tags, including 550 female and 550 male users. We obtain 162,829 tweets by crawling the timelines of these 1,100 users. We combine this new gender dataset with the gender TRAIN data (from the shared task) to obtain an extended dataset, to which we refer as EXTENDED_Gender. For the dialect identification task, we randomly sample 20,000 tweets for each class from an in-house dataset gold-labeled with the same 15 classes as the shared task. In this way, we obtain 298,929 tweets (Sudan only has 18,929 tweets). We combine this new dialect data with the shared task dialect TRAIN data to form EXTENDED_Dialect. For both the dialect and gender tasks, we fine-tune BERT on EXTENDED_Dialect and EXTENDED_Gender independently and report performance on DEV. We refer to this iteration of experiments as BERT_EXT. As Table TABREF7 shows, BERT_EXT is 2.18% better than BERT for dialect and 0.75% better than BERT for gender.
Experiments ::: User-Level Models
Our aforementioned models make predictions at the tweet level, rather than directly detecting the labels of a user. Hence, we follow the work of Zhang & Abdul-Mageed BIBREF4 to identify user-level labels. For each of the three tasks, we use tweet-level predicted labels (and associated softmax values) as a proxy for user-level labels. For each predicted label, we use the softmax value as a threshold for including only the most confidently predicted tweets. Since in some cases softmax values can be low, we try all threshold values between 0.00 and 0.99 and take a softmax-based majority class as the user-level predicted label, tuning the threshold on our DEV set. Using this method, we acquire the following results at the user level: BERT models obtain an accuracy of 55.56% for age, 96.00% for dialect, and 80.00% for gender. BERT_EXT models achieve 95.56% accuracy for dialect and 84.00% accuracy for gender.
Experiments ::: APDA@FIRE2019 submission
First submission. For the shared task submission, we use the predictions of BERT_EXT as our first submission for gender and dialect, but only BERT for age (since we have no BERT_EXT models for age, as explained earlier). In each case, we acquire results at the tweet level first, then port the labels to the user level as explained in the previous section. For our second and third submitted models, we also follow this method of going from the tweet to the user level. Second submission. We combine our DEV data with our EXTENDED_Dialect and EXTENDED_Gender data, for dialect and gender respectively, and train our second submissions for the two tasks. For the age second submission, we concatenate the DEV data to TRAIN and fine-tune the BERT model. We refer to the settings for our second submission models collectively as BERT_EXT+DEV.
Third submission. Finally, for our third submission, we use a majority vote of (1) first submission, (2) second submission, and (3) predictions from our user-level BERT model. These majority class models (i.e., our third submission) achieve best results on the official test data. We acquire 54.72% accuracy for age, 81.67% accuracy for gender, 93.75% accuracy for dialect, and 40.97% joint accuracy.
Conclusion
In this work, we described our submitted models to the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focused on detecting age, dialect, and gender using BERT models under various data conditions, showing the utility of additional, in-house data on the task. We also showed that a majority vote of our models trained under different conditions outperforms single models on the official evaluation. In the future, we will investigate automatically extending training data for these tasks as well as better representation learning methods.
Acknowledgement
We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences Research Council of Canada (SSHRC), and Compute Canada (www.computecanada.ca). | we manually label an in-house dataset of 1,100 users with gender tags, we randomly sample 20,000 tweets for each class from an in-house dataset gold labeled with the same 15 classes as the shared task |
8602160e98e4b2c9c702440da395df5261f55b1f | 8602160e98e4b2c9c702440da395df5261f55b1f_0 | Q: What are the three datasets used in the paper?
Text: Introduction
The proliferation of social media has made it possible to collect user data in unprecedented ways. These data can come in the form of usage and behavior (e.g., who likes what on Facebook), network (e.g., who follows a given user on Instagram), and content (e.g., what people post to Twitter). The availability of such data has made it possible to make discoveries about individuals and communities, mobilizing social and psychological research and employing natural language processing methods. In this work, we focus on predicting social media user age, dialect, and gender based on posted language. More specifically, we use a total of 100 tweets from each manually-labeled user to predict each of these attributes. Our dataset comes from the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focus on building simple models using pre-trained Bidirectional Encoder Representations from Transformers (BERT) BIBREF1 under various data conditions. Our results show (1) the utility of augmenting training data, and (2) the benefit of using majority votes from our simple classifiers.
In the rest of the paper, we introduce the dataset, followed by our experimental conditions and results. We then provide a literature review and conclude.
Data
For the purpose of our experiments, we use data released by the APDA shared task organizers. The dataset is divided into train and test by the organizers. The training set is distributed with labels for the three tasks of age, dialect, and gender. Following the standard shared task setup, the test set is distributed without labels and participants were expected to submit their predictions on the test set. The shared task predictions are expected by the organizers at the level of users. The distribution has 100 tweets for each user, and so each tweet is distributed with a corresponding user id. As such, in total, the distributed training data has 2,250 users, contributing a total of 225,000 tweets. The official task test set contains 72,000 tweets posted by 720 users. For our experiments, we split the training data released by the organizers into a 90% TRAIN set (202,500 tweets from 2,025 users) and a 10% DEV set (22,500 tweets from 225 users). The age task labels come from the tagset {under-25, between-25 and 34, above-35}. For dialects, the data are labeled with 15 classes, from the set {Algeria, Egypt, Iraq, Kuwait, Lebanon-Syria, Lybia, Morocco, Oman, Palestine-Jordan, Qatar, Saudi Arabia, Sudan, Tunisia, UAE, Yemen}. The gender task involves binary labels from the set {male, female}.
Experiments
As explained earlier, the shared task is set up at the user level, where the age, dialect, and gender of each user are the required predictions. In our experiments, we first model the task at the tweet level and then port these predictions to the user level. For our core modelling, we fine-tune BERT on the shared task data. We also introduce to the task an additional in-house dataset labeled with dialect and gender tags, as we will explain below. As a baseline, we use a small gated recurrent unit (GRU) model. We now introduce our tweet-level models.
Experiments ::: Tweet-Level Models ::: Baseline GRU.
Our baseline is a GRU network for each of the three tasks. We use the same network architecture across the 3 tasks. Each network contains a single unidirectional GRU layer with 500 units and a linear output layer. The network is trained end-to-end. Our input embedding layer is initialized with a standard normal distribution, with $\mu =0$ and $\sigma =1$, i.e., $W \sim N(0,1)$. We use a maximum sequence length of 50 tokens, and choose an arbitrary vocabulary size of 100,000 types, where we use the 100,000 most frequent words in TRAIN. To avoid over-fitting, we use dropout BIBREF2 with a rate of 0.5 on the hidden layer. For training, we use the Adam BIBREF3 optimizer with a fixed learning rate of $1e-3$. We employ batch training with a batch size of 32 for this model. We train the network for 15 epochs and save the model at the end of each epoch, choosing the model that achieves the highest accuracy on DEV as our best model. We present our best results on DEV in Table TABREF7. We report all our results using accuracy. Our best model obtains 42.48% for age, 37.50% for dialect, and 57.81% for gender. All models obtain their best results after 2 epochs.
Experiments ::: Tweet-Level Models ::: BERT.
For each task, we fine-tune the BERT-Base Multilingual Cased model released by the authors BIBREF1. The model was pre-trained on the Wikipedias of 104 languages (including Arabic) with 12 layers, 768 hidden units each, and 12 attention heads, for a total of 110M parameters. The vocabulary of the model consists of 119,547 shared WordPieces. We fine-tune the model with a maximum sequence length of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 15 epochs. We use the same network architecture and parameters across the 3 tasks. As Table TABREF7 shows, compared with the GRU, BERT is 3.16% better for age, 4.85% better for dialect, and 2.45% better for gender.
Experiments ::: Tweet-Level Models ::: Data Augmentation.
To further improve the performance of our models, we introduce in-house labeled data that we use to fine-tune BERT. For the gender classification task, we manually label an in-house dataset of 1,100 users with gender tags, including 550 female and 550 male users. We obtain 162,829 tweets by crawling the timelines of these 1,100 users. We combine this new gender dataset with the gender TRAIN data (from the shared task) to obtain an extended dataset, to which we refer as EXTENDED_Gender. For the dialect identification task, we randomly sample 20,000 tweets for each class from an in-house dataset gold-labeled with the same 15 classes as the shared task. In this way, we obtain 298,929 tweets (Sudan only has 18,929 tweets). We combine this new dialect data with the shared task dialect TRAIN data to form EXTENDED_Dialect. For both the dialect and gender tasks, we fine-tune BERT on EXTENDED_Dialect and EXTENDED_Gender independently and report performance on DEV. We refer to this iteration of experiments as BERT_EXT. As Table TABREF7 shows, BERT_EXT is 2.18% better than BERT for dialect and 0.75% better than BERT for gender.
Experiments ::: User-Level Models
Our aforementioned models make predictions at the tweet level, rather than directly detecting the labels of a user. Hence, we follow the work of Zhang & Abdul-Mageed BIBREF4 to identify user-level labels. For each of the three tasks, we use tweet-level predicted labels (and associated softmax values) as a proxy for user-level labels. For each predicted label, we use the softmax value as a threshold for including only the most confidently predicted tweets. Since in some cases softmax values can be low, we try all threshold values between 0.00 and 0.99 and take a softmax-based majority class as the user-level predicted label, tuning the threshold on our DEV set. Using this method, we acquire the following results at the user level: BERT models obtain an accuracy of 55.56% for age, 96.00% for dialect, and 80.00% for gender. BERT_EXT models achieve 95.56% accuracy for dialect and 84.00% accuracy for gender.
Experiments ::: APDA@FIRE2019 submission
First submission. For the shared task submission, we use the predictions of BERT_EXT as our first submission for gender and dialect, but only BERT for age (since we have no BERT_EXT models for age, as explained earlier). In each case, we acquire results at the tweet level first, then port the labels to the user level as explained in the previous section. For our second and third submitted models, we also follow this method of going from the tweet to the user level. Second submission. We combine our DEV data with our EXTENDED_Dialect and EXTENDED_Gender data, for dialect and gender respectively, and train our second submissions for the two tasks. For the age second submission, we concatenate the DEV data to TRAIN and fine-tune the BERT model. We refer to the settings for our second submission models collectively as BERT_EXT+DEV.
Third submission. Finally, for our third submission, we use a majority vote of (1) first submission, (2) second submission, and (3) predictions from our user-level BERT model. These majority class models (i.e., our third submission) achieve best results on the official test data. We acquire 54.72% accuracy for age, 81.67% accuracy for gender, 93.75% accuracy for dialect, and 40.97% joint accuracy.
Conclusion
In this work, we described our submitted models to the Arabic author profiling and deception detection shared task (APDA) BIBREF0. We focused on detecting age, dialect, and gender using BERT models under various data conditions, showing the utility of additional, in-house data on the task. We also showed that a majority vote of our models trained under different conditions outperforms single models on the official evaluation. In the future, we will investigate automatically extending training data for these tasks as well as better representation learning methods.
Acknowledgement
We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences Research Council of Canada (SSHRC), and Compute Canada (www.computecanada.ca). | Data released for APDA shared task contains 3 datasets. |
57fdb0f6cd91b64a000630ecb711550941283091 | 57fdb0f6cd91b64a000630ecb711550941283091_0 | Q: What are the potentials risks of this approach?
Text: Introduction
Practitioners in the development sector have long recognized the potential of qualitative data to inform programming and gain a better understanding of values, behaviors and attitudes of people and communities affected by their efforts. Some organizations mainly rely on interview or focus group data, some also consider policy documents and reports, and others have started tapping into social media data. Regardless of where the data comes from, analyzing it in a systematic way to inform quick decision-making poses challenges, in terms of high costs, time, or expertise required.
The application of natural language processing (NLP) and machine learning (ML) can make feasible the speedy analysis of qualitative data on a large scale.
We start with a brief description of the main approaches to NLP and how they can be augmented by human coding. We then move on to the issue of working with multiple languages and different document formats. We then provide an overview of the recent application of these techniques in a United Nations Development Program (UNDP) study.
Supervised and Unsupervised Learning
There are two broad approaches to NLP - supervised learning and unsupervised learning BIBREF0 . Supervised learning assumes that an outcome variable is known and an algorithm is used to predict the correct variable. Classifying email as spam based on how the user has classified previous mail is a classic example. In social science, we may want to predict voting behavior of a legislator with the goal of inferring ideological positions from such behavior. In development, interest may center around characteristics that predict successful completion of a training program based on a beneficiary's previous experience or demographic characteristics.
Supervised learning requires human coding - data must be read and labelled correctly. This can require substantial resources. At the same time, the advantage is that validation of a supervised learning result is relatively straightforward as it requires comparing prediction results with actual outcomes. Furthermore, there is no need to label all text documents (or interview data from each respondent) prior to analyzing them. Rather, a sufficiently large set of documents can be labelled to train an algorithm and then used to classify the remaining documents.
In unsupervised learning, the outcome variable is unknown. The exercise is, therefore, of a more exploratory nature. Here the purpose is to reveal patterns in the data that allow us to distinguish distinct groups whose differences are small, while variations across groups are large. In a set of text documents, we may be interested in the main topics about which respondents are talking. In social sciences, we may look for groups of nations within the international system that use a similar language or that describe similar issues, like small-island states prioritizing climate change. Identifying such groups is often referred to as `dimension reduction' of data.
Validation of unsupervised learning results is less straight-forward than with supervised learning. We use data external to our analysis to validate the findings BIBREF1 .
A complementary approach to unsupervised and supervised learning is the use of crowdsourced human coders. BIBREF2 show that crowdsourcing text analysis is a way to achieve reliable and replicable results quickly and inexpensively through the CrowdFlower platform. This approach can work well in supporting a supervised approach where outcomes need to be labelled. For example, BIBREF2 use this technique to produce party positions on the economic left-right and liberal-conservative dimensions from party manifestos. Online coders receive small specific tasks to reduce individual biases. Their individual responses are then aggregated to create an overall measure of party positions.
Working with Multiple Languages
A common obstacle to analyzing textual data in many fields, including the international development sector, is the plethora of languages that practitioners and researchers need to consider – each with subtle but meaningful differences. Fortunately, significant commercial interest in being able to translate large quantities of text inexpensively has led to major advances in recent years, driven by Microsoft, Google, and Yandex with the introduction of neural machine translation BIBREF3. These companies provide free-of-charge services that can be easily integrated into standard programming languages like Python and R. Open-source neural machine translation systems are also being made available BIBREF4.
In a recent application, BIBREF5 estimate the policy preferences of Swiss legislators using debates in the federal parliament. With speeches delivered in multiple languages, the authors first translate from German, French, and Italian into English using Google Translate API. They then estimate the positions of legislators using common supervised learning methods from text and compare to estimates of positions from roll-call votes.
BIBREF6 evaluate the quality of automatic translation for social science research. The authors utilize the europarl dataset BIBREF7 of debate transcripts in the European Parliament and compare English, Danish, German, Spanish, French, and Polish official versions of the debates with their translations performed using Google Translate. BIBREF6 find that features identified from texts are very similar between automatically translated documents and official manual translations. Furthermore, topic model estimates are also similar across languages when comparing machine and human translations of EU Parliament debates.
Working with Documents
In recent years, great strides have been made into leveraging information from text documents. For example, researchers have analyzed speeches, legislative bills, religious texts, press communications, newspaper articles, stakeholder consultations, policy documents, and regulations. Such documents often contain many different dimensions or aspects of information and it is usually impossible to manually process them for systematic analysis. The analytical methods used to research the content of such documents are similar. We introduce prominent applications from the social sciences to provide an intuition about what can be done with such data.
Open-ended survey questions
Open-ended questions are a rich source of information that should be leveraged to inform decision-making. We could be interested in several aspects of such a text document. One useful approach would be to find common, recurring topics across multiple respondents. This is an unsupervised learning task because we do not know what the topics are. Such models are known as topic models. They summarize multiple text documents into a number of common, semantic topics. BIBREF8 use a structural topic model (STM) that allows for the evaluation of the effects of structural covariates on topical structure, with the aim of analyzing several survey experiments and open-ended questions in the American National Election Study.
Religious statements
BIBREF9 analyze Islamic fatwas to determine whether Jihadist clerics write about different topics than non-Jihadists. Using an STM model, they uncover fifteen topics within the collection of religious writings. The model successfully identifies characteristic words in each topic that are common within the topic but occur infrequently in the fourteen other topics. The fifteen topics are labelled manually, where the labels are human interpretations of the common underlying meaning of the characteristic words within each topic. Some of the topics deal with fighting, excommunication, prayer, or Ramadan. While the topics are relatively distinct, there is some overlap, i.e. the distance between them varies. This information can be leveraged to map out a rhetorical network.
Public debates
BIBREF10 uses topic modeling to link the content of parliamentary speeches in the UK's House of Commons with the number of signatures of constituency-level petitions. He then investigates whether the signatures have any bearing on the responsiveness of representatives, i.e. whether Members of Parliament take up an issue if more people sign a petition on that issue. Also using speeches from the UK House of Commons, BIBREF11 produces evidence for the female role-model hypothesis. He shows that the appointment of female ministers leads to more speaking time and speech centrality of female backbenchers.
News reports
BIBREF12 uses supervised machine learning to generate data on UN peacekeeping activities in Cote d'Ivoire. The text data inputs are news articles from the website of the UN peacekeeping mission in Cote d'Ivoire. Based on a manually classified subset of articles, an algorithm is trained to classify terms into activity categories. Based on this algorithm, the remaining articles are then categorized. Analyzing these data yields new micro-level insights into the activities of peacekeepers on the ground and their effects.
BIBREF13 take a similar approach to researching the effectiveness of self-promotion strategies of politicians. They analyze 170,000 press releases from the U.S. House of Representatives. First, 500 documents are classified by hand into five categories of credit claiming, next the supervised learning algorithm, ReadMe BIBREF14 , is used to code the remaining documents automatically. Using this data, they show that the number of times legislators claim credit generates more support than whether or not the subject they claim credit for amounts to much.
Sentiment analysis
Instead of uncovering topics, we may want to know the positive or negative tone of a given document. In an open-ended survey response, we could be interested in how the respondent rates the experience with the program. Sentiment analysis is a common tool for such a task. It is based on dictionaries of words that are associated with positive or negative emotions. The sentiment of a document such as an open-ended response would then be based on relative word counts and the sentiment scores with which these words are associated. BIBREF15 find positive and negative keywords and count their frequency in Chinese newspaper articles that mention the United States. With these data, they identify attitudes towards the United States in China.
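As a rough illustration of the dictionary-based approach described here, the toy scorer below counts matches against small hand-picked word lists; real applications would use an established sentiment lexicon (and the same stemming and translation steps discussed elsewhere in this paper).

```python
POSITIVE = {"good", "improve", "success", "benefit", "support"}   # toy lexicon
NEGATIVE = {"bad", "problem", "fail", "loss", "corrupt"}          # toy lexicon

def sentiment_score(text):
    """Return (#positive - #negative) / #tokens for one document."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    pos = sum(tok in POSITIVE for tok in tokens)
    neg = sum(tok in NEGATIVE for tok in tokens)
    return (pos - neg) / len(tokens)

responses = ["The training was a clear success and a real benefit",
             "There was a problem with corrupt officials"]
print([round(sentiment_score(r), 2) for r in responses])   # positive, then negative
```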
Text reuse
Another application is to study text reuse in order to trace the flow of ideas. BIBREF16 analyze bill sections to identify whether two sections of a bill propose similar ideas. The algorithm they use was devised to trace gene sequences and takes the frequency and order of words into account. Using this technique, they can measure the influence of one bill on another. Similarly, BIBREF17 studies policy diffusion by shedding light on how the American Supreme Court and Courts of Appeals influence the diffusion of state policies. He uses plagiarism software to quantify the exact degree to which an existing law is reflected in a new proposal. Tracing ideas or influence over time and space can generate insights into the sources of information, the degree of spillover, and the influence of certain actors, ideas, or policies. It can shed light on network structures and long-term effects that would otherwise be hidden from us.
Estimating preferences of actors
In the social sciences, text is often used to infer preferences. Various scaling techniques have been developed and refined over recent years. Wordfish is a scaling algorithm that enables us to estimate policy positions based on word frequencies BIBREF18 . Researchers have used this approach to measure policy positions on European integration in the European Parliament BIBREF19 , on austerity preferences in Ireland BIBREF20 , and intra-party preferences in the energy debate in Switzerland BIBREF21 . Recent developments in the field allow researchers to estimate attitudes in multiple issue dimensions and, therefore, allow for more fine-grained preference estimates BIBREF22 .
Taking context into account
More recent developments in NLP depart from frequencies of single words or groupings of multiple words. Instead, each word is an observation and its variables are other words or characters. Thus, each word is represented by a vector that describes words and their frequencies in the neighborhood. This approach allows for the capture of text semantics BIBREF23 .
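A common way to obtain such word vectors is to train a word-embedding model on the corpus at hand; the sketch below uses gensim's Word2Vec (4.x API) on a tiny placeholder corpus purely for illustration.

```python
from gensim.models import Word2Vec

# Each document is a list of tokens; this tiny corpus is a placeholder.
corpus = [["female", "entrepreneurship", "loan", "remittances", "business"],
          ["border", "crossing", "livestock", "water", "pasture"],
          ["border", "work", "migration", "russia", "remittances"]]

model = Word2Vec(sentences=corpus, vector_size=100, window=5,
                 min_count=1, sg=1, epochs=20, seed=0)

vector = model.wv["border"]                       # 100-dimensional embedding
neighbours = model.wv.most_similar("border", topn=3)
```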
BIBREF24 apply this to evaluating real estate in the U.S. by comparing property descriptions with words that are associated with high quality. BIBREF25 collected all country statements made during the United Nations General Debate, where heads of state and government make statements that they consider to be important for their countries. Using these data, BIBREF26 train a neural network. They construct an index of similarity between nations and policy themes that reveals preference alliances. This enables them to identify major players using network centrality and to show that speeches contain information that goes beyond mere voting records.
In a large exploratory effort, BIBREF27 use dynamic topic modeling which captures the evolution of topics over time, along with topological analytical techniques that allow for the analysis of multidimensional data in a scale invariant way. This enables them to understand political institutions, in this case through the use of speeches from the UK House of Commons. They classify representatives into groups according to speech content and verbosity, and identify a general pattern of political cohesion. They further show that it is feasible to track the performance of politicians with regard to specific issues using text. Topological techniques are especially useful to discover networks of relations using text. BIBREF26 apply this to uncover ideological communities in the network of states in the international system using UN General Assembly speeches.
Working with Short Text, Micro-Blogs, Social Media
Social media networks such as Twitter, the microblogging service, or the social network, Facebook, connect a vast amount of people in most societies. They generally contain shorter text excerpts compared to the sources of text previously discussed. However, their size and dynamic nature make them a compelling source of information. Furthermore, social networks online reflect social networks offline BIBREF28 . They provide a rare and cheap source of information on dynamic micro-level processes.
Twitter
Similar to our discussion above, topic models can be used to analyze social media data. BIBREF9 use such a model to analyze how the United States is viewed in China and in Arabic-speaking countries in response to the disclosure of classified information by Edward Snowden. They collect tweets containing the word “Snowden” in both languages. The tweets are then translated to English using machine translation. BIBREF9 show that Chinese posts are concerned more about attacks in terms of spying, while Arabic posts discuss human rights violations.
We can use social media to analyze networks and sentiments. Similar to word counts, volume of posts can carry information. BIBREF29 collect tweets originating from and referring to political actors around the 2014 elections to the European Parliament. They consider the language and national distribution as well as the dynamics of social media usage. Using network graphs depicting the conversations within and between countries, they identify topics debated nationally, and also find evidence for a Europe-wide debate around the EP elections and the European Union generally. Using sentiment analysis, they further show that positive statements were correlated with pro-integration attitudes whereas negative debates were more national and anti-integration.
This EU example translates well to national conversations involving multiple ethnic or linguistic groups elsewhere. Moreover, we can learn how information spreads from social networks. Consequently, within ethical boundaries, we may also be able to target information more efficiently. An analysis of Twitter data from the Arab Spring suggests that coordination that originated from the periphery of a network rather than the center sparked more protest BIBREF30 . Coordination was measured as a Gini index of Hashtags while centrality was measured by a count of followers of an account.
Facebook
Social media has been used to estimate preferences as well. The advantage of social media compared to speeches or any other preference indicator is coverage. BIBREF31 use endorsement of official pages on Facebook to scale ideological positions of politicians from different levels of government and the public into a common space. Their method extends to other social media such as Twitter where endorsements and likes could be leveraged.
Weibo, RenRen, and Chinese microblogs
The most prominent example of supervised classification with social media data involves the first large scale study of censorship in China. BIBREF32 automatically downloaded Chinese blogposts as they appeared online. Later they returned to the same posts and checked whether or not they had been censored. Furthermore, they analyzed the content of the blog posts and showed that rather than banning critique directed at the government, censorship efforts concentrate on calls for collective expression, such as demonstrations.
Further investigations of Chinese censorship were made possible by leaked correspondence from the Chinese Zhanggong District. The leaks are emails in which individuals claim credit for propaganda posts in the name of the regime. The emails contain social media posts and account names. BIBREF33 used the leaked posts as training data for a classification algorithm that subsequently helped them to identify more propaganda posts. In conjunction with a follow-up survey experiment they found that most content constitutes cheerleading for the regime rather than, for example, critique of foreign governments.
In the next section we discuss an application of natural language processing in international development research.
UNDP Fragments of Impact Initiative
In 2015, the United Nations Development Programme (UNDP) Regional Hub for Europe and CIS launched a Fragments of Impact Initiative (FoI) that helped to collect qualitative (micro-narratives) and quantitative data from multiple countries.
Within a six-month period, around 10,000 interviews were conducted in multiple languages. These covered the perceptions of the local population in countries including Tajikistan, Yemen, Serbia, Kyrgyzstan and Moldova on issues of peace and reconciliation, local and rural development, value chains, female entrepreneurship and empowerment, and youth unemployment. The micro-narratives were collected using SenseMaker®, a commercial tool for collecting qualitative and quantitative data. The micro-narratives were individual responses to context-tailored questions. An example of such a question is: “Share a recent example of an event that made it easier or harder to support how your family lives.”
While the analysis and visualization of the quantitative data were not problematic, the systematic analysis and visualization of the qualitative data, collected in the format of micro-narratives, would have been impossible without automated methods.
To find a way to deal with the extensive body of micro-narrative data, UNDP engaged a group of students from the School of Public Policy, University College London, under the supervision of Prof Slava Mikhaylov (University of Essex) and the research coordination of Dr Anna Hanchar (The Data Atelier). The objective of this work was to explore how to systematize the analysis of country-level qualitative data, visualize the data, and inform quick decision-making and timely experiment design. The results of the project were presented at the Data for Policy 2016 conference BIBREF34.
The UCL team had access to the micro-narratives, as well as context-specific metadata such as demographic information and project details. For a cross-national comparison for policy-makers, the team translated the responses from multiple languages into English using machine translation, in this case the Translate API (Yandex Technologies). As a pre-processing step, words without functional meaning (e.g. `I'), rare words that occurred in only one narrative, numbers, and punctuation were all removed. The remaining words were stemmed to remove plural forms of nouns and conjugations of verbs.
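A sketch of this pre-processing pipeline is shown below. The `translate` argument stands in for the machine-translation call (the Yandex Translate API wrapper is not reproduced here), and the NLTK stop-word list and Snowball stemmer are assumptions about the exact tools used rather than details reported by the team.

```python
import re
from collections import Counter
from nltk.corpus import stopwords          # assumes nltk.download("stopwords") has been run
from nltk.stem import SnowballStemmer

stemmer = SnowballStemmer("english")
stop_words = set(stopwords.words("english"))

def preprocess(narratives, translate=lambda text: text):
    """`translate` stands in for a machine-translation call (e.g. a wrapper
    around the Yandex Translate API); by default it is a no-op."""
    docs = []
    for text in narratives:
        text = translate(text).lower()
        text = re.sub(r"[^a-z\s]", " ", text)          # drop numbers and punctuation
        tokens = [stemmer.stem(t) for t in text.split()
                  if t not in stop_words and len(t) > 1]
        docs.append(tokens)
    # drop rare words that occur in only one narrative
    doc_freq = Counter(w for doc in docs for w in set(doc))
    return [[w for w in doc if doc_freq[w] > 1] for doc in docs]
```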
As part of this exploration exercise, and guided by UNDP country project leads, the UCL team applied structural topic modeling BIBREF8 as an NLP approach and created an online dashboard containing data visualization per country. The dashboard included descriptive data, as well as results. Figure FIGREF13 illustrates an example of the dashboard. The analysis also allowed for the extraction of general themes described by respondents in the micro-narratives, and looked for predictors such as demographics that correlated with these themes. In Moldova, the major topic among men was rising energy prices. Among women the main topic was political participation and protest, which suggests that female empowerment programs could potentially be fruitful.
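The structural topic model itself is distributed as the R package stm; since the examples in this document are written in Python, the sketch below uses gensim's LDA as a rough stand-in that recovers topics but not the covariate effects STM provides. The tiny `docs` list is a placeholder for the pre-processed narratives.

```python
from gensim import corpora, models

# `docs` would be the tokenised narratives produced by the preprocessing
# step sketched above; a tiny placeholder is used here.
docs = [["borrow", "loan", "business", "shop"],
        ["border", "water", "livestock", "pasture"],
        ["school", "teacher", "child", "education"]]

dictionary = corpora.Dictionary(docs)
bow = [dictionary.doc2bow(doc) for doc in docs]
lda = models.LdaModel(bow, num_topics=3, id2word=dictionary,
                      passes=10, random_state=0)

for topic_id, words in lda.show_topics(num_topics=3, num_words=4, formatted=False):
    print(topic_id, [w for w, _ in words])
```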
In Kyrgyzstan, the team found that the main topics revolved around finding work, access to resources and national borders. Using the meta-data on urbanization, it became clear that rural respondents described losing livestock that had crossed the border to Tajikistan, or that water sources were located across the border. The urban population was concerned about being able to cross the border to Russia for work. Figure FIGREF14 shows word probabilities from the main “agriculture/trade” topic across respondents from urban and rural communities.
For Serbia, the analysis compared issues faced by Roma populations in areas of high and low Roma concentration. Figure FIGREF15 shows the relationship between topics discussed by Roma respondents in areas of high concentration.
BIBREF34 found that Roma respondents identified education as the overarching main topic, independent of the density of the Roma population. Differences were found between respondents across the level of integration with society. In areas of high Roma concentration, respondents were aware of available channels for inclusion. In low Roma density areas, respondents were mainly concerned with severe poverty and discrimination preventing societal inclusion.
In Tajikistan, BIBREF34 investigated the relationship between household labor migration and female entrepreneurship success. They found strong regional differences between the Sughd and Khatlon regions, with topics in the less successful Khatlon region revolving around red tape. Moreover, successful entrepreneurship was very much related to receiving remittances. Figure FIGREF16 illustrates topics that correlate with success.
Analysis of micro-narratives from Yemen showed that the most recurrent themes focused on family issues (Figure FIGREF17 ). There are significant differences in terms of engagement in civil society between young people and the older population. Young respondents emphasized pro-active behavior, political engagement, and interest in community-driven initiatives fostering political development.
Conclusion
In this overview, our aim has been to demonstrate how new forms of data can be leveraged to complement the work of practitioners in international development. We have shown that a wide variety of questions can be asked.
Exploratory work can be performed to systematize large quantities of text. Additionally, we can learn about the sentiment that specific groups express towards specific topics. Networks can be uncovered, the spread of information or ideas can be traced, and influential actors identified. We can classify documents based on human coding of a subset of documents and establish which topics predict/correlate with predefined outcomes such as successful employment or completion of a program.
While the application used here to illustrate the discussion focuses on texts in the form of open-ended questions, social networks can be used and their coverage and topicality can be leveraged.
Natural language processing has the potential to unlock large quantities of untapped knowledge that could enhance our understanding of micro-level processes and enable us to make better context-tailored decisions.
Acknowledgment
Authors' names are listed in alphabetical order. Authors have contributed equally to all work. | Unanswerable |
3aa43a0d543b88d40e4f3500c7471e263515be40 | 3aa43a0d543b88d40e4f3500c7471e263515be40_0 | Q: What elements of natural language processing are proposed to analyze qualitative data?
Text: Introduction
Practitioners in the development sector have long recognized the potential of qualitative data to inform programming and gain a better understanding of values, behaviors and attitudes of people and communities affected by their efforts. Some organizations mainly rely on interview or focus group data, some also consider policy documents and reports, and others have started tapping into social media data. Regardless of where the data comes from, analyzing it in a systematic way to inform quick decision-making poses challenges, in terms of high costs, time, or expertise required.
The application of natural language processing (NLP) and machine learning (ML) can make feasible the speedy analysis of qualitative data on a large scale.
We start with a brief description of the main approaches to NLP and how they can be augmented by human coding. We then move on to the issue of working with multiple languages and different document formats. We then provide an overview of the recent application of these techniques in a United Nations Development Program (UNDP) study.
Supervised and Unsupervised Learning
There are two broad approaches to NLP - supervised learning and unsupervised learning BIBREF0 . Supervised learning assumes that an outcome variable is known and an algorithm is used to predict the correct variable. Classifying email as spam based on how the user has classified previous mail is a classic example. In social science, we may want to predict voting behavior of a legislator with the goal of inferring ideological positions from such behavior. In development, interest may center around characteristics that predict successful completion of a training program based on a beneficiary's previous experience or demographic characteristics.
Supervised learning requires human coding - data must be read and labelled correctly. This can require substantial resources. At the same time, the advantage is that validation of a supervised learning result is relatively straightforward as it requires comparing prediction results with actual outcomes. Furthermore, there is no need to label all text documents (or interview data from each respondent) prior to analyzing them. Rather, a sufficiently large set of documents can be labelled to train an algorithm and then used to classify the remaining documents.
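To make the label-a-subset-then-classify workflow concrete, a minimal scikit-learn sketch might look as follows; the bag-of-words representation and logistic regression classifier are illustrative choices, not a recommendation from the text.

# Hypothetical supervised-learning sketch: train on the hand-labelled subset,
# then classify the remaining unlabelled documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def classify_remaining(labelled_texts, labels, unlabelled_texts):
    vectorizer = TfidfVectorizer(stop_words="english", min_df=2)
    X_train = vectorizer.fit_transform(labelled_texts)
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    X_rest = vectorizer.transform(unlabelled_texts)
    return model.predict(X_rest)  # predicted label for each unlabelled document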
In unsupervised learning, the outcome variable is unknown. The exercise is, therefore, of a more exploratory nature. Here the purpose is to reveal patterns in the data that allow us to distinguish distinct groups whose differences are small, while variations across groups are large. In a set of text documents, we may be interested in the main topics about which respondents are talking. In social sciences, we may look for groups of nations within the international system that use a similar language or that describe similar issues, like small-island states prioritizing climate change. Identifying such groups is often referred to as `dimension reduction' of data.
Validation of unsupervised learning results is less straightforward than with supervised learning. We use data external to our analysis to validate the findings BIBREF1 .
A complementary approach to unsupervised and supervised learning is the use of crowdsourced human coders. BIBREF2 show that crowdsourcing text analysis is a way to achieve reliable and replicable results quickly and inexpensively through the CrowdFlower platform. This approach can work well in supporting a supervised approach where outcomes need to be labelled. For example, BIBREF2 use this technique to produce party positions on the economic left-right and liberal-conservative dimensions from party manifestos. Online coders receive small specific tasks to reduce individual biases. Their individual responses are then aggregated to create an overall measure of party positions.
Working with Multiple Languages
A common obstacle to analyzing textual data in many fields, including the international development sector, is the plethora of languages that practitioners and researchers need to consider – each with subtle but meaningful differences. Fortunately, significant commercial interest in being able to translate large quantities of text inexpensively has led to major advances in recent years driven by Microsoft, Google, and Yandex with the introduction of neural machine translation BIBREF3 . They provide free-of-charge services that can be easily integrated into standard programming languages like Python and R. Open source neural machine translation systems are also being made available BIBREF4 .
In a recent application, BIBREF5 estimate the policy preferences of Swiss legislators using debates in the federal parliament. With speeches delivered in multiple languages, the authors first translate from German, French, and Italian into English using Google Translate API. They then estimate the positions of legislators using common supervised learning methods from text and compare to estimates of positions from roll-call votes.
BIBREF6 evaluate the quality of automatic translation for social science research. The authors utilize the europarl dataset BIBREF7 of debate transcripts in the European Parliament and compare English, Danish, German, Spanish, French, and Polish official versions of the debates with their translations performed using Google Translate. BIBREF6 find that features identified from texts are very similar between automatically translated documents and official manual translations. Furthermore, topic model estimates are also similar across languages when comparing machine and human translations of EU Parliament debates.
Working with Documents
In recent years, great strides have been made into leveraging information from text documents. For example, researchers have analyzed speeches, legislative bills, religious texts, press communications, newspaper articles, stakeholder consultations, policy documents, and regulations. Such documents often contain many different dimensions or aspects of information and it is usually impossible to manually process them for systematic analysis. The analytical methods used to research the content of such documents are similar. We introduce prominent applications from the social sciences to provide an intuition about what can be done with such data.
Open-ended survey questions
Open-ended questions are a rich source of information that should be leveraged to inform decision-making. We could be interested in several aspects of such a text document. One useful approach would be to find common, recurring topics across multiple respondents. This is an unsupervised learning task because we do not know what the topics are. Such models are known as topic models. They summarize multiple text documents into a number of common, semantic topics. BIBREF8 use a structural topic model (STM) that allows for the evaluation of the effects of structural covariates on topical structure, with the aim of analyzing several survey experiments and open-ended questions in the American National Election Study.
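Structural topic models are usually fitted with the R `stm` package; as a hedged stand-in, the sketch below fits a plain LDA topic model with scikit-learn and lists the top words per topic (it omits the document-level covariates that make STM "structural").

# Hypothetical topic-modelling sketch: plain LDA as a stand-in for STM.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def fit_topics(documents, n_topics=15, n_top_words=10):
    vectorizer = CountVectorizer(stop_words="english", min_df=2)
    counts = vectorizer.fit_transform(documents)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(counts)
    vocab = vectorizer.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [vocab[i] for i in weights.argsort()[::-1][:n_top_words]]
        print(f"Topic {k}: {' '.join(top)}")
    return lda, vectorizer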
Religious statements
BIBREF9 analyze Islamic fatwas to determine whether Jihadist clerics write about different topics from those written about by non-Jihadists. Using an STM model, they uncover fifteen topics within the collection of religious writings. The model successfully identifies characteristic words in each topic that are common within the topic but occur infrequently in the fourteen other topics. The fifteen topics are labelled manually, where the labels are human interpretations of the common underlying meaning of the characteristic words within each topic. Some of the topics deal with fighting, excommunication, prayer, or Ramadan. While the topics are relatively distinct, there is some overlap, i.e. the distance between them varies. This information can be leveraged to map out a rhetorical network.
Public debates
BIBREF10 uses topic modeling to link the content of parliamentary speeches in the UK's House of Commons with the number of signatures of constituency-level petitions. He then investigates whether the signatures have any bearing on the responsiveness of representatives, i.e. whether Members of Parliament take up an issue if more people sign a petition on that issue. Also using speeches from the UK House of Commons, BIBREF11 produces evidence for the female role-model hypothesis. He shows that the appointment of female ministers leads to more speaking time and speech centrality of female backbenchers.
News reports
BIBREF12 uses supervised machine learning to generate data on UN peacekeeping activities in Cote d'Ivoire. The text data inputs are news articles from the website of the UN peacekeeping mission in Cote d'Ivoire. Based on a manually classified subset of articles, an algorithm is trained to classify terms into activity categories. Based on this algorithm, the remaining articles are then categorized. Analyzing these data yields new micro-level insights into the activities of peacekeepers on the ground and their effects.
BIBREF13 take a similar approach to researching the effectiveness of self-promotion strategies of politicians. They analyze 170,000 press releases from the U.S. House of Representatives. First, 500 documents are classified by hand into five categories of credit claiming, next the supervised learning algorithm, ReadMe BIBREF14 , is used to code the remaining documents automatically. Using this data, they show that the number of times legislators claim credit generates more support than whether or not the subject they claim credit for amounts to much.
Sentiment analysis
Instead of uncovering topics, we may want to know of a positive or negative tone of any given document. In an open-ended survey response, we could be interested in how the respondent rates the experience with the program. Sentiment analysis is a common tool for such a task. It is based on dictionaries of words that are associated with positive or negative emotions. The sentiment of a document such as an open-ended question would then be based on relative word counts and the sentiment scores with which these words are associated. BIBREF15 find positive and negative keywords and count their frequency in Chinese newspaper articles that mention the United States. With this data, they identify attitudes towards the United States in China.
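A toy version of such a dictionary-based scorer is sketched below; the two word lists are placeholders rather than an actual sentiment lexicon.

# Hypothetical dictionary-based sentiment sketch with placeholder word lists.
POSITIVE = {"good", "great", "success", "improve", "benefit"}
NEGATIVE = {"bad", "poor", "fail", "crisis", "loss"}

def sentiment_score(text):
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    hits = pos + neg
    # Relative counts: +1 if all matched words are positive, -1 if all negative.
    return 0.0 if hits == 0 else (pos - neg) / hits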
Text reuse
Another application is to study text reuse in order to trace the flow of ideas. BIBREF16 analyze bill sections to identify whether two sections of a bill propose similar ideas. The algorithm they use was devised to trace gene sequencing and takes the frequency and order of words into account. Using this technique, they can measure the influence of one bill on another. Similarly, BIBREF17 studies policy diffusion by shedding light on how the American Supreme Court and Courts of Appeals influence diffusion of state policies. He uses plagiarism software to quantify the exact degree to which an existing law is reflected in a new proposal. Tracing ideas or influence over time and space can generate insights into the sources of information, the degree of spillover, and the influence of certain actors, ideas, or policies. It can shed light on network structures and long-term effects that would otherwise be hidden to us.
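The alignment algorithms used in these studies are more sophisticated (they account for word order, as in gene-sequence alignment); a crude, hedged proxy for text reuse is the share of overlapping word n-grams:

# Hypothetical text-reuse sketch: word 5-gram overlap as a crude reuse score.
def word_ngrams(text, n=5):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def reuse_score(doc_a, doc_b, n=5):
    a, b = word_ngrams(doc_a, n), word_ngrams(doc_b, n)
    if not a or not b:
        return 0.0
    # Fraction of the shorter document's n-grams that also appear in the other.
    return len(a & b) / min(len(a), len(b))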
Estimating preferences of actors
In the social sciences, text is often used to infer preferences. Various scaling techniques have been developed and refined over recent years. Wordfish is a scaling algorithm that enables us to estimate policy positions based on word frequencies BIBREF18 . Researchers have used this approach to measure policy positions on European integration in the European Parliament BIBREF19 , on austerity preferences in Ireland BIBREF20 , and intra-party preferences in the energy debate in Switzerland BIBREF21 . Recent developments in the field allow researchers to estimate attitudes in multiple issue dimensions and, therefore, allow for more fine-grained preference estimates BIBREF22 .
Taking context into account
More recent developments in NLP depart from frequencies of single words or groupings of multiple words. Instead, each word is an observation and its variables are other words or characters. Thus, each word is represented by a vector that describes words and their frequencies in the neighborhood. This approach allows for the capture of text semantics BIBREF23 .
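In practice these vectors are learned with models such as word2vec or GloVe; the minimal sketch below only builds raw co-occurrence counts within a fixed window, which conveys the same intuition without the learning step.

# Hypothetical context-vector sketch: co-occurrence counts within a +/-2 word window.
from collections import Counter, defaultdict

def cooccurrence_vectors(tokenized_docs, window=2):
    vectors = defaultdict(Counter)
    for doc in tokenized_docs:
        for i, word in enumerate(doc):
            lo, hi = max(0, i - window), min(len(doc), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vectors[word][doc[j]] += 1
    return vectors  # word -> Counter of neighbouring words and their frequencies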
BIBREF24 apply this to evaluating real estate in the U.S. by comparing property descriptions with words that are associated with high quality. BIBREF25 collected all country statements made during the United Nations General Debate where heads of state and government make statements that they consider to be important for their countries. Using this data, BIBREF26 run a neural network. They construct an index of similarity between nations and policy themes that allows us to identify preference alliances. This enables them to identify major players using network centrality and show that speeches contain information that goes beyond mere voting records.
In a large exploratory effort, BIBREF27 use dynamic topic modeling which captures the evolution of topics over time, along with topological analytical techniques that allow for the analysis of multidimensional data in a scale invariant way. This enables them to understand political institutions, in this case through the use of speeches from the UK House of Commons. They classify representatives into groups according to speech content and verbosity, and identify a general pattern of political cohesion. They further show that it is feasible to track the performance of politicians with regard to specific issues using text. Topological techniques are especially useful to discover networks of relations using text. BIBREF26 apply this to uncover ideological communities in the network of states in the international system using UN General Assembly speeches.
Working with Short Text, Micro-Blogs, Social Media
Social media networks such as Twitter, the microblogging service, or the social network, Facebook, connect a vast number of people in most societies. They generally contain shorter text excerpts compared to the sources of text previously discussed. However, their size and dynamic nature make them a compelling source of information. Furthermore, social networks online reflect social networks offline BIBREF28 . They provide a rare and cheap source of information on dynamic micro-level processes.
Twitter
Similar to our discussion above, topic models can be used to analyze social media data. BIBREF9 use such a model to analyze how the United States is viewed in China and in Arabic-speaking countries in response to the disclosure of classified information by Edward Snowden. They collect tweets containing the word “Snowden” in both languages. The tweets are then translated to English using machine translation. BIBREF9 show that Chinese posts are concerned more about attacks in terms of spying, while Arabic posts discuss human rights violations.
We can use social media to analyze networks and sentiments. Similar to word counts, volume of posts can carry information. BIBREF29 collect tweets originating from and referring to political actors around the 2014 elections to the European Parliament. They consider the language and national distribution as well as the dynamics of social media usage. Using network graphs depicting the conversations within and between countries, they identify topics debated nationally, and also find evidence for a Europe-wide debate around the EP elections and the European Union generally. Using sentiment analysis, they further show that positive statements were correlated with pro-integration attitudes whereas negative debates were more national and anti-integration.
This EU example translates well to national conversations involving multiple ethnic or linguistic groups elsewhere. Moreover, we can learn how information spreads from social networks. Consequently, within ethical boundaries, we may also be able to target information more efficiently. An analysis of Twitter data from the Arab Spring suggests that coordination that originated from the periphery of a network rather than the center sparked more protest BIBREF30 . Coordination was measured as a Gini index of Hashtags while centrality was measured by a count of followers of an account.
Facebook
Social media has been used to estimate preferences as well. The advantage of social media compared to speeches or any other preference indicator is coverage. BIBREF31 use endorsement of official pages on Facebook to scale ideological positions of politicians from different levels of government and the public into a common space. Their method extends to other social media such as Twitter where endorsements and likes could be leveraged.
Weibo, RenRen, and Chinese microblogs
The most prominent example of supervised classification with social media data involves the first large scale study of censorship in China. BIBREF32 automatically downloaded Chinese blogposts as they appeared online. Later they returned to the same posts and checked whether or not they had been censored. Furthermore, they analyzed the content of the blog posts and showed that rather than banning critique directed at the government, censorship efforts concentrate on calls for collective expression, such as demonstrations.
Further investigations of Chinese censorship were made possible by leaked correspondence from the Chinese Zhanggong District. The leaks are emails in which individuals claim credit for propaganda posts in the name of the regime. The emails contain social media posts and account names. BIBREF33 used the leaked posts as training data for a classification algorithm that subsequently helped them to identify more propaganda posts. In conjunction with a follow-up survey experiment they found that most content constitutes cheerleading for the regime rather than, for example, critique of foreign governments.
In the next section we discuss an application of natural language processing in international development research.
UNDP Fragments of Impact Initiative
In 2015, the United Nations Development Programme (UNDP) Regional Hub for Europe and CIS launched a Fragments of Impact Initiative (FoI) that helped to collect qualitative (micro-narratives) and quantitative data from multiple countries.
Within a six-month period, around 10,000 interviews were conducted in multiple languages. These covered the perception of the local population in countries including Tajikistan, Yemen, Serbia, Kyrgyzstan and Moldova on peace and reconciliation, local and rural development, value chain, female entrepreneurship and empowerment, and youth unemployment issues. The micro-narratives were collected using SenseMaker(r), a commercial tool for collecting qualitative and quantitative data. The micro-narratives were individual responses to context-tailored questions. An example of such a question is: “Share a recent example of an event that made it easier or harder to support how your family lives.”
While the analysis and visualization of quantitative data was not problematic, systematic analysis and visualization of qualitative data, collected in a format of micro-narratives, would have been impossible.
To find a way to deal with the extensive body of micro-narrative data, UNDP engaged a group of students from the School of Public Policy, University College London, under the supervision of Prof Slava Mikhaylov (University of Essex) and research coordination of Dr Anna Hanchar (The Data Atelier). The objective of this work was to explore how to systematize the analysis of country-level qualitative data, visualize the data, and inform quick decision-making and timely experiment design. The results of the project were presented at the Data for Policy 2016 BIBREF34 .
The UCL team had access to micro-narratives, as well as context specific meta-data such as demographic information and project details. For a cross-national comparison for policy-makers, the team translated the responses in multiple languages into English using machine translation, in this case Translate API (Yandex Technologies). As a pre-processing step, words without functional meaning (e.g. `I'), rare words that occurred in only one narrative, numbers, and punctuation were all removed. The remaining words were stemmed to remove plural forms of nouns or conjugations of verbs.
As part of this exploration exercise, and guided by UNDP country project leads, the UCL team applied structural topic modeling BIBREF8 as an NLP approach and created an online dashboard containing data visualization per country. The dashboard included descriptive data, as well as results. Figure FIGREF13 illustrates an example of the dashboard. The analysis also allowed for the extraction of general themes described by respondents in the micro-narratives, and looked for predictors such as demographics that correlated with these themes. In Moldova, the major topic among men was rising energy prices. Among women the main topic was political participation and protest, which suggests that female empowerment programs could potentially be fruitful.
In Kyrgyzstan, the team found that the main topics revolved around finding work, access to resources and national borders. Using the meta-data on urbanization, it became clear that rural respondents described losing livestock that had crossed the border to Tajikistan, or that water sources were located across the border. The urban population was concerned about being able to cross the border to Russia for work. Figure FIGREF14 shows word probabilities from the main “agriculture/trade” topic across respondents from urban and rural communities.
For Serbia, the analysis compared issues faced by Roma populations in areas of high and low Roma concentration. Figure FIGREF15 shows the relationship between topics discussed by Roma respondents in areas of high concentration.
BIBREF34 found that Roma respondents identified education as the overarching main topic, independent of the density of the Roma population. Differences were found between respondents across the level of integration with society. In areas of high Roma concentration, respondents were aware of available channels for inclusion. In low Roma density areas, respondents were mainly concerned with severe poverty and discrimination preventing societal inclusion.
In Tajikistan, BIBREF34 investigated the relationship between household labor migration and female entrepreneurship success. They found strong regional differences between the Sughd and Khatlon regions where topics in less successful Khatlon revolved around red tape. Moreover, successful entrepreneurship was very much related to receiving remittances. Figure FIGREF16 illustrates topics that correlate with success.
Analysis of micro-narratives from Yemen showed that the most recurrent themes focused on family issues (Figure FIGREF17 ). There are significant differences in terms of engagement in civil society between young people and the older population. Young respondents emphasized pro-active behavior, political engagement, and interest in community-driven initiatives fostering political development.
Conclusion
In this overview, our aim has been to demonstrate how new forms of data can be leveraged to complement the work of practitioners in international development. We have demonstrated that a wide variety of questions can be asked.
Exploratory work can be performed to systematize large quantities of text. Additionally, we can learn about the sentiment that specific groups express towards specific topics. Networks can be uncovered, the spread of information or ideas can be traced, and influential actors identified. We can classify documents based on human coding of a subset of documents and establish which topics predict/correlate with predefined outcomes such as successful employment or completion of a program.
While the application used here to illustrate the discussion focuses on texts in the form of open-ended questions, social networks can be used and their coverage and topicality can be leveraged.
Natural language processing has the potential to unlock large quantities of untapped knowledge that could enhance our understanding of micro-level processes and enable us to make better context-tailored decisions.
Acknowledgment
Authors' names are listed in alphabetical order. Authors have contributed equally to all work. | translated the responses in multiple languages into English using machine translation, words without functional meaning (e.g. `I'), rare words that occurred in only one narrative, numbers, and punctuation were all removed, remaining words were stemmed to remove plural forms of nouns or conjugations of verbs |
d82ec1003a3db7370994c7522590f7e5151b1f33 | d82ec1003a3db7370994c7522590f7e5151b1f33_0 | Q: How does the method measure the impact of the event on market prices?
Text: Introduction
The financial performance of a corporation is correlated with its social responsibility, such as whether its products are environmentally friendly, its manufacturing safety procedures protect against accidents, or it uses child labor in its third-world factories. Consumers care about these factors when making purchasing decisions in supermarkets, and investors integrate environmental, social and governance factors, known as ESG, in their investment decision-making. It has been shown that corporations' financial results have a positive correlation with their sustainability business model and that the ESG investment methodology can help reduce portfolio risk and generate competitive returns. However, one barrier for ESG evaluation is the lack of a relatively complete and centralized information source. Currently, ESG analysts leverage financial reports to collect the necessary data for proper evaluation, such as greenhouse gas emissions or discrimination lawsuits, but this data is inconsistent and latent. In this study, we consider social media, a crowdsourced data feed, to be a new data source for this task.
Social media applications such as Twitter offer users a platform to share and disseminate almost any content about various events such as sports and music, as well as controversial events. The content produced through these platforms not only facilitates the spread of information but can also provide meaningful signals about the influence of the events. A large number of responses to an issue on Twitter could inform the public about the significance of an event, widen the scope of the event, and bring more public attention inside and outside the social media circle.
We define a controversial event for a business entity as a credible and newsworthy incident that has the potential to impact an entity in its financial performance and operation, for example, an incident caused by an employee or a representative of the entity that has the potential to hurt the public's trust in its brand. Such an incident can demonstrate a potential gap in its risk management framework and policy execution, and eventually hurt the interest and trust of its stakeholders.
Controversial events trigger a large cascade of discussion on social media platforms. The broad connectivity between people propagates their opinions into trending topics that could affect the company financially and operationally. In certain cases, the responsible entity can be forced to take actions, e.g., to recall its product, which can impose a large financial burden on the entity. For instance, in the Takata air bag scandal, the event was discussed widely on Twitter after the New York Times published a comprehensive article on its defective air bag products in 2014. Takata was forced to recall nearly 50 million air bags and filed for bankruptcy in June 2017.
To this end, we propose a controversial event detection system utilizing Twitter data. We focus on controversial events which are credible and newsworthy. Twitter data were collected on a given company and various attributes of each tweet were extracted. We verify the credibility of an event by validating that the URLs appearing in tweets come from credible news sources. We utilize tweet attributes to detect events specific to the given company and the sentiment of the event to measure the controversy. The relationship between a burst of an entity's controversial event and the entity's market performance data was qualitatively assessed in our case study, where we found its potential impact on the equity value.
Related Work
There have been a few studies on assessing the sustainability of entities. The UN Commission on Sustainable Development (CSD) published a list of about 140 indicators on various dimensions of sustainability BIBREF0 . In BIBREF1 , Singh et al. reviewed various methodologies, indicators, and indices on sustainability assessment, which includes environmental and social domains. All the data on which the assessments were conducted, as mentioned in their works, are processed datasets, some of them collected from company annual reports and publications, newspaper clips, and management interviews. They stated that the large number of indicators or indices raises the need for data collection. Our work uses social media data as a new alternative data source to complement the traditional data collection.
Event detection on social media has been a popular research topic for years. Reuters Tracer BIBREF2 is reported as an application built for journalists to detect news leads in Twitter before the news becomes known to the public. Petrovic et al. BIBREF3 presented a locality-sensitive hashing based first story detection algorithm with a new variance reduction strategy to improve the performance. In BIBREF4 , the signal of a tweet word is built with wavelet analysis and an event is detected by clustering words with similar burst patterns in their signals. BIBREF5 describes a detection and analysis system named TEDAS which concentrates on Crime and Disaster related Events (CDE). TEDAS classifies whether a tweet is a CDE tweet, predicts its geo-location if missing, and ranks and returns important tweets when users query the system. TEDAS treats a single tweet as an event if the tweet qualifies, while our definition of an event is different: an event is a group of tweets discussing the same theme.
Controversy Detection in Social Media
In this section, we describe the main components of our controversy detection system.
Data collection
The system uses Twitter's filtered streaming API to collect relevant tweet data. The data collection pipeline accepts a comma-separated list of phrases as filtering parameters, which the API uses to determine which tweets will be retained from the stream. Once the system receives data from the API, it then separates postings by company and runs the downstream process on the separated data streams individually.
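Leaving the API connection itself aside, the downstream separation step might look roughly like this; the tweet structure and phrase lists are assumptions for illustration.

# Hypothetical sketch: route incoming tweets to per-company streams by phrase match.
def separate_by_company(tweet_stream, company_phrases):
    # company_phrases example (assumed): {"starbucks": ["starbucks", "sbux"]}
    streams = {company: [] for company in company_phrases}
    for tweet in tweet_stream:  # each tweet assumed to be a dict with a "text" field
        text = tweet.get("text", "").lower()
        for company, phrases in company_phrases.items():
            if any(phrase in text for phrase in phrases):
                streams[company].append(tweet)
    return streams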
Feature engineering
The data collection pipeline collects tweet postings for a given entity. For each incoming posting, the system also stores the following attributes: posting_id, creation_time, text, language, source, URLs, and hashtags.
The system parses the text attribute of each tweet. Part-of-speech (POS) tagging and named entity recognition (NER) algorithms are applied to each tweet, and terms that are tagged as proper nouns, verbs, and entities are stored. If two proper nouns are next to each other, the system merges them as one proper noun phrase. Entities such as person names, organizations, and locations from tweets are the key elements in describing an event and distinguishing it from other events, and are often used by news professionals to describe the complete story of an event. The verbs from POS tagging mainly represent what and why information, while NER helps to identify where, when, and who information. Together they capture the major aspects of an event, namely who, what, where, when, and why (5W). In addition, the sentiment of each tweet is assessed.
The system crawls the URLs in a posting and verifies whether the link comes from one or more credible news sources. More specifically, the system may consider the following to be examples of credible news sources: 1) a news outlet that has, and consistently applies, journalistic standards in its reporting or 2) an authoritative government agency not acting in a political capacity. Determining whether a source is a credible news source depends on the context of the event.
Based on all the extracted features, the system can build a tweet vector, which includes the following features: tweet id, creation time, source, hashtags, entity/proper nouns, verbs, sentiment, and news links.
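A hedged sketch of this feature-extraction step, here using spaCy (the paper does not state which POS/NER toolkit was used, and the field names of the tweet dict are assumptions):

# Hypothetical feature-extraction sketch with spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def tweet_vector(tweet, sentiment_score):
    doc = nlp(tweet["text"])
    # Named entities plus proper-noun tokens and verbs; adjacent proper nouns
    # could additionally be merged into phrases here.
    entities = [ent.text for ent in doc.ents]
    proper_nouns = [tok.text for tok in doc if tok.pos_ == "PROPN"]
    verbs = [tok.lemma_ for tok in doc if tok.pos_ == "VERB"]
    return {
        "tweet_id": tweet.get("id"),
        "created_at": tweet.get("created_at"),
        "source": tweet.get("source"),
        "hashtags": tweet.get("hashtags", []),
        "entities": entities + proper_nouns,
        "verbs": verbs,
        "sentiment": sentiment_score,
        "news_links": tweet.get("urls", []),
    }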
Event detection
When a new tweet is received in the data pipeline, it either forms a new cluster or it will be added to an existing cluster. A new tweet will be added to an existing cluster if it is sufficiently similar to one of the existing clusters based on its distance to the cluster average vector. If more than one cluster is applicable, the cluster that has the highest similarity to the new tweet is picked. If a new tweet is not added to any existing clusters, it would form a new cluster. A candidate event is a cluster that has at least five tweets. Algorithm SECREF6 summarizes our event detection method and the following controversy identification method.
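A minimal sketch of this incremental clustering, with each tweet represented as a bag of its extracted terms and cosine similarity standing in for the distance; the 0.5 similarity threshold is an assumption, while the five-tweet minimum comes from the text.

# Hypothetical incremental clustering sketch over bag-of-terms tweet vectors.
import math
from collections import Counter

def cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def cluster_tweets(tweet_vectors, sim_threshold=0.5, min_event_size=5):
    clusters = []  # each cluster: {"centroid": Counter, "tweets": [Counter, ...]}
    for vec in tweet_vectors:  # vec: Counter over entities, verbs, hashtags, etc.
        best, best_sim = None, 0.0
        for cluster in clusters:
            sim = cosine(vec, cluster["centroid"])
            if sim > best_sim:
                best, best_sim = cluster, sim
        if best is not None and best_sim >= sim_threshold:
            best["tweets"].append(vec)
            # Summed counts are proportional to the average vector, which is all
            # cosine similarity needs.
            best["centroid"] = best["centroid"] + vec
        else:
            clusters.append({"centroid": Counter(vec), "tweets": [vec]})
    # Clusters with at least five tweets become candidate events.
    return [c for c in clusters if len(c["tweets"]) >= min_event_size]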
Controversy identification
An event can be controversial if the public expresses dissenting opinions, usually associated with negative sentiments to it. The system filters out irrelevant events and noise from the established controversial events using the following metrics:
The burstiness of an event: To detect the burstiness of an event, the system detects the volume of tweets per time period, e.g., per day, for the entity in question. An event is flagged when the velocity of the volume increase exceeds a threshold.
Newsworthiness detection: The system counts the total number of unique verified news links in each cluster and logs that count as a newsworthiness metric.
Sentiment: For each cluster, its overall sentiment score is quantified by the mean of the sentiment scores among all tweets.
Candidate events are ranked based on these metrics, and high ranked events are considered controversial events.
Outline of the controversy detection algorithm:

    Input:  a stream of tweets S about company c
    Output: the set of controversial events for c

    Event detection:
        for each tweet t in S:
            v_t = TweetFeature(t)
            for each cluster C in the current event clusters:
                v_C = ClusterFeature(C)
                d_C = Distance(v_t, v_C)              # compute distance
            C* = the cluster with the smallest d_C    # find the closest cluster
            if d_C* <= theta:                         # theta is the merge threshold
                merge t into C*
            else:
                t forms a new singleton cluster
        candidate events = clusters with at least k tweets   # k is the minimum cluster size for an event

    Controversy identification:
        for each candidate event E:
            b_E = Burstiness(E)
            n_E = Newsworthiness(E)
            s_E = AVG of SentimentClassify(t) over tweets t in E   # event-level sentiment
            score_E = combined controversy score of (b_E, n_E, s_E)
        return the top-ranked candidate events        # the controversial events set
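A rough Python rendering of the identification step in the outline above; the log-scaled newsworthiness follows the text, but the way the three metrics are weighted and combined is an assumption, since the paper does not give an explicit formula.

# Hypothetical controversy-scoring sketch over candidate event clusters.
import math

def controversy_score(event, weights=(1.0, 1.0, 1.0)):
    # event (assumed structure): {"daily_counts": [...], "news_links": set(), "sentiments": [...]}
    counts = event["daily_counts"]
    burstiness = max((counts[i + 1] - counts[i] for i in range(len(counts) - 1)), default=0.0)
    newsworthiness = math.log(1 + len(event["news_links"]))
    avg_sentiment = sum(event["sentiments"]) / len(event["sentiments"])
    w_b, w_n, w_s = weights
    # Bursty, newsworthy events with negative average sentiment rank highest.
    return w_b * burstiness + w_n * newsworthiness - w_s * avg_sentiment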
Case Study - Starbucks Controversy
In this section, we provide a case study of our model on a Starbucks controversial event captured in the system. We validated the event against the Wikipedia page of Starbucks and the major news agencies' reports. After the event was detected, its impact was further assessed by linking it to market equity data.
On April 12th, 2018, an incident occurred in a Starbucks in Philadelphia, PA. Two African-American men were arrested by police officials inside that Starbucks. It was reported that the two were denied access to the restroom by the store staff because they did not make any purchase. While waiting at a table, they were told by the staff to leave as they were not making any purchase. They did not comply, and thus the store manager called the police and reported that they were trespassing. The two were arrested by the officials but released afterwards without any charges pressed. The scene of the arrest was posted on Twitter and quickly garnered public attention. The video was viewed more than three million times in a couple of days, and major local and national news agencies like CNN, NPR, and the New York Times followed the development of the story.
The public outrage originating from the social media universe swiftly triggered a series of chain reactions in the physical world. Protesters gathered inside and outside the Starbucks store to demand the manager be fired. Several days later, the CEO of Starbucks issued a public apology for the incident on an ABC program and stated that he would like to meet the men to show them compassion. To remedy the bad outcome of the event, Starbucks closed its 8,000 stores in the U.S. on May 29th for racial-bias training for its 175K employees. A financial settlement was also established between the two men and the Starbucks corporation. This event created a serious public relations crisis for Starbucks.
Figure FIGREF10 shows the event clusters for six days sampled between April 10th and April 20th. Given the difficulty in showing all of the tweets that were clustered, we use the volume of key POS tagged words (5Ws) detected in the cluster of tweets to approximate the event content. The keywords on the top of each bar reveal aspects of the event cluster. This controversial Starbucks event was captured in our system on April 13th, one day after the event occurred. Prior to the event, the discussion themes about Starbucks (clusters) on Twitter were more random and included topics such as Starbucks gift cards, baristas, and coffee, as shown on 04/11/2018. The size of the clusters and the total volume of tweets per day were comparably small. The first event cluster the system detected is associated with the keyword `black', where Twitter users mentioned `[...] arrested for being Black'. After the event, the volume of tweets per day surged to several times its previous level and kept climbing for about a week as the event was developing. The system clearly uncovers the event by being able to pinpoint the clustering keywords `black men', `philly', `CEO', `close', etc. The sentiment scores of the discussion in the clusters for each day are shown on the top part of Figure FIGREF10 . The sentiment score is in a range of -2 to +2, with -2 standing for very negative, 0 for neutral, and +2 for very positive. As the figure shows, Twitter users' attitude turned from neutral to negative after the event occurred. The quick turn in sentiment polarity serves as a measure of the event's controversy. The authenticity of the event is verified by validating the domains of the URLs quoted in the clustered tweets. All of the elements of this event indicate that a controversy, specifically a social controversy, has occurred.
We also did a qualitative study of the Starbucks (SBUX) stock movement during this event. Figure FIGREF12 shows the daily percentage change of SBUX and the NASDAQ index between April 11th and April 20th. SBUX did not follow the upward trend of the whole market before April 17th, and its change on April 20th, INLINEFORM0 , deviates markedly from historical norms. We collected the historical 52-week stock prices prior to this event and calculated the daily stock price change. The distribution of the daily price change over the previous 52 weeks is shown in Figure FIGREF13 , with a mean INLINEFORM1 and standard deviation INLINEFORM2 . The INLINEFORM3 drop is almost two standard deviations below the mean. Our observation is that, plausibly, the notable decline in Starbucks stock price was a negative aftereffect of the event and the resulting public relations crisis.
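The historical comparison described here can be reproduced with a few lines of pandas; the use of a pandas Series of closing prices is an assumption about the data format.

# Hypothetical sketch of the historical-baseline check on daily price changes.
import pandas as pd

def event_day_zscore(closing_prices: pd.Series, event_day_change: float) -> float:
    # closing_prices: daily closes for the 52 weeks before the event.
    daily_change = closing_prices.pct_change().dropna()
    mu, sigma = daily_change.mean(), daily_change.std()
    return (event_day_change - mu) / sigma  # roughly -2 for the April 20th drop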
Conclusions
We present the use of Twitter as a new data source to detect controversial events for business entities. Each tweet is represented by a vector comprising name entities and verbs mentioned in the raw tweet text. Events can be identified by grouping similar tweets in the vector space, the size and burstiness of the event, and the sentiment polarities. This system is a data-driven controversy monitoring tool that sifts through large volumes of Twitter data. It provides investors with data on key insights on social consciousness, which allows investors to make more informed investment decisions. The direction of our future work is to: 1) develop a quantitative measure on the event impact on the equity market; 2) identify the relevance of the events to entities' operations; 3) extract post-event mitigation actions from the entities. | We collected the historical 52 week stock prices prior to this event and calculated the daily stock price change. The distribution of the daily price change of the previous 52 weeks is Figure FIGREF13 with a mean INLINEFORM1 and standard deviation INLINEFORM2 . |
58f08d38bbcffb2dd9d660faa8026718d390d64b | 58f08d38bbcffb2dd9d660faa8026718d390d64b_0 | Q: How is sentiment polarity measured?
Text: Introduction
The financial performance of a corporation is correlated with its social responsibility, such as whether its products are environmentally friendly, its manufacturing safety procedures protect against accidents, or it uses child labor in its third-world factories. Consumers care about these factors when making purchasing decisions in supermarkets, and investors integrate environmental, social and governance factors, known as ESG, in their investment decision-making. It has been shown that corporations' financial results have a positive correlation with their sustainability business model and that the ESG investment methodology can help reduce portfolio risk and generate competitive returns. However, one barrier for ESG evaluation is the lack of a relatively complete and centralized information source. Currently, ESG analysts leverage financial reports to collect the necessary data for proper evaluation, such as greenhouse gas emissions or discrimination lawsuits, but this data is inconsistent and latent. In this study, we consider social media, a crowdsourced data feed, to be a new data source for this task.
Social media applications such as Twitter offer users a platform to share and disseminate almost any content about various events such as sports and music, as well as controversial events. The content produced through these platforms not only facilitates the spread of information but can also provide meaningful signals about the influence of the events. A large number of responses to an issue on Twitter could inform the public about the significance of an event, widen the scope of the event, and bring more public attention inside and outside the social media circle.
We define a controversial event for a business entity as a credible and newsworthy incident that has the potential to impact an entity in its financial performance and operation, for example, an incident caused by an employee or a representative of the entity that has the potential to hurt the public's trust in its brand. Such an incident can demonstrate a potential gap in its risk management framework and policy execution, and eventually hurt the interest and trust of its stakeholders.
Controversial events trigger a large cascade of discussion on social media platforms. The broad connectivity between people propagates their opinions into trending topics that could affect the company financially and operationally. In certain cases, the responsible entity can be forced to take actions, e.g., to recall its product, which can impose a large financial burden on the entity. For instance, in the Takata air bag scandal, the event was discussed widely on Twitter after the New York Times published a comprehensive article on its defective air bag products in 2014. Takata was forced to recall nearly 50 million air bags and filed for bankruptcy in June 2017.
To this end, we propose a controversial event detection system utilizing Twitter data. We focus on controversial events which are credible and newsworthy. Twitter data were collected on a given company and various attributes of each tweet were extracted. We verify the credibility of an event by validating that the URLs appearing in tweets come from credible news sources. We utilize tweet attributes to detect events specific to the given company and the sentiment of the event to measure the controversy. The relationship between a burst of an entity's controversial event and the entity's market performance data was qualitatively assessed in our case study, where we found its potential impact on the equity value.
Related Work
There have been a few studies on assessing the sustainability of entities. The UN Commission on Sustainable Development (CSD) published a list of about 140 indicators on various dimensions of sustainability BIBREF0 . In BIBREF1 , Singh et al. reviewed various methodologies, indicators, and indices on sustainability assessment, which includes environmental and social domains. All the data on which the assessments were conducted, as mentioned in their works, are processed datasets, some of them collected from company annual reports and publications, newspaper clips, and management interviews. They stated that the large number of indicators or indices raises the need for data collection. Our work uses social media data as a new alternative data source to complement the traditional data collection.
Event detection on social media has been a popular research topic for years. Reuters Tracer BIBREF2 is reported as an application built for journalists to detect news leads in Twitter before the news becomes known to the public. Petrovic et al. BIBREF3 presented a locality-sensitive hashing based first story detection algorithm with a new variance reduction strategy to improve the performance. In BIBREF4 , the signal of a tweet word is built with wavelet analysis and an event is detected by clustering words with similar burst patterns in their signals. BIBREF5 describes a detection and analysis system named TEDAS which concentrates on Crime and Disaster related Events (CDE). TEDAS classifies whether a tweet is a CDE tweet, predicts its geo-location if missing, and ranks and returns important tweets when users query the system. TEDAS treats a single tweet as an event if the tweet qualifies, while our definition of an event is different: an event is a group of tweets discussing the same theme.
Controversy Detection in Social Media
In this section, we describe the main components of our controversy detection system.
Data collection
The system uses Twitter's filtered streaming API to collect relevant tweet data. The data collection pipeline accepts a comma-separated list of phrases as filtering parameters, which the API uses to determine which tweets will be retained from the stream. Once the system receives data from the API, it then separates postings by company and runs the downstream process on the separated data streams individually.
Feature engineering
The data collection pipeline collects tweet postings for a given entity. For each incoming posting, the system also stores the following attributes: posting_id, creation_time, text, language, source, URLs, and hashtags.
The system parses the text attribute of each tweet. Part-of-speech (POS) tagging and named entity recognition (NER) algorithms are applied to each tweet, and terms that are tagged as proper nouns, verbs, and entities are stored. If two proper nouns are next to each other, the system merges them as one proper noun phrase. Entities such as person names, organizations, and locations from tweets are the key elements in describing an event and distinguishing it from other events, and are often used by news professionals to describe the complete story of an event. The verbs from POS tagging mainly represent what and why information, while NER helps to identify where, when, and who information. Together they capture the major aspects of an event, namely who, what, where, when, and why (5W). In addition, the sentiment of each tweet is assessed.
The system crawls the URLs in a posting and verifies whether the link comes from one or more credible news sources. More specifically, the system may consider the following to be examples of credible news sources: 1) a news outlet that has, and consistently applies, journalistic standards in its reporting or 2) an authoritative government agency not acting in a political capacity. Determining whether a source is a credible news source depends on the context of the event.
Based on all the extracted features, the system can build a tweet vector, which includes the following features: tweet id, creation time, source, hashtags, entity/proper nouns, verbs, sentiment, and news links.
Event detection
When a new tweet is received in the data pipeline, it either forms a new cluster or it will be added to an existing cluster. A new tweet will be added to an existing cluster if it is sufficiently similar to one of the existing clusters based on its distance to the cluster average vector. If more than one cluster is applicable, the cluster that has the highest similarity to the new tweet is picked. If a new tweet is not added to any existing clusters, it would form a new cluster. A candidate event is a cluster that has at least five tweets. Algorithm SECREF6 summarizes our event detection method and the following controversy identification method.
Controversy identification
An event can be controversial if the public expresses dissenting opinions, usually associated with negative sentiments to it. The system filters out irrelevant events and noise from the established controversial events using the following metrics:
The burstiness of an event: To detect the burstiness of an event, the system detects the volume of tweets per time period, e.g., per day, for the entity in question. An event is flagged when the velocity of the volume increase exceeds a threshold.
Newsworthiness detection: The system counts the total number of unique verified news links in each cluster and logs that count as a newsworthiness metric.
Sentiment: For each cluster, its overall sentiment score is quantified by the mean of the sentiment scores among all tweets.
Candidate events are ranked based on these metrics, and high ranked events are considered controversial events.
Outline of the controversy detection algorithm:

    Input:  a stream of tweets S about company c
    Output: the set of controversial events for c

    Event detection:
        for each tweet t in S:
            v_t = TweetFeature(t)
            for each cluster C in the current event clusters:
                v_C = ClusterFeature(C)
                d_C = Distance(v_t, v_C)              # compute distance
            C* = the cluster with the smallest d_C    # find the closest cluster
            if d_C* <= theta:                         # theta is the merge threshold
                merge t into C*
            else:
                t forms a new singleton cluster
        candidate events = clusters with at least k tweets   # k is the minimum cluster size for an event

    Controversy identification:
        for each candidate event E:
            b_E = Burstiness(E)
            n_E = Newsworthiness(E)
            s_E = AVG of SentimentClassify(t) over tweets t in E   # event-level sentiment
            score_E = combined controversy score of (b_E, n_E, s_E)
        return the top-ranked candidate events        # the controversial events set
Case Study - Starbucks Controversy
In this section, we provide a case study of our model on a Starbucks controversial event captured in the system. We validated the event against the Wikipedia page of Starbucks and the major news agencies' reports. After the event was detected, its impact was further assessed by linking it to market equity data.
On April 12th, 2018, an incident occurred in a Starbucks in Philadelphia, PA. Two African-American men were arrested by police officials inside that Starbucks. It was reported that the two were denied access to the restroom by the store staff because they did not make any purchase. While waiting at a table, they were told by the staff to leave as they were not making any purchase. They did not comply, and thus the store manager called the police and reported that they were trespassing. The two were arrested by the officials but released afterwards without any charges pressed. The scene of the arrest was posted on Twitter and quickly garnered public attention. The video was viewed more than three million times in a couple of days, and major local and national news agencies like CNN, NPR, and the New York Times followed the development of the story.
The public outrage originating from the social media universe swiftly triggered a series of chain reactions in the physical world. Protesters gathered inside and outside the Starbucks store to demand the manager be fired. Several days later, the CEO of Starbucks issued a public apology for the incident on an ABC program and stated that he would like to meet the men to show them compassion. To remedy the bad outcome of the event, Starbucks closed its 8,000 stores in the U.S. on May 29th for racial-bias training for its 175K employees. A financial settlement was also established between the two men and the Starbucks corporation. This event created a serious public relations crisis for Starbucks.
Figure FIGREF10 shows the event clusters for six days sampled between April 10th and April 20th. Given the difficulty in showing all of the tweets that were clustered, we use the volume of key POS tagged words (5Ws) detected in the cluster of tweets to approximate the event content. The keywords on the top of each bar reveal aspects of the event cluster. This controversial Starbucks event was captured in our system on April 13th, one day after the event occurred. Prior to the event, the discussion themes about Starbucks (clusters) on Twitter were more random and included topics such as Starbucks gift cards, baristas, and coffee, as shown on 04/11/2018. The size of the clusters and the total volume of tweets per day were comparably small. The first event cluster the system detected is associated with the keyword `black', where Twitter users mentioned `[...] arrested for being Black'. After the event, the volume of tweets per day surged to several times its previous level and kept climbing for about a week as the event was developing. The system clearly uncovers the event by being able to pinpoint the clustering keywords `black men', `philly', `CEO', `close', etc. The sentiment scores of the discussion in the clusters for each day are shown on the top part of Figure FIGREF10 . The sentiment score is in a range of -2 to +2, with -2 standing for very negative, 0 for neutral, and +2 for very positive. As the figure shows, Twitter users' attitude turned from neutral to negative after the event occurred. The quick turn in sentiment polarity serves as a measure of the event's controversy. The authenticity of the event is verified by validating the domains of the URLs quoted in the clustered tweets. All of the elements of this event indicate that a controversy, specifically a social controversy, has occurred.
We also did a qualitative study of the Starbucks (SBUX) stock movement during this event. Figure FIGREF12 shows the daily percentage change of SBUX and the NASDAQ index between April 11th and April 20th. SBUX did not follow the upward trend of the whole market before April 17th, and its change on April 20th, INLINEFORM0 , deviates markedly from historical norms. We collected the historical 52-week stock prices prior to this event and calculated the daily stock price change. The distribution of the daily price change over the previous 52 weeks is shown in Figure FIGREF13 , with a mean INLINEFORM1 and standard deviation INLINEFORM2 . The INLINEFORM3 drop is almost two standard deviations below the mean. Our observation is that, plausibly, the notable decline in Starbucks stock price was a negative aftereffect of the event and the resulting public relations crisis.
Conclusions
We present the use of Twitter as a new data source to detect controversial events for business entities. Each tweet is represented by a vector comprising name entities and verbs mentioned in the raw tweet text. Events can be identified by grouping similar tweets in the vector space, the size and burstiness of the event, and the sentiment polarities. This system is a data-driven controversy monitoring tool that sifts through large volumes of Twitter data. It provides investors with data on key insights on social consciousness, which allows investors to make more informed investment decisions. The direction of our future work is to: 1) develop a quantitative measure on the event impact on the equity market; 2) identify the relevance of the events to entities' operations; 3) extract post-event mitigation actions from the entities. | For each cluster, its overall sentiment score is quantified by the mean of the sentiment scores among all tweets |
89e1e0dc5d15a05f8740f471e1cb3ddd296b8942 | 89e1e0dc5d15a05f8740f471e1cb3ddd296b8942_0 | Q: Which part of the joke is more important in humor?
Text: Introduction
Recent advances in natural language processing and neural network architecture have allowed for widespread application of these methods in Text Summarization BIBREF0, Natural Language Generation BIBREF1, and Text Classification BIBREF2. Such advances have enabled scientists to study common language practices. One such area, humor, has garnered focus in classification BIBREF3, BIBREF4, generation BIBREF5, BIBREF6, and in social media BIBREF7.
The next question, then, is: what makes a joke humorous? Although humor is a universal construct, there is wide variation in what each individual may find humorous. We attempt to focus on a subset of the population where we can quantitatively measure reactions: the popular Reddit r/Jokes thread. This forum is highly popular - with tens of thousands of jokes being posted monthly and over 16 million members. Although larger joke datasets exist, the r/Jokes thread is unparalleled in the number of rated jokes it contains. To the best of our knowledge there is no comparable source of rated jokes in any other language. These Reddit posts consist of the body of the joke, the punchline, and the number of reactions or upvotes. Although this type of humor may only be most enjoyable to a subset of the population, it is an effective way to measure responses to jokes in a large group setting.
What enables us to perform such an analysis are the recent improvements in neural network architecture for natural language processing. These breakthroughs started with the Convolutional Neural Network BIBREF8 and have recently included the inception BIBREF9 and progress of the Attention mechanism BIBREF10, BIBREF11, and the Transformer architecture BIBREF12.
Related Work
In the related work of joke identification, we find a myriad of methods employed over the years: statistical and N-gram analysis BIBREF13, Regression Trees BIBREF14, Word2Vec combined with K-NN Human Centric Features BIBREF15, and Convolutional Neural Networks BIBREF4.
This previous research has gone into many settings where humor takes place. BIBREF4 studied audience laughter compared to textual transcripts in order to identify jokes in conversation, while much work has also gone into using and creating datasets like the Pun of the Day BIBREF15, 16000 One-liners BIBREF16, and even Ted Talks BIBREF4.
Data
We gathered jokes from a variety of sources, each covering a different type of humor. These datasets include jokes of multiple sentences (the Short Jokes dataset), jokes with only one sentence (the Puns dataset), and more mixed jokes (the Reddit dataset). We have made our code and datasets open source for others to use.
Data ::: Reddit
Our Reddit data was gathered using Reddit's public API, collecting the most recent jokes. Every time the scraper ran, it also updated the upvote score of the previously gathered jokes. This data collection occurred every hour through the months of March and April 2019. Since the data was already split into body and punchline sections from Reddit, we created separate datasets containing the body of the joke exclusively and the punchline of the joke exclusively. Additionally, we created a dataset that combined the body and punchline together.
Some sample jokes are shown in Table 1, above. The distribution of joke scores varies wildly, ranging from 0 to 136,354 upvotes. We found that there is a major jump between the 0-200 upvote range and the 200 range and onwards, with only 6% of jokes scoring between 200-20,000. We used this natural divide as the cutoff to decide what qualified as a funny joke, giving us 13884 not-funny jokes and 2025 funny jokes.
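For illustration, a minimal sketch of applying this 200-upvote cutoff to the scraped posts is shown below; the record field names (body, punchline, score) are assumptions rather than names taken from the released code.

```python
def label_jokes(posts, cutoff=200):
    """Binarize scraped jokes: score >= cutoff -> funny (1), otherwise not funny (0)."""
    labeled = []
    for post in posts:
        labeled.append({
            "body": post["body"],
            "punchline": post["punchline"],
            "full": post["body"] + " " + post["punchline"],
            "label": 1 if post["score"] >= cutoff else 0,
        })
    return labeled

# label_jokes([{"body": "I told my friend a joke about time travel...",
#               "punchline": "He didn't get it yet.", "score": 512}])
```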
Data ::: Short Jokes
The Short Jokes dataset, found on Kaggle, contains 231,657 short jokes scraped from various joke websites with lengths ranging from 10 to 200 characters. The previous work by BIBREF4 combined this dataset with the WMT162 English news crawl. Although their exact combined dataset is not publicly available, we used the same method and news crawl source to create a similar dataset. We built this new Short Jokes dataset by extracting sentences from the WMT162 news crawl that had the same distribution of words and characters as the jokes in the Short Jokes dataset on Kaggle. This was in order to match the two halves (jokes and non-jokes) as closely as possible.
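The matching procedure is not spelled out above; one simple way to approximate it is to pair each joke with an unused news sentence of (nearly) the same word count, as in the sketch below. The function and its fall-back to the nearest available length are illustrative assumptions, not the authors' exact method.

```python
import random
from collections import defaultdict

def length_matched_negatives(jokes, news_sentences, seed=0):
    """Pick, for each joke, an unused news sentence with the same word count,
    falling back to the closest available length, so the non-joke half roughly
    mirrors the length distribution of the joke half."""
    random.seed(seed)
    pool = defaultdict(list)
    for sent in news_sentences:
        pool[len(sent.split())].append(sent)
    negatives = []
    for joke in jokes:
        target = len(joke.split())
        length = min((l for l in pool if pool[l]), key=lambda l: abs(l - target))
        negatives.append(pool[length].pop(random.randrange(len(pool[length]))))
    return negatives
```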
Data ::: Pun of the Day
This dataset was scraped by BIBREF15 and contains 16001 puns and 16002 not-punny sentences. We gratefully acknowledge their help in putting together and giving us use of this dataset. These puns were constructed from the Pun of the Day website while the negative samples were gathered from news websites.
Methods
In this section we will discuss the methods and model used in our experiments.
Methods ::: Our Model
We have chosen to use the pre-trained BERT BIBREF17 as the base of our model. BERT is a multi-layer bidirectional Transformer encoder and was initially trained on a 3.3 billion word corpus. The model can be fine-tuned with an additional output layer for a multitude of other tasks. We chose to use this Transformer-based model as our initial platform because of its success at recognizing and attending to the most important words in both sentence and paragraph structures.
In Figure 1, originally designed by BIBREF12, we see the architecture of a Transformer model: the initial input goes up through an encoder, which has two parts: a multi-headed self-attention layer followed by a feed-forward network. It then outputs the information into the decoder, which includes the previously mentioned layers plus an additional masked attention step. Afterwards, it is transformed through a softmax into the output. This model's success is in large part due to the Transformer's self-attention layers.
We chose a learning rate of 2e-05 and a max sequence length of 128. We trained the model for a maximum of 7 epochs, creating checkpoints along the way.
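A minimal sketch of a fine-tuning setup with these hyperparameters, written against the Hugging Face transformers interface, is shown below. The paper does not state which implementation it used, so the library choice and training-loop details are assumptions.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)   # learning rate from above
model.train()

def train_step(texts, labels):
    """One gradient step on a batch of joke strings and their 0/1 humor labels."""
    enc = tokenizer(texts, padding="max_length", truncation=True,
                    max_length=128, return_tensors="pt")      # max sequence length 128
    out = model(input_ids=enc["input_ids"],
                attention_mask=enc["attention_mask"],
                labels=torch.tensor(labels))
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()

# Looping train_step over mini-batches for up to 7 epochs, saving a checkpoint
# after each epoch, mirrors the schedule described above.
```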
Methods ::: Training
Since our data was unbalanced we decided to upsample the humorous jokes in training. We split the dataset into a 75/25 percent split, stratifying with the labels. We then upsampled the minority class in the training set until it reached an even 50 percent. This helped our model learn in a more balanced way despite the uneven amount of non-humorous jokes. Our validation and test sets were composed of the remaining 25%, downsampling the data into a 50/50 class split so that the accuracy metric could be balanced and easily understood.
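A sketch of this split-and-upsample step is given below, assuming the examples sit in a pandas DataFrame with a 0/1 label column; the subsequent 50/50 downsampling of the held-out portion for validation and test would be applied afterwards.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

def split_and_balance(df, label_col="label", seed=0):
    """75/25 stratified split, then upsample the minority class of the
    training portion until the two classes are balanced."""
    train, heldout = train_test_split(df, test_size=0.25,
                                      stratify=df[label_col], random_state=seed)
    maj_label = train[label_col].mode().iloc[0]
    majority = train[train[label_col] == maj_label]
    minority = train[train[label_col] != maj_label]
    minority_up = resample(minority, replace=True,
                           n_samples=len(majority), random_state=seed)
    balanced = pd.concat([majority, minority_up]).sample(frac=1, random_state=seed)
    return balanced, heldout
```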
To show how our model compares to the previous work done, we also test on the Short Joke and Pun datasets mentioned in the Data section. For these datasets we will use the metrics (Accuracy, Precision, Recall, and F1 Score) designated in BIBREF4 as a comparison. We use the same model format as previously mentioned, trained on the Reddit dataset. We then immediately apply the model to predict on the Short Joke and Puns dataset, without further fine-tuning, in order to compare the model. However, because both the Puns and Short Joke datasets have large and balanced labels, we do so without the upsampling and downsampling steps used for the Reddit dataset.
Experiments
In this section we will introduce the baselines and models used in our experiments.
Experiments ::: Baselines
In order to have fair baselines, we used the following two models: a CNN with Highway Layers as described by BIBREF4 and developed by BIBREF18, and human performance from a study on Amazon's Mechanical Turk. We wanted to have the general population rate these same jokes, thus showing the difference between a general audience and a specific subset of the population, in particular, Reddit r/Jokes users. Since the Reddit users obviously found these jokes humorous, this experiment would show whether or not a more general population agreed with those labels.
We had 199 unique participants rate an average of 30 jokes each with the prompt "do you find this joke humorous?" If the participant was evaluating a sample from a body or punchline only dataset we prefaced our question with a sentence explaining that context, for example: "Below is the punchline of a joke. Based on this punchline, do you think you would find this joke humorous?" Taking these labels, we used the most frequently chosen tag from a majority vote to calculate the percentages found in the Human section of Table 2.
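A small sketch of the majority-vote collapse is shown below; how ties were broken is not stated above, so the tie-handling default is an assumption.

```python
from collections import Counter

def majority_label(votes, tie_label="humorous"):
    """Collapse one joke's Mechanical Turk ratings into a single tag by
    majority vote; the tie_label fallback is only an illustrative choice."""
    top = Counter(votes).most_common()
    if len(top) > 1 and top[0][1] == top[1][1]:
        return tie_label
    return top[0][0]

# majority_label(["humorous", "not humorous", "humorous"]) -> "humorous"
```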
Experiments ::: Results
In Table 2, we see the results of our experiment with the Reddit dataset. We ran our models on the body of the joke exclusively, the punchline exclusively, and both parts together (labeled full in our table). On the full dataset we found that the Transformer achieved an accuracy of 72.4 percent on the hold out test set, while the CNN was in the high 60's. We also note that the general human classification found 66.3% of the jokes to be humorous.
In order to understand what may be happening in the model, we used the body and punchline only datasets to see what part of the joke was most important for humor. We found that all of the models, including humans, relied more on the punchline of the joke in their predictions (Table 2). Thus, it seems that although both parts of the joke are needed for it to be humorous, the punchline carries higher weight than the body. We hypothesize that this is due to the variations found in the different joke bodies: some take paragraphs to set up the joke, while others are less than a sentence.
Our experiment with the Short Jokes dataset found the Transformer model's accuracy and F1 score to be 0.986. This was a jump of 8 percent from the most recent work done with CNNs (Table 4).
The results on the Pun of the Day dataset are shown in Table 3 above. It shows an accuracy of 93 percent, close to 4 percent greater accuracy than the best CNN model proposed. Although the CNN model used a variety of techniques to extract the best features from the dataset, we see that the self-attention layers found even greater success in pulling out the crucial features.
Discussion
Considering that a joke's humor value is subjective, the results on the Reddit dataset are surprising. The model has used the context of the words to determine, with high probability, what an average Reddit r/Jokes viewer will find humorous. When we look at the general population's opinion as well, we find a stark difference between their preferences and those of the Reddit users (Table 2). We would hypothesize that our model is learning the specific type of humor enjoyed by those who use the Reddit r/Jokes forum. This would suggest that humor can be learned for a specific subset of the population.
The model's high accuracy and F1 scores on the Short Jokes and Pun of the Day dataset show the effectiveness of the model for transfer learning. This result is not terribly surprising. If the model can figure out which jokes are funny, it seems to be an easier task to tell when something isn't a joke at all.
Although these results have high potential, defining the absolute truth value for a joke's humor is a challenging, if not impossible task. However, these results indicate that, at least for a subset of the population, we can find and identify jokes that will be most humorous to them.
Conclusion
In this paper, we showed a method to define the measure of a joke's humor. We explored the idea of using machine learning tools, specifically a Transformer neural network architecture, to discern what jokes are funny and what jokes are not. This proposed model does not require any human interaction to determine, aside from the text of the joke itself, which jokes are humorous. This architecture can predict the level of humor for a specific audience to a higher degree than a general audience consensus. We also showed that this model has increased capability in joke identification as a result, with higher accuracy and F1 scores than previous work on this topic. | the punchline of the joke |
2815bac42db32d8f988b380fed997af31601f129 | 2815bac42db32d8f988b380fed997af31601f129_0 | Q: What is improvement in accuracy for short Jokes in relation other types of jokes?
In this paper, we showed a method to define the measure of a joke's humor. We explored the idea of using machine learning tools, specifically a Transformer neural network architecture, to discern what jokes are funny and what jokes are not. This proposed model does not require any human interaction to determine, aside from the text of the joke itself, which jokes are humorous. This architecture can predict the level of humor for a specific audience to a higher degree than a general audience consensus. We also showed that this model has increased capability in joke identification as a result, with higher accuracy and F1 scores than previous work on this topic. | It had the highest accuracy comparing to all datasets 0.986% and It had the highest improvement comparing to previous methods on the same dataset by 8% |
de03e8cc1ceaf2108383114460219bf46e00423c | de03e8cc1ceaf2108383114460219bf46e00423c_0 | Q: What kind of humor they have evaluated?
In this paper, we showed a method to define the measure of a joke's humor. We explored the idea of using machine learning tools, specifically a Transformer neural network architecture, to discern what jokes are funny and what jokes are not. This proposed model does not require any human interaction to determine, aside from the text of the joke itself, which jokes are humorous. This architecture can predict the level of humor for a specific audience to a higher degree than a general audience consensus. We also showed that this model has increased capability in joke identification as a result, with higher accuracy and F1 scores than previous work on this topic. | a subset of the population where we can quantitatively measure reactions: the popular Reddit r/Jokes thread, These Reddit posts consist of the body of the joke, the punchline, and the number of reactions or upvotes. |
8a276dfe748f07e810b3944f4f324eaf27e4a52c | 8a276dfe748f07e810b3944f4f324eaf27e4a52c_0 | Q: How they evaluate if joke is humorous or not?
In this paper, we showed a method to define the measure of a joke's humor. We explored the idea of using machine learning tools, specifically a Transformer neural network architecture, to discern what jokes are funny and what jokes are not. This proposed model does not require any human interaction to determine, aside from the text of the joke itself, which jokes are humorous. This architecture can predict the level of humor for a specific audience to a higher degree than a general audience consensus. We also showed that this model has increased capability in joke identification as a result, with higher accuracy and F1 scores than previous work on this topic. | The distribution of joke scores varies wildly, ranging from 0 to 136,354 upvotes. We found that there is a major jump between the 0-200 upvote range and the 200 range and onwards, with only 6% of jokes scoring between 200-20,000. We used this natural divide as the cutoff to decide what qualified as a funny joke, giving us 13884 not-funny jokes and 2025 funny jokes. |
0716b481b78d80b012bca17c897c62efbe7f3731 | 0716b481b78d80b012bca17c897c62efbe7f3731_0 | Q: Do they report results only on English data?
Text: Introduction
The effectiveness and ubiquity of pretrained sentence embeddings for natural language understanding has grown dramatically in recent years. Recent sentence encoders like OpenAI's Generative Pretrained Transformer BIBREF3 and BERT BIBREF2 achieve the state of the art on the GLUE benchmark BIBREF4 . Among the GLUE tasks, these state-of-the-art systems make their greatest gains on the acceptability task with the Corpus of Linguistic Acceptability BIBREF0 . CoLA contains example sentences from linguistics publications labeled by experts for grammatical acceptability, and written to show subtle grammatical features. Because minimal syntactic differences can separate acceptable sentences from unacceptable ones (What did Bo write a book about? / *What was a book about written by Bo?), and acceptability classifiers are more reliable when trained on GPT and BERT than on recurrent models, it stands to reason that GPT and BERT have better implicit knowledge of syntactic features relevant to acceptability.
Our goal in this paper is to develop an evaluation dataset that can locate which syntactic features a model successfully learns by identifying the syntactic domains of CoLA in which it performs the best. Using this evaluation set, we compare the syntactic knowledge of GPT and BERT in detail, and investigate the strengths of these models over the baseline BiLSTM model published by warstadt2018neural. The analysis set includes expert annotations labeling the entire CoLA development set for the presence of 63 fine-grained syntactic features.
We identify many specific syntactic features that make sentences harder to classify, and many that have little effect. For instance, sentences involving unusual or marked argument structures are no harder than the average sentence, while sentences with long distance dependencies are hard to learn. We also find features of sentences that accentuate or minimize the differences between models. Specifically, the transformer models seem to learn long-distance dependencies much better than the recurrent model, yet have no advantage on sentences with morphological violations.
Analysis Set
We introduce a grammatically annotated version of the entire CoLA development set to facilitate detailed error analysis of acceptability classifiers. These 1043 sentences are expert-labeled for the presence of 63 minor grammatical features organized into 15 major features. Each minor feature belongs to a single major feature. A sentence belongs to a major feature if it belongs to one or more of the relevant minor features. The Appendix includes descriptions of each feature along with examples and the criteria used for annotation.
The 63 minor features and 15 major features are illustrated in Table TABREF5 . Considering minor features, an average of 4.31 features is present per sentence (SD=2.59). The average feature is present in 71.3 sentences (SD=54.7). Turning to major features, the average sentence belongs to 3.22 major features (SD=1.66), and the average major feature is present in 224 sentences (SD=112). Every sentence is labeled with at least one feature.
Annotation
The sentences were annotated manually by one of the authors, who is a PhD student with extensive training in formal linguistics. The features were developed in a trial stage, in which the annotator performed a similar annotation with different annotation schema for several hundred sentences from CoLA not belonging to the development set.
Feature Descriptions
Here we briefly summarize the feature set in order of the major features. Many of these constructions are well-studied in syntax, and further background can be found in textbooks such as adger2003core and sportiche2013introduction.
This major feature contains only one minor feature, simple, including sentences with a syntactically simplex subject and predicate.
These three features correspond to predicative phrases, including copular constructions, small clauses (I saw Bo jump), and resultatives/depictives (Bo wiped the table clean).
These six features mark various kinds of optional modifiers. This includes modifiers of NPs (The boy with blue eyes gasped) or VPs (The cat meowed all morning), and temporal (Bo swam yesterday) or locative (Bo jumped on the bed).
These five features identify syntactically selected arguments, differentiating, for example, obliques (I gave a book to Bo), PP arguments of NPs and VPs (Bo voted for Jones), and expletives (It seems that Bo left).
These four features mark VPs with unusual argument structures, including added arguments (I baked Bo a cake) or dropped arguments (Bo knows), and the passive (I was applauded).
This contains only one feature for imperative clauses (Stop it!).
These are two minor features, one for bound reflexives (Bo loves himself), and one for other bound pronouns (Bo thinks he won).
These five features apply to sentences with question-like properties. They mark whether the interrogative is an embedded clause (I know who you are), a matrix clause (Who are you?), or a relative clause (Bo saw the guy who left); whether it contains an island out of which extraction is unacceptable (*What was a picture of hanging on the wall?); or whether there is pied-piping or a multi-word wh-expression (With whom did you eat?).
These six features apply to various complement clauses (CPs), including subject CPs (That Bo won is odd); CP arguments of VPs or NPs/APs (The fact that Bo won); CPs missing a complementizer (I think Bo's crazy); or non-finite CPs (This is ready for you to eat).
These four minor features mark the presence of auxiliary or modal verbs (I can win), negation, or “pseudo-auxiliaries” (I have to win).
These five features mark various infinitival embedded VPs, including control VPs (Bo wants to win); raising VPs (Bo seemed to fly); VP arguments of NPs or APs (Bo is eager to eat); and VPs with extraction (e.g. This is easy to read ts ).
These seven features mark complex NPs and APs, including ones with PP arguments (Bo is fond of Mo), or CP/VP arguments; noun-noun compounds (Bo ate mud pie); modified NPs, and NPs derived from verbs (Baking is fun).
These seven features mark various unrelated syntactic constructions, including dislocated phrases (The boy left who was here earlier); movement related to focus or information structure (This I've gotta see this); coordination, subordinate clauses, and ellipsis (I can't); or sentence-level adjuncts (Apparently, it's raining).
These four features mark various determiners, including quantifiers, partitives (two of the boys), negative polarity items (I *do/don't have any pie), and comparative constructions.
These three features apply only to unacceptable sentences, and only ones which are ungrammatical due to a semantic or morphological violation, or the presence or absence of a single salient word.
Correlations
We wish to emphasize that these features are overlapping and in many cases are correlated, thus not all results from using this analysis set will be independent. We analyzed the pairwise Matthews Correlation Coefficient BIBREF17 of the 63 minor features (giving 1953 pairs), and of the 15 major features (giving 105 pairs). MCC is a special case of Pearson's r for Boolean variables. These results are summarized in Table TABREF25 . Regarding the minor features, 60 pairs had a correlation of 0.2 or greater, 17 had a correlation of 0.4 or greater, and 6 had a correlation of 0.6 or greater. None had an anti-correlation of greater magnitude than -0.17. Turning to the major features, 6 pairs had a correlation of 0.2 or greater, and 2 had an anti-correlation of greater magnitude than -0.2.
We can see at least three reasons for these observed correlations. First, some correlations can be attributed to overlapping feature definitions. For instance, expletive arguments (e.g. There are birds singing) are, by definition, non-canonical arguments, and thus are a subset of add arg. However, some added arguments, such as benefactives (Bo baked Mo a cake), are not expletives. Second, some correlations can be attributed to grammatical properties of the relevant constructions. For instance, question and aux are correlated because main-clause questions in English require subject-aux inversion and in many cases the insertion of auxiliary do (Do lions meow?). Third, some correlations may be a consequence of the sources sampled in CoLA and the phenomena they focus on. For instance, the unusually high correlation of Emb-Q and ellipsis/anaphor can be attributed to BIBREF18 , which is an article about the sluicing construction involving ellipsis of an embedded interrogative (e.g. I saw someone, but I don't know who).
Finally, two strongest anti-correlations between major features are between simple and the two features related to argument structure, argument types and arg altern. This follows from the definition of simple, which excludes any sentence containing a large number or unusual configuration of arguments.
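For illustration, the pairwise feature correlations reported above can be computed as in the sketch below. This is not the code used for the paper; it assumes the annotations are available as 0/1 vectors over the 1043 development-set sentences, and the feature names shown are placeholders.

from itertools import combinations

import numpy as np
from sklearn.metrics import matthews_corrcoef

def pairwise_feature_mcc(features):
    # features: dict mapping a feature name to a 0/1 vector over all sentences
    return {(a, b): matthews_corrcoef(features[a], features[b])
            for a, b in combinations(sorted(features), 2)}

# Toy example with random annotations for three placeholder minor features.
rng = np.random.default_rng(0)
toy = {name: rng.integers(0, 2, size=1043) for name in ["expletive", "add_arg", "emb_q"]}
for pair, mcc in pairwise_feature_mcc(toy).items():
    print(pair, round(mcc, 3))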
Models Evaluated
We train MLP acceptability classifiers for CoLA on top of three sentence encoders: (1) the CoLA baseline encoder with ELMo-style embeddings, (2) OpenAI GPT, and (3) BERT. We use publicly available sentence encoders with pretrained weights.
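As a minimal sketch of this setup, an MLP acceptability classifier over pooled sentence embeddings could look as follows. This is not the authors' training code: the hidden size, dropout, optimizer, and the random tensors standing in for real encoder outputs are all assumptions made for illustration.

import torch
import torch.nn as nn

class AcceptabilityMLP(nn.Module):
    def __init__(self, embed_dim, hidden_dim=512, dropout=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.Tanh(),
            nn.Dropout(dropout),
            nn.Linear(hidden_dim, 2),  # unacceptable (0) vs. acceptable (1)
        )

    def forward(self, sentence_embeddings):
        return self.net(sentence_embeddings)

# One toy training step; real inputs would be pooled encoder representations.
model = AcceptabilityMLP(embed_dim=768)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

embeddings = torch.randn(32, 768)    # stand-in for encoder outputs
labels = torch.randint(0, 2, (32,))  # 1 = acceptable
optimizer.zero_grad()
loss = loss_fn(model(embeddings), labels)
loss.backward()
optimizer.step()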
Overall CoLA Results
The overall performance of the three sentence encoders is shown in Table TABREF33 . Performance on CoLA is measured using MCC BIBREF14 . We present the best single restart for each encoder, the mean over restarts for an encoder, and the result of ensembling the restarts for a given encoder, i.e. taking the majority classification for a given sentence, or the majority label of acceptable if tied. For BERT results, we exclude 5 out of the 20 restarts because they were degenerate (MCC=0).
Across the board, BERT outperforms GPT, which outperforms the CoLA baseline. However, BERT and GPT are much closer in performance than they are to the CoLA baseline. While ensemble performance exceeded the average for BERT and GPT, it did not outperform the best single model.
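The restart-ensembling scheme described above (majority vote per sentence, defaulting to the acceptable label on ties) can be sketched as follows; the random arrays are stand-ins for real model predictions and gold labels.

import numpy as np
from sklearn.metrics import matthews_corrcoef

def ensemble_predictions(restart_preds):
    # restart_preds: (n_restarts, n_sentences) array of 0/1 predictions
    votes = restart_preds.sum(axis=0)
    n_restarts = restart_preds.shape[0]
    # Ties and majorities of 1s map to 1 (acceptable); strict majorities of 0s map to 0.
    return (votes * 2 >= n_restarts).astype(int)

rng = np.random.default_rng(1)
restart_preds = rng.integers(0, 2, size=(20, 1043))
gold = rng.integers(0, 2, size=1043)

print("ensemble MCC:", matthews_corrcoef(gold, ensemble_predictions(restart_preds)))
print("mean single-restart MCC:",
      np.mean([matthews_corrcoef(gold, p) for p in restart_preds]))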
Analysis Set Results
The results for the major features and minor features are shown in Figures FIGREF26 and FIGREF35 , respectively. For each feature, we measure the MCC of the sentences including that feature. We plot the mean of these results across the different restarts for each model, and error bars mark the mean ±1 standard deviation. For the Violations features, MCC is technically undefined because these features only contain unacceptable sentences. We report MCC in these cases by including for each feature a single acceptable example that is correctly classified by all models.
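A sketch of this per-feature evaluation, including the workaround for the single-class Violations features, is given below. Variable names are illustrative rather than taken from the authors' code.

import numpy as np
from sklearn.metrics import matthews_corrcoef

def feature_mcc(gold, pred, feature_mask):
    g = np.asarray(gold)[feature_mask]
    p = np.asarray(pred)[feature_mask]
    if g.min() == g.max():   # single-class subset, e.g. a Violations feature
        g = np.append(g, 1)  # add one acceptable sentence...
        p = np.append(p, 1)  # ...that the model classifies correctly
    return matthews_corrcoef(g, p)

# Toy example: a feature whose sentences are all unacceptable.
gold = np.array([1, 0, 0, 1, 0, 1])
pred = np.array([1, 0, 1, 1, 0, 0])
violations_mask = np.array([False, True, True, False, True, False])
print(round(feature_mcc(gold, pred, violations_mask), 3))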
Comparison across features reveals that the presence of certain features has a large effect on performance, and we comment on some overall patterns below. Within a given feature, the effect of model type is overwhelmingly stable, and resembles the overall difference in performance. However, we observe several interactions, i.e. specific features where the relative performance of models does not track their overall relative performance.
Among the major features (Figure FIGREF26 ), performance is universally highest on the simple sentences, and is higher than each model's overall performance. Though these sentences are simple, we notice that the proportion of ungrammatical ones is on par with the entire dataset. Otherwise we find that a model's performance on sentences of a given feature is on par with or lower than its overall performance, reflecting the fact that features mark the presence of unusual or complex syntactic structure.
Performance is also high (and close to overall performance) on sentences with marked argument structures (Argument Types and Arg(ument) Alt(ernation)). While these models are still worse than human (overall) performance on these sentences, this result indicates that argument structure is relatively easy to learn.
Comparing different kinds of embedded content, we observe higher performance on sentences with embedded clauses (major feature=Comp Clause) and embedded VPs (major feature=to-VP) than on sentences with embedded interrogatives (minor features=Emb-Q, Rel Clause). An exception to this trend is the minor feature No C-izer, which labels complement clauses without a complementizer (e.g. I think you're crazy). Low performance on these sentences compared to most other features in Comp Clause might indicate that complementizers are an important syntactic cue for these models.
As the major feature Question shows, the difficulty of sentences with question-like syntax applies beyond just embedded questions. Excluding polar questions, sentences with question-like syntax almost always involve extraction of a wh-word, creating a long-distance dependency between the wh-word and its extraction site, which may be difficult for models to recognize.
The most challenging features are all related to Violations. Low performance on Infl/Agr Violations, which marks morphological violations (He washed yourself, This is happy), is especially striking because a relatively high proportion (29%) of these sentences are Simple. A likely reason these models are deficient in encoding morphological features is that they are word-level models and do not have direct access to sub-word information like inflectional endings, which suggests that these features are difficult to learn effectively from lexical distributions alone.
Finally, unusual performance on some features is due to small samples and comes with a high standard deviation, suggesting these results are unreliable. This includes CP Subj, Frag/Paren, imperative, NPI/FCI, and Comparative.
Comparing within-feature performance of the three encoders to their overall performance, we find they have differing strengths and weaknesses. BERT stands out over other models in Deep Embed, which includes challenging sentences with doubly-embedded clauses, as well as in several features involving extraction (i.e. long-distance dependencies) such as VP+Extract and Info-Struc. The transformer models show evidence of learning long-distance dependencies better than the CoLA baseline. They outperform the CoLA baseline by an especially wide margin on Bind:Refl, which involves establishing a dependency between a reflexive and its antecedent (Bo tries to love himself). They also have a large advantage in dislocation, in which expressions are separated from their dependents (Bo practiced on the train an important presentation). The advantage of BERT and GPT may be due in part to their use of the transformer architecture. Unlike the BiLSTM used by the CoLA baseline, the transformer uses a self-attention mechanism that associates all pairs of words regardless of distance.
In some cases models showed surprisingly good or bad performance, revealing possible idiosyncrasies of the sentence embeddings they output. For instance, the CoLA baseline performs on par with the others on the major feature Adjunct, and especially on the minor feature Particle (Bo looked the word up).
Furthermore, all models struggle equally with sentences in Violations, indicating that the advantages of the transformer models over the CoLA baseline do not extend to the detection of morphological violations (Infl/Agr Violation) or single-word anomalies (Extra/Missing Expr).
Length Analysis
For comparison, we analyze the effect of sentence length on acceptability classifier performance. The results are shown in Figure FIGREF39 . The results for the CoLA baseline are inconsistent, but do drop off as sentence length increases. For BERT and GPT, performance decreases very steadily with length. Exceptions are extremely short sentences (length 1-3), which may be challenging due to insufficient information; and extremely long sentences, where we see a small (but somewhat unreliable) boost in BERT's performance. BERT and GPT are generally quite close in performance, except on the longest sentences, where BERT's performance is considerably better.
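A rough sketch of such a length analysis is given below, assuming whitespace tokenization and evenly spaced length bins; both are simplifications rather than the paper's exact setup.

from collections import defaultdict

import numpy as np
from sklearn.metrics import matthews_corrcoef

def mcc_by_length(sentences, gold, pred, bin_width=2):
    gold, pred = np.asarray(gold), np.asarray(pred)
    bins = defaultdict(list)
    for i, sentence in enumerate(sentences):
        bins[len(sentence.split()) // bin_width].append(i)
    results = {}
    for b in sorted(bins):
        idx = bins[b]
        if len(set(gold[idx])) > 1:  # MCC is undefined for single-class bins
            results[(b * bin_width, (b + 1) * bin_width - 1)] = matthews_corrcoef(gold[idx], pred[idx])
    return results

# Toy example with random sentences, labels, and predictions.
rng = np.random.default_rng(2)
sentences = [" ".join(["word"] * n) for n in rng.integers(2, 30, size=200)]
gold, pred = rng.integers(0, 2, size=200), rng.integers(0, 2, size=200)
for span, mcc in mcc_by_length(sentences, gold, pred, bin_width=5).items():
    print(span, round(mcc, 3))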
Conclusion
Using a new grammatically annotated analysis set, we identify several syntactic phenomena that are predictive of good or bad performance of current state of the art sentence encoders on CoLA. We also use these results to develop hypotheses about why BERT is successful, and why transformer models outperform sequence models.
Our findings can guide future work on sentence embeddings. A current weakness of all sentence encoders we investigate, including BERT, is the identification of morphological violations. Future engineering work should investigate whether switching to a character-level model can mitigate this problem. Additionally, transformer models appear to have an advantage over sequence models with long-distance dependencies, but still struggle with these constructions relative to more local phenomena. It stands to reason that this performance gap might be widened by training larger or deeper transformer models, or training on longer or more complex sentences. This analysis set can be used by engineers interested in evaluating the syntactic knowledge of their encoders.
Finally, these findings suggest possible controlled experiments that could confirm whether there is a causal relation between the presence of the syntactic features we single out as interesting and model performance. Our results are purely correlational, and do not mark whether a particular construction is crucial for the acceptability of the sentence. Future experiments following ettinger2018assessing and kann2019verb can semi-automatically generate datasets manipulating, for example, length of long-distance dependencies, inflectional violations, or the presence of interrogatives, while controlling for factors like sentence length and word choice, in order to determine the extent to which these features impact the quality of sentence embeddings.
Acknowledgments
We would like to thank Jason Phang and Thibault Févry for sharing GPT and BERT model predictions on CoLA, and Alex Wang for feedback.
Simple
These are sentences with transitive or intransitive verbs appearing with their default syntax and argument structure. All arguments are noun phrases (DPs), and there are no modifiers or adjuncts on DPs or the VP.
Included: John owns the book. (37) Park Square has a festive air. (131) *Herself likes Mary's mother. (456)
Excluded: Bill has eaten cake. I gave Joe a book.
Pred (Predicates)
These are sentences including the verb be used predicatively. Also, sentences where the object of the verb is itself a predicate, which applies to the subject. Not included are auxiliary uses of be or other predicate phrases that are not linked to a subject by a verb.
Included: John is eager. (27) He turned into a frog. (150) To please John is easy. (315)
Excluded: There is a bench to sit on. (309) John broke the geode open. The cake was eaten.
These sentences involve predication of a non-subject argument by another non-subject argument, without the presence of a copula. Some of these cases may be analyzed as small clauses. BIBREF35
Included: John called the president a fool. (234) John considers himself proud of Mary. (464) They want them arrested. (856) the election of John president surprised me. (1001)
Modifiers that act as predicates of an argument. Resultatives express a resulting state of that argument, and depictives describe that argument during the matrix event. See BIBREF24 .
Included: Resultative: The table was wiped by John clean. (625) The horse kicked me black and blue. (898) Depictive: John left singing. (971) In which car was the man seen? (398)
Excluded: He turned into a frog. (150)
Adjunct
Particles are lone prepositions associated with verbs. When they appear with transitive verbs they may immediately follow the verb or the object. Verb-particle pairs may have a non-compositional (idiomatic) meaning. See [pp. 69-70]carnie2013syntax and [pp. 16-17]kim2008syntax.
Included: The argument was summed by the coach up. (615) Some sentences go on and on and on. (785) *He let the cats which were whining out. (71)
Adjuncts modifying verb phrases. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. See BIBREF33 .
Included: PP-adjuncts (e.g. locative, temporal, instrumental, beneficiary): Nobody who hates to eat anything should work in a delicatessen. (121) Felicia kicked the ball off the bench. (127) Adverbs: Mary beautifully plays the violin. (40) John often meets Mary. (65) Purpose VPs: We need another run to win. (769)
Excluded: PP arguments: Sue gave to Bill a book. (42) Everything you like is on the table. (736) S-adjuncts: John lost the race, unfortunately.
These are adjuncts modifying noun phrases. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. Single-word prenominal adjectives are excluded, as are relative clauses (this has another category). . Included ṖP-adjuncts Ṭom's dog with one eye attacked Frank's with three legs. (676) They were going to meet sometime on Sunday, but the faculty didn't know when. (565) . Phrasal adjectives Ȧs a statesman, scarcely could he do anything worth mentioning. (292) . Verbal modifiers Ṫhe horse raced past the barn fell. (900)
. Excluded Ṗrenominal Adjectives İt was the policeman met that several young students in the park last night. (227) . Relative Clauses NP arguments
These are adjuncts of VPs and NPs that specify a time or modify tense or aspect or frequency of an event. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. . Included Ṡhort adverbials (never, today, now, always) Ẉhich hat did Mike quip that she never wore? (95) . PPs Ḟiona might be here by 5 o'clock. (426) . When İ inquired when could we leave. (520)
These are adjuncts of VPs and NPs that specify a location of an event or a part of an event, or of an individual. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. . Included Ṡhort adverbials PPs Ṫhe bed was slept in. (298) *Anson demonized up the Khyber (479) Some people consider dogs in my neighborhood dangerous. (802) Mary saw the boy walking toward the railroad station. (73) . Where İ found the place where we can relax. (307)
. Excluded Ŀocative arguments Ṣam gave the ball out of the basket. (129) Jessica loaded boxes on the wagon. (164) I went to Rome.
These are adjuncts of VPs and NPs not described by some other category (with the exception of (6-7)), i.e. not temporal, locative, or relative clauses. Adjuncts are (usually) optional, and they do not change the category of the expression they modify.
. Included Ḃeneficiary Ị know which book José didn't read for class, and which book Lilly did it for him. (58) . Instrument Ŀee saw the student with a telescope. (770) . Comitative J̇oan ate dinner with someone but I don't know who. (544) . VP adjuncts Ẇhich article did Terry file papers without reading? (431) . Purpose Ẇe need another run to win. (769)
Argument Types
Oblique arguments of verbs are individual-denoting arguments (DPs or PPs) which act as the third argument of verb, i.e. not a subject or (direct) object. They may or may not be marked by a preposition. Obliques are only found in VPs that have three or more individual arguments. Arguments are selected for by the verb, and they are (generally) not optional, though in some cases they may be omitted where they are understood or implicitly existentially quantified over. See [p.40]kim2008syntax.
Included: Prepositional: Sue gave to Bill a book. (42) Mary has always preferred lemons to limes. (70) *Janet broke Bill on the finger. (141) Benefactives: Martha carved the baby a toy out of wood. (139) Double object: Susan told her a story. (875) Locative arguments: Ann may spend her vacation in Italy. (289) High-arity passives: Mary was given by John the book. (626)
Excluded: Non-DP arguments: We want John to win (28) Third arguments where not all three arguments are DPs: We want John to win (28)
Prepositional Phrase arguments of VPs are individual-denoting arguments of a verb which are marked by a proposition. They may or may not be obliques. Arguments are selected for by the verb, and they are (generally) not optional, though in some cases they may be omitted where they are understood or implicitly existentially quantified over.
. Included Ḋative Ṣue gave to Bill a book. (42) . Conative (at) C̣arla slid at the book. (179) . Idiosyncratic prepositional verbs İ wonder who to place my trust in. (711) She voted for herself. (743) . Locative J̇ohn was found in the office. (283) . PP predicates Ėverything you like is on the table. (736)
. Excluded ṖP adjuncts Particles Arguments of deverbal expressions ṭhe putter of books left. (892) . By-phrase Ṫed was bitten by the spider. (613)
Prepositional Phrase arguments of NPs or APs are individual-denoting arguments of a noun or adjective which are marked by a proposition. Arguments are selected for by the head, and they are (generally) not optional, though in some cases they may be omitted where they are understood or implicitly existentially quantified over.
. Included Ṙelational adjectives Ṁany people were fond of Pat. (936) *I was already aware of fact. (824) . Relational nouns Ẇe admired the pictures of us in the album. (759) They found the book on the atom. (780) . Arguments of deverbal nouns ṭhe putter of books left. (892)
Prepositional arguments introduced with by. Usually, this is the (semantic) subject of a passive verb, but in rare cases it may be the subject of a nominalized verb. Arguments are usually selected for by the head, and they are generally not optional. In this case, the argument introduced with by is semantically selected for by the verb, but it is syntactically optional. See [p.190]adger2003core and []collins2005smuggling.
. Included Ṗassives Ṫed was bitten by the spider. (613) . Subjects of deverbal nouns ṫhe attempt by John to leave surprised me. (1003)
Expletives, or “dummy” arguments, are semantically inert arguments. The most common expletives in English are it and there, although not all occurrences of these items are expletives. Arguments are usually selected for by the head, and they are generally not optional. In this case, the expletive occupies a syntactic argument slot, but it is not semantically selected by the verb, and there is often a syntactic variation without the expletive. See [p.170-172]adger2003core and [p.82-83]kim2008syntax.
. Included Ṫhere—inserted, existential Ṭhere loved Sandy. (939) There is a nurse available. (466) . It—cleft, inserted İt was a brand new car that he bought. (347) It bothers me that John coughs. (314) It is nice to go abroad. (47) . Environmental it K̇erry remarked it was late. (821) Poor Bill, it had started to rain and he had no umbrella. (116) You've really lived it up. (160)
. Excluded J̇ohn counted on Bill to get there on time. (996) I bought it to read. (1026)
Arg Altern (Argument Alternations)
These are verbs with 3 or more arguments of any kind. Arity refers to the number of arguments that a head (or function) selects for. Arguments are usually selected for by the head, and they are generally not optional. They may be DPs, PPs, CPs, VPs, APs or other categories.
Included: Ditransitive: [Sue] gave [to Bill] [a book]. (42) [Martha] carved [the baby] [a toy] out of wood. (139) VP arguments: [We] believed [John] [to be a fountain in the park]. (274) [We] made [them] [be rude]. (260) Particles: [He] let [the cats which were whining] [out]. (71) Passives with by-phrase: [A good friend] is remained [to me] [by him]. (237) Expletives: [We] expect [there] [to will rain]. (282) [There] is [a seat] [available]. (934) [It] bothers [me] [that he is here]. (1009) Small clause: [John] considers [Bill] [silly]. (1039)
Excluded: Results, depictives: [John] broke [the geode] [open].
These are VPs where a canonical argument of the verb is missing. This can be difficult to determine, but in many cases the missing argument is understood with existential quantification or generically, or contextually salient. See [p.106-109]sportiche2013introduction.
. Included Ṁiddle voice/causative inchoative Ṭhe problem perceives easily. (66) . Passive Ṫhe car was driven. (296) . Null complement anaphora J̇ean persuaded Robert. (380) Nobody told Susan. (883) . Dropped argument Ḳim put in the box. (253) The guests dined. (835) I wrote to Bill. (1030) . Transitive adjective J̇ohn is eager. (27) We pulled free. (144) . Transitive noun İ sensed his eagerness. (155) . Expletive insertion Ịt loved Sandy. (949)
. Excluded Ṫed was bitten by the spider. (613)
These are VPs in which a non-canonical argument of the verb has been added. These cases are clearer to identify where the additional argument is a DP. In general, PPs which mark locations, times, beneficiaries, or purposes should be analyzed as adjuncts, while PPs marking causes can be considered arguments. See []pylkkanen2008introducing.
. Included Ėxtra argument Ḷinda winked her lip. (202) Sharon fainted from hunger. (204) I shaved myself. (526) . Causative Ị squeaked the door. (207) . Expletive insertion Ṫhere is a monster in Loch Ness. (928) It annoys people that dogs bark. (943) . Benefactive Ṁartha carved the baby a toy out of wood. (139)
The passive voice is marked by the demotion of the subject (either complete omission or to a by-phrase) and the verb appearing as a past participle. In the stereotypical construction there is an auxiliary be verb, though this may be absent. See [p.175-190]kim2008syntax, collins2005smuggling, and [p.311-333]sag2003syntactic.
. Included V̇erbs Ṫhe earth was believed to be round. (157) . Psuedopassive Ṫhe bed was slept in. (298) . Past participle adjuncts Ṫhe horse raced past the barn fell. (900)
Imperative
The imperative mood is marked by the absence of the a subject and the bare form of the verb, and expresses a command, request, or other directive speech act.
Included: Wash you! (224) Somebody just left - guess who. (528)
Binding
These are cases in which a reflexive (non-possessive) pronoun, usually bound by an antecedent. See [p.163-186]sportiche2013introduction and [p.203-226]sag2003syntactic.
Included: Ourselves like ourselves. (742) Which pictures of himself does John like? (386)
These are cases in which a non-reflexive pronoun appears along with its antecedent. This includes donkey anaphora, quantificational binding, and bound possessives, among other bound pronouns. See [p.163-186]sportiche2013introduction and [p.203-226]sag2003syntactic.
. Included Ḃound possessor Ṫhe children admire their mother. (382) . Quantificational binding Ėverybody gets on well with a certain relative, but often only his therapist knows which one. (562) . Bound pronoun Ẉe gave us to the cause. (747)
Question
These are sentences in which the matrix clause is interrogative (either a wh- or polar question). See [pp.282-213]adger2003core, [pp.193-222]kim2008syntax, and [p.315-350]carnie2013syntax.
Included: Wh-question: Who always drinks milk? (684) Polar question: Did Athena help us? (486)
These are embedded interrogative clauses appearing as arguments of verbs, nouns, and adjectives. Not including relative clauses and free relatives. See [p.297]adger2003core.
. Included U̇nder VP İ forgot how good beer tastes. (235) *What did you ask who saw? (508) . Under NP Ṫhat is the reason why he resigned. (313) . Under AP Ṫhey claimed they had settled on something, but it wasn't clear what they had settled on. (529) . Free relative Ẇhat the water did to the bottle was fill it. (33)
. Excluded Relative clauses, free relatives
These are phrasal Wh-phrases, in which the wh-word moves along with other expressions, including prepositions (pied-piping) or nouns in the case of determiner wh-words such as how many and which.
. Included Ṗied-piping Ṭhe ship sank, but I don't know with what. (541) . Other phrasal wh-phrases İ know which book Mag read, and which book Bob read my report that you hadn't. (61) How sane is Peter? (88)
Relative clauses are noun modifiers appearing with a relativizer (either that or a wh-word) and an associated gap. See [p.223-244]kim2008syntax.
. Included Ṫhough he may hate those that criticize Carter, it doesn't matter. (332) *The book what inspired them was very long. (686) Everything you like is on the table. (736)
. Excluded Ṭhe more you would want, the less you would eat. (6)
This is wh-movement out of an extraction island, or near-island. Islands include, for example, complex NPs, adjuncts, embedded questions, coordination. A near-island is an extraction that closely resembles an island violation, such as extraction out of an embedded clause, or across-the-board extraction. See [pp.323-333]adger2003core and [pp.332-334]carnie2013syntax.
. Included Ėmbedded question *What did you ask who Medea gave? (493) . Adjunct Ẉhat did you leave before they did? (598) . Parasitic gaps Ẇhich topic did you choose without getting his approval? (311) . Complex NP Ẇho did you get an accurate description of? (483)
Comp Clause (Complement Clauses)
These are complement clauses acting as the (syntactic) subject of verbs. See [pp.90-91]kim2008syntax.
. Included Ṫhat dogs bark annoys people. (942) The socks are ready for for you to put on to be planned. (112)
. Excluded Ėxpletive insertion İt bothers me that John coughs. (314)
These are complement clauses acting as (non-subject) arguments of verbs. See [pp.84-90]kim2008syntax.
. Included İ can't believe Fred won't, either. (50) I saw that gas can explode. (222) It bothers me that John coughs. (314) Clefts İt was a brand new car that he bought. (347)
These are complement clauses acting as an argument of a noun or adjective. See [pp.91-94]kim2008syntax.
. Included U̇nder NP Ḋo you believe the claim that somebody was looking for something? (99) . Under AP Ṭhe children are fond that they have ice cream. (842)
These are complement clauses with a non-finite matrix verb. Often, the complementizer is for, or there is no complementizer. See [pp.252-253,256-260]adger2003core.
. Included Ḟor complementizer İ would prefer for John to leave. (990) . No Complementizer Ṁary intended John to go abroad. (48) . Ungrammatical Ḣeidi thinks that Andy to eat salmon flavored candy bars. (363) . V-ing Ȯnly Churchill remembered Churchill giving the Blood, Sweat and Tears speech. (469)
These are complement clauses with no overt complementizer.
. Included Ċomplement clause İ'm sure we even got these tickets! (325) He announced he would marry the woman he loved most, but none of his relatives could figure out who. (572) . Relative clause Ṫhe Peter we all like was at the party (484)
These are sentences with three or nested verbs, where VP is not an aux or modal, i.e. with the following syntax: [S ...[ VP ...[ VP ...[ VP ...] ...] ...] ...]
. Included Ėmbedded VPs Ṁax seemed to be trying to force Ted to leave the room, and Walt, Ira. (657) . Embedded clauses İ threw away a book that Sandy thought we had read. (713)
Aux (Auxiliaries)
Any occurrence of negation in a sentence, including sentential negation, negative quantifiers, and negative adverbs.
. Included Ṡentential İ can't remember the name of somebody who had misgivings. (123) . Quantifier Ṅo writer, and no playwright, meets in Vienna. (124) . Adverb Ṫhey realised that never had Sir Thomas been so offended. (409)
Modal verbs (may, might, can, could, will, would, shall, should, must). See [pp.152-155]kim2008syntax.
. Included J̇ohn can kick the ball. (280) As a statesman, scarcely could he do anything worth mentioning. (292)
. Excluded Ṗseudo-modals Ṡandy was trying to work out which students would be able to solve a certain problem. (600)
Auxiliary verbs (e.g. be, have, do). See [pp.149-174]kim2008syntax.
. Included Ṫhey love to play golf, but I do not. (290) The car was driven. (296) he had spent five thousand dollars. (301)
. Excluded Ṗseudo-auxiliaries Ṣally asked if somebody was going to fail math class, but I can't remember who. (589) The cat got bitten. (926)
These are predicates acting as near-auxiliary (e.g. get-passive) or near-modals (e.g. willing)
. Included Ṅear-auxiliaries Ṃary came to be introduced by the bartender and I also came to be. (55) *Sally asked if somebody was going to fail math class, but I can't remember who. (589) The cat got bitten. (926) . Near-modals Ċlinton is anxious to find out which budget dilemmas Panetta would be willing to tackle in a certain way, but he won't say in which. (593) Sandy was trying to work out which students would be able to solve a certain problem. (600)
to-VP (Infinitival VPs)
These are VPs with control verbs, where one argument is a non-finite to-VP without a covert subject co-indexed with an argument of the matrix verb. See [pp.252,266-291]adger2003core, [pp.203-222]sportiche2013introduction, and [pp.125-148]kim2008syntax.
. Included İntransitive subject control Ịt tries to leave the country. (275) . Transitive subject control J̇ohn promised Bill to leave. (977) . Transitive object control İ want her to dance. (379) John considers Bill to be silly. (1040)
. Excluded V̇P args of NP/AP Ṫhis violin is difficult to play sonatas on. (114) . Purpose Ṫhere is a bench to sit on. (309) . Subject VPs Ṫo please John is easy. (315) . Argument present participles Ṁedea denied poisoning the phoenix. (490) . Raising Ȧnson believed himself to be handsome. (499)
These are VPs with raising predicates, where one argument is a non-finite to-VP without a covert subject co-indexed with an argument of the matrix verb. Unlike control verbs, the coindexed argument is not a semantic argument of the raising predicate. See [pp.260-266]adger2003core, [pp.203-222]sportiche2013introduction, and [pp.125-148]kim2008syntax.
. Included Ṡubject raising U̇nder the bed seems to be a fun place to hide. (277) . Object raising Ȧnson believed himself to be handsome. (499) . Raising adjective J̇ohn is likely to leave. (370)
These are embedded infinitival VPs containing a (non-subject) gap that is filled by an argument in the upper clause. Examples are purpose-VPs and tough-movement. See [pp.246-252]kim2008syntax.
. Included Ṫough-movement Ḍrowning cats, which is against the law, are hard to rescue. (79) . Infinitival relatives F̣ed knows which politician her to vote for. (302) . Purpose ṫhe one with a red cover takes a very long time to read. (352) . Other non-finite VPs with extraction Ȧs a statesman, scarcely could he do anything worth mentioning. (292)
These are non-finite VP arguments of nouns and adjectives.
. Included Ṙaising adjectives J̇ohn is likely to leave. (370) . Control adjectives Ṫhe administration has issued a statement that it is willing to meet a student group, but I'm not sure which one. (604) . Control nouns Ȧs a teacher, you have to deal simultaneously with the administration's pressure on you to succeed, and the children's to be a nice guy. (673) . Purpose VPs ṫhere is nothing to do. (983)
These are miscellaneous non-finite VPs.
. Included İ saw that gas can explode. (222) Gerunds/Present participles Ṣtudents studying English reads Conrad's Heart of Darkness while at university. (262) Knowing the country well, he took a short cut. (411) John became deadly afraid of flying. (440) . Subject VPs Ṫo please John is easy. (315) . Nominalized VPs Ẉhat Mary did Bill was give a book. (473)
. Excluded ṫo-VPs acting as complements or modifiers of verbs, nouns, or adjectives
N, Adj (Nouns and Adjectives)
These are nouns and adjectives derived from verbs.
. Included Ḋeverbal nouns ṭhe election of John president surprised me. (1001) . “Light” verbs Ṫhe birds give the worm a tug. (815) . Gerunds İf only Superman would stop flying planes! (773) . Event-wh Ẇhat the water did to the bottle was fill it. (33) . Deverbal adjectives Ḣis or her least known work. (95)
Relational nouns are NPs with an obligatory (or existentially closed) argument. A particular relation holds between the members of the extension of NP and the argument. The argument must be a DP possessor or a PP. See [pp.82-83]kim2008syntax.
. Included Ṅouns with of-arguments J̇ohn has a fear of dogs. (353) . Nouns with other PP-arguments Ḣenri wants to buy which books about cooking? (442) . Measure nouns İ bought three quarts of wine and two of Clorox. (667) . Possessed relational nouns J̣ohn's mother likes himself. (484)
. Excluded Ṅouns with PP modifiers Ṡome people consider dogs in my neighborhood dangerous. (802)
Transitive (non-relational) nouns take a VP or CP argument. See [pp.82-83]kim2008syntax.
. Included V̇P argument ṫhe attempt by John to leave surprised me. (1003) . CP argument Ẉhich report that John was incompetent did he submit? (69) . QP argument Ṫhat is the reason why he resigned. (313)
These are complex NPs, including coordinated nouns and nouns with modifiers (excluding prenominal adjectives).
. Included Ṁodified NPs Ṭhe madrigals which Henry plays the lute and sings sound lousy. (84) John bought a book on the table. (233) . NPs with coordination Ṭhe soundly and furry cat slept. (871) The love of my life and mother of my children would never do such a thing. (806)
Noun-noun compounds are NPs consisting of two constituent nouns.
. Included İt was the peasant girl who got it. (320) A felon was elected to the city council. (938)
These are adjectives that take an obligatory (or existentially closed) argument. A particular relation holds between the members of the extension of the modified NP and the argument. The argument must be a DP or PP. See [pp.80-82]kim2008syntax.
. Included Ȯf-arguments Ṫhe chickens seem fond of the farmer. (254) . Other PP arguments Ṫhis week will be a difficult one for us. (241) John made Bill mad at himself. (1035)
A transitive (non-relational) adjective. I.e. an adjectives that takes a VP or CP argument. See [pp.80-82]kim2008syntax.
. Included V̇P argument J̇ohn is likely to leave. (370) . CP argument J̇ohn is aware of it that Bill is here. (1013) . QP argument Ṫhe administration has issued a statement that it is willing to meet a student group, but I'm not sure which one. (604)
S-Syntax (Sentence-Level Syntax)
These are expressions with non-canonical word order. See, for example, [p.76]sportiche2013introduction.
Included: Particle shift: Mickey looked up it. (24) Preposed modifiers: Out of the box jumped a little white rabbit. (215) *Because she's so pleasant, as for Mary I really like her. (331) Quantifier float: The men will all leave. (43) Preposed argument: With no job would John be happy. (333) Relative clause extraposition: Which book's, author did you meet who you liked? (731) Misplaced phrases: Mary was given by John the book. (626)
This includes topicalization and focus constructions. See [pp.258-269]kim2008syntax and [pp.68-75]sportiche2013introduction.
. Included Ṫopicalization Ṁost elections are quickly forgotten, but the election of 2000, everyone will remember for a long time. (807) . Clefts İt was a brand new car that he bought. (347) . Pseudo-clefts Ẇhat John promised is to be gentle. (441)
. Excluded Ṫhere-insertion Passive
These are parentheticals or fragmentary expressions. Included: Parenthetical: Mary asked me if, in St. Louis, John could rent a house cheap. (704) Fragments: The soup cooks, thickens. (448) Tag question: George has spent a lot of money, hasn't he? (291)
Coordinations and disjunctions are expressions joined with and, but, or, etc. See [pp.61-68]sportiche2013introduction.
. Included ḊP coordination Ḋave, Dan, Erin, Jaime, and Alina left. (341) . Right Node Raising K̇im gave a dollar to Bobbie and a dime to Jean. (435) . Clausal coordination Ṡhe talked to Harry, but I don't know who else. (575) . Or, nor Ṇo writer, nor any playwright, meets in Vienna. (125) . Pseudo-coordination İ want to try and buy some whiskey. (432) . Juxtaposed clauses Ŀights go out at ten. There will be no talking afterwards. (779)
This includes subordinate clauses, especially with subordinating conjunctions, and conditionals.
. Included Ċonditional İf I can, I will work on it. (56) . Subordinate clause Ẉhat did you leave before they did? (598) *Because Steve's of a spider's eye had been stolen, I borrowed Fred's diagram of a snake's fang. (677) . Correlative Ạs you eat the most, you want the least. (5)
This includes VP or NP ellipsis, or anaphora standing for VPs or NPs (not DPs). See [pp.55-61]sportiche2013introduction.
. Included V̇P Ellipsis İf I can, I will work on it. (56) Mary likes to tour art galleries, but Bill hates to. (287) . VP Anaphor İ saw Bill while you did so Mary. (472) . NP Ellipsis Ṫom's dog with one eye attacked Fred's. (679) . NP anaphor ṫhe one with a red cover takes a very long time to read. (352) . Sluicing Ṁost columnists claim that a senior White House official has been briefing them, and the newspaper today reveals which one. (557) . Gapping Ḃill ate the peaches, but Harry the grapes. (646)
These are adjuncts modifying sentences, sentence-level adverbs, subordinate clauses.
. Included Ṡentence-level adverbs Ṡuddenly, there arrived two inspectors from the INS. (447) . Subordinate clauses Ṫhe storm arrived while we ate lunch. (852)
Determiner
These are quantificational DPs, i.e. the determiner is a quantifier.
. Included Q̇uantifiers Ẹvery student, and he wears socks, is a swinger. (118) We need another run to win. (769) . Partitive Ṇeither of students failed. (265)
These are quantifiers that take PP arguments, and measure nouns. See [pp.109-118]kim2008syntax.
. Included Q̇uantifiers with PP arguments Ṇeither of students failed. (265) . Numerals Ȯne of Korea's most famous poets wrote these lines. (294) . Measure nouns İ bought three quarts of wine and two of Clorox. (667)
These are negative polarity items (any, ever, etc.) and free choice items (any). See kadmon1993any.
. Included ṄPI Ėverybody around here who ever buys anything on credit talks in his sleep. (122) I didn't have a red cent. (350) . FCI Ȧny owl hunts mice. (387)
These are comparative constructions. See BIBREF22 .
. Included Ċorrelative Ṫhe angrier Mary got, the more she looked at pictures. (9) They may grow as high as bamboo. (337) I know you like the back of my hand. (775)
Violations
These are sentences that include a semantic violation, including type mismatches, violations of selectional restrictions, polarity violations, definiteness violations.
Included: Violation of selectional restrictions: many information was provided. (218) *It tries to leave the country. (275) Aspectual violations: John is tall on several occasions. (540) Definiteness violations: It is the problem that he is here. (1018) Polarity violations: Any man didn't eat dinner. (388)
These are sentences that include a violation in inflectional morphology, including tense-aspect marking, or agreement.
Included: Case: Us love they. (46) Agreement: Students studying English reads Conrad's Heart of Darkness while at university. (262) Gender: Sally kissed himself. (339) Tense/Aspect: Kim alienated cats and beating his dog. (429)
These are sentences with a violation that can be identified with the presence or absence of a single word.
Included: Missing word: John put under the bathtub. (247) *I noticed the. (788) Extra word: Everyone hopes everyone to sleep. (467) *He can will go (510)
The sentences were annotated manually by one of the authors, who is a PhD student with extensive training in formal linguistics. The features were developed in a trial stage, in which the annotator performed a similar annotation with different annotation schema for several hundred sentences from CoLA not belonging to the development set.
Feature Descriptions
Here we briefly summarize the feature set in order of the major features. Many of these constructions are well-studied in syntax, and further background can be found in textbooks such as adger2003core and sportiche2013introduction.
This major feature contains only one minor feature, simple, including sentences with a syntactically simplex subject and predicate.
These three features correspond to predicative phrases, including copular constructions, small clauses (I saw Bo jump), and resultatives/depictives (Bo wiped the table clean).
These six features mark various kinds of optional modifiers. This includes modifiers of NPs (The boy with blue eyes gasped) or VPs (The cat meowed all morning), and temporal (Bo swam yesterday) or locative (Bo jumped on the bed).
These five features identify syntactically selected arguments, differentiating, for example, obliques (I gave a book to Bo), PP arguments of NPs and VPs (Bo voted for Jones), and expletives (It seems that Bo left).
These four features mark VPs with unusual argument structures, including added arguments (I baked Bo a cake) or dropped arguments (Bo knows), and the passive (I was applauded).
This contains only one feature for imperative clauses (Stop it!).
These are two minor features, one for bound reflexives (Bo loves himself), and one for other bound pronouns (Bo thinks he won).
These five features apply to sentences with question-like properties. They mark whether the interrogative is an embedded clause (I know who you are), a matrix clause (Who are you?), or a relative clause (Bo saw the guy who left); whether it contains an island out of which extraction is unacceptable (*What was a picture of hanging on the wall?); or whether there is pied-piping or a multi-word wh-expressions (With whom did you eat?).
These six features apply to various complement clauses (CPs), including subject CPs (That Bo won is odd); CP arguments of VPs or NPs/APs (The fact that Bo won); CPs missing a complementizer (I think Bo's crazy); or non-finite CPs (This is ready for you to eat).
These four minor features mark the presence of auxiliary or modal verbs (I can win), negation, or “pseudo-auxiliaries” (I have to win).
These five features mark various infinitival embedded VPs, including control VPs (Bo wants to win); raising VPs (Bo seemed to fly); VP arguments of NPs or APs (Bo is eager to eat); and VPs with extraction (e.g. This is easy to read ts ).
These seven features mark complex NPs and APs, including ones with PP arguments (Bo is fond of Mo), or CP/VP arguments; noun-noun compounds (Bo ate mud pie); modified NPs, and NPs derived from verbs (Baking is fun).
These seven features mark various unrelated syntactic constructions, including dislocated phrases (The boy left who was here earlier); movement related to focus or information structure (This I've gotta see this); coordination, subordinate clauses, and ellipsis (I can't); or sentence-level adjuncts (Apparently, it's raining).
These four features mark various determiners, including quantifiers, partitives (two of the boys), negative polarity items (I *do/don't have any pie), and comparative constructions.
These three features apply only to unacceptable sentences, and only ones which are ungrammatical due to a semantic or morphological violation, or the presence or absence of a single salient word.
Correlations
We wish to emphasize that these features are overlapping and in many cases are correlated, thus not all results from using this analysis set will be independent. We analyzed the pairwise Matthews Correlation Coefficient BIBREF17 of the 63 minor features (giving 1953 pairs), and of the 15 major features (giving 105 pairs). MCC is a special case of Pearson's INLINEFORM0 for Boolean variables. These results are summarized in Table TABREF25 . Regarding the minor features, 60 pairs had a correlation of 0.2 or greater, 17 had a correlation of 0.4 or greater, and 6 had a correlation of 0.6 or greater. None had an anti-correlation of greater magnitude than -0.17. Turning to the major features, 6 pairs had a correlation of 0.2 or greater, and 2 had an anti-correlation of greater magnitude than -0.2.
We can see at least three reasons for these observed correlations. First, some correlations can be attributed to overlapping feature definitions. For instance, expletive arguments (e.g. There are birds singing) are, by definition, non-canonical arguments, and thus are a subset of add arg. However, some added arguments, such as benefactives (Bo baked Mo a cake), are not expletives. Second, some correlations can be attributed to grammatical properties of the relevant constructions. For instance, question and aux are correlated because main-clause questions in English require subject-aux inversion and in many cases the insertion of auxiliary do (Do lions meow?). Third, some correlations may be a consequence of the sources sampled in CoLA and the phenomena they focus on. For instance, the unusually high correlation of Emb-Q and ellipsis/anaphor can be attributed to BIBREF18 , which is an article about the sluicing construction involving ellipsis of an embedded interrogative (e.g. I saw someone, but I don't know who).
Finally, two strongest anti-correlations between major features are between simple and the two features related to argument structure, argument types and arg altern. This follows from the definition of simple, which excludes any sentence containing a large number or unusual configuration of arguments.
Models Evaluated
We train MLP acceptability classifiers for CoLA on top of three sentence encoders: (1) the CoLA baseline encoder with ELMo-style embeddings, (2) OpenAI GPT, and (3) BERT. We use publicly available sentence encoders with pretrained weights.
Overall CoLA Results
The overall performance of the three sentence encoders is shown in Table TABREF33 . Performance on CoLA is measured using MCC BIBREF14 . We present the best single restart for each encoder, the mean over restarts for an encoder, and the result of ensembling the restarts for a given encoder, i.e. taking the majority classification for a given sentence, or the majority label of acceptable if tied. For BERT results, we exclude 5 out of the 20 restarts because they were degenerate (MCC=0).
Across the board, BERT outperforms GPT, which outperforms the CoLA baseline. However, BERT and GPT are much closer in performance than they are to CoLA baseline. While ensemble performance exceeded the average for BERT and GPT, it did not outperform the best single model.
Analysis Set Results
The results for the major features and minor features are shown in Figures FIGREF26 and FIGREF35 , respectively. For each feature, we measure the MCC of the sentences including that feature. We plot the mean of these results across the different restarts for each model, and error bars mark the mean INLINEFORM0 standard deviation. For the Violations features, MCC is technically undefined because these features only contain unacceptable sentences. We report MCC in these cases by including for each feature a single acceptable example that is correctly classified by all models.
Comparison across features reveals that the presence of certain features has a large effect on performance, and we comment on some overall patterns below. Within a given feature, the effect of model type is overwhelmingly stable, and resembles the overall difference in performance. However, we observe several interactions, i.e. specific features where the relative performance of models does not track their overall relative performance.
Among the major features (Figure FIGREF26 ), performance is universally highest on the simple sentences, and is higher than each model's overall performance. Though these sentences are simple, we notice that the proportion of ungrammatical ones is on par with the entire dataset. Otherwise we find that a model's performance on sentences of a given feature is on par with or lower than its overall performance, reflecting the fact that features mark the presence of unusual or complex syntactic structure.
Performance is also high (and close to overall performance) on sentences with marked argument structures (Argument Types and Arg(ument) Alt(ernation)). While these models are still worse than human (overall) performance on these sentences, this result indicates that argument structure is relatively easy to learn.
Comparing different kinds of embedded content, we observe higher performance on sentences with embedded clauses (major feature=Comp Clause) embedded VPs (major feature=to-VP) than on sentences with embedded interrogatives (minor features=Emb-Q, Rel Clause). An exception to this trend is the minor feature No C-izer, which labels complement clauses without a complementizer (e.g. I think that you're crazy). Low performance on these sentences compared to most other features in Comp Clause might indicate that complementizers are an important syntactic cue for these models.
As the major feature Question shows, the difficulty of sentences with question-like syntax applies beyond just embedded questions. Excluding polar questions, sentences with question-like syntax almost always involve extraction of a wh-word, creating a long-distance dependency between the wh-word and its extraction site, which may be difficult for models to recognize.
The most challenging features are all related to Violations. Low performance on Infl/Agr Violations, which marks morphological violations (He washed yourself, This is happy), is especially striking because a relatively high proportion (29%) of these sentences are Simple. These models are likely to be deficient in encoding morphological features is that they are word level models, and do not have direct access sub-word information like inflectional endings, which indicates that these features are difficult to learn effectively purely from lexical distributions.
Finally, unusual performance on some features is due to small samples, and have a high standard deviation, suggesting the result is unreliable. This includes CP Subj, Frag/Paren, imperative, NPI/FCI, and Comparative.
Comparing within-feature performance of the three encoders to their overall performance, we find they have differing strengths and weaknesses. BERT stands out over other models in Deep Embed, which includes challenging sentences with doubly-embedded, as well as in several features involving extraction (i.e. long-distance dependencies) such as VP+Extract and Info-Struc. The transformer models show evidence of learning long-distance dependencies better than the CoLA baseline. They outperform the CoLA baseline by an especially wide margin on Bind:Refl, which all involves establishing a dependency between a reflexive and its antecedent (Bo tries to love himself). They also have a large advantage in dislocation, in which expressions are separated from their dependents (Bo practiced on the train an important presentation). The advantage of BERT and GPT may be due in part to their use of the transformer architecture. Unlike the BiLSTM used by the CoLA baseline, the transformer uses a self-attention mechanism that associates all pairs of words regardless of distance.
In some cases models showed surprisingly good or bad performance, revealing possible idiosyncrasies of the sentence embeddings they output. For instance, the CoLA baseline performs on par with the others on the major feature adjunct, especially considering the minor feature Particle (Bo looked the word up).
Furthermore, all models struggle equally with sentences in Violation, indicating that the advantages of the transformer models over the CoLA baseline does not extend to the detection of morphological violations (Infl/Agr Violation) or single word anomalies (Extra/Missing Expr).
Length Analysis
For comparison, we analyze the effect of sentence length on acceptability classifier performance. The results are shown in Figure FIGREF39 . The results for the CoLA baseline are inconsistent, but do drop off as sentence length increases. For BERT and GPT, performance decreases very steadily with length. Exceptions are extremely short sentences (length 1-3), which may be challenging due to insufficient information; and extremely long sentences, where we see a small (but somewhat unreliable) boost in BERT's performance. BERT and GPT are generally quite close in performance, except on the longest sentences, where BERT's performance is considerably better.
Conclusion
Using a new grammatically annotated analysis set, we identify several syntactic phenomena that are predictive of good or bad performance of current state of the art sentence encoders on CoLA. We also use these results to develop hypotheses about why BERT is successful, and why transformer models outperform sequence models.
Our findings can guide future work on sentence embeddings. A current weakness of all sentence encoders we investigate, including BERT, is the identification of morphological violations. Future engineering work should investigate whether switching to a character-level model can mitigate this problem. Additionally, transformer models appear to have an advantage over sequence models with long-distance dependencies, but still struggle with these constructions relative to more local phenomena. It stands to reason that this performance gap might be widened by training larger or deeper transformer models, or training on longer or more complex sentences. This analysis set can be used by engineers interested in evaluating the syntactic knowledge of their encoders.
Finally, these findings suggest possible controlled experiments that could confirm whether there is a causal relation between the presence of the syntactic features we single out as interesting and model performance. Our results are purely correlational, and do not mark whether a particular construction is crucial for the acceptability of the sentence. Future experiments following ettinger2018assessing and kann2019verb can semi-automatically generate datasets manipulating, for example, length of long-distance dependencies, inflectional violations, or the presence of interrogatives, while controlling for factors like sentence length and word choice, in order to determine the extent to which these features impact the quality of sentence embeddings.
Acknowledgments
We would like to thank Jason Phang and Thibault Févry for sharing GPT and BERT model predictions on CoLA, and Alex Wang for feedback.
Simple
These are sentences with transitive or intransitive verbs appearing with their default syntax and argument structure. All arguments are noun phrases (DPs), and there are no modifiers or adjuncts on DPs or the VP.
. Included J̇ohn owns the book. (37) Park Square has a festive air. (131) *Herself likes Mary's mother. (456)
. Excluded Ḃill has eaten cake. I gave Joe a book.
Pred (Predicates)
These are sentences including the verb be used predicatively. Also, sentences where the object of the verb is itself a predicate, which applies to the subject. Not included are auxiliary uses of be or other predicate phrases that are not linked to a subject by a verb.
. Included J̇ohn is eager. (27) He turned into a frog. (150) To please John is easy. (315)
. Excluded Ṫhere is a bench to sit on. (309) John broke the geode open. The cake was eaten.
These sentences involve predication of a non-subject argument by another non-subject argument, without the presence of a copula. Some of these cases may be analyzed as small clauses. BIBREF35
. Included J̇ohn called the president a fool. (234) John considers himself proud of Mary. (464) They want them arrested. (856) the election of John president surprised me. (1001)
Modifiers that act as predicates of an argument. Resultatives express a resulting state of that argument, and depictives describe that argument during the matrix event. See BIBREF24 .
. Included Ṙesultative Ṭhe table was wiped by John clean. (625) The horse kicked me black and blue. (898) . Depictive J̇ohn left singing. (971) In which car was the man seen? (398)
. Excluded Ḣe turned into a frog. (150)
Adjunct
Particles are lone prepositions associated with verbs. When they appear with transitive verbs they may immediately follow the verb or the object. Verb-particle pairs may have a non-compositional (idiomatic) meaning. See [pp. 69-70]carnie2013syntax and [pp. 16-17]kim2008syntax.
. Included Ṭhe argument was summed by the coach up. (615) Some sentences go on and on and on. (785) *He let the cats which were whining out. (71)
Adjuncts modifying verb phrases. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. See BIBREF33 .
. Included ṖP-adjuncts, e.g. locative, temporal, instrumental, beneficiary Ṅobody who hates to eat anything should work in a delicatessen. (121) Felicia kicked the ball off the bench. (127) . Adverbs Ṁary beautifully plays the violin. (40) John often meets Mary. (65) . Purpose VPs Ẇe need another run to win. (769) .
0.5em. Excluded ṖP arguments Ṣue gave to Bill a book. (42) Everything you like is on the table. (736) . S-adjuncts J̇ohn lost the race, unfortunately.
These are adjuncts modifying noun phrases. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. Single-word prenominal adjectives are excluded, as are relative clauses (this has another category). . Included ṖP-adjuncts Ṭom's dog with one eye attacked Frank's with three legs. (676) They were going to meet sometime on Sunday, but the faculty didn't know when. (565) . Phrasal adjectives Ȧs a statesman, scarcely could he do anything worth mentioning. (292) . Verbal modifiers Ṫhe horse raced past the barn fell. (900)
. Excluded Ṗrenominal Adjectives İt was the policeman met that several young students in the park last night. (227) . Relative Clauses NP arguments
These are adjuncts of VPs and NPs that specify a time or modify tense or aspect or frequency of an event. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. . Included Ṡhort adverbials (never, today, now, always) Ẉhich hat did Mike quip that she never wore? (95) . PPs Ḟiona might be here by 5 o'clock. (426) . When İ inquired when could we leave. (520)
These are adjuncts of VPs and NPs that specify a location of an event or a part of an event, or of an individual. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. . Included Ṡhort adverbials PPs Ṫhe bed was slept in. (298) *Anson demonized up the Khyber (479) Some people consider dogs in my neighborhood dangerous. (802) Mary saw the boy walking toward the railroad station. (73) . Where İ found the place where we can relax. (307)
. Excluded Ŀocative arguments Ṣam gave the ball out of the basket. (129) Jessica loaded boxes on the wagon. (164) I went to Rome.
These are adjuncts of VPs and NPs not described by some other category (with the exception of (6-7)), i.e. not temporal, locative, or relative clauses. Adjuncts are (usually) optional, and they do not change the category of the expression they modify.
. Included Ḃeneficiary Ị know which book José didn't read for class, and which book Lilly did it for him. (58) . Instrument Ŀee saw the student with a telescope. (770) . Comitative J̇oan ate dinner with someone but I don't know who. (544) . VP adjuncts Ẇhich article did Terry file papers without reading? (431) . Purpose Ẇe need another run to win. (769)
Argument Types
Oblique arguments of verbs are individual-denoting arguments (DPs or PPs) which act as the third argument of a verb, i.e. not a subject or (direct) object. They may or may not be marked by a preposition. Obliques are only found in VPs that have three or more individual arguments. Arguments are selected for by the verb, and they are (generally) not optional, though in some cases they may be omitted where they are understood or implicitly existentially quantified over. See [p.40]kim2008syntax.
. Included Ṗrepositional Ṣue gave to Bill a book. (42) Mary has always preferred lemons to limes. (70) *Janet broke Bill on the finger. (141) . Benefactives Ṁartha carved the baby a toy out of wood. (139) . Double object Ṡusan told her a story. (875) Locative arguments Ȧnn may spend her vacation in Italy. (289) . High-arity Passives Ṃary was given by John the book. (626)
. Excluded Ṅon-DP arguments Ẇe want John to win (28) . 3rd arguments where not all three arguments are DPs Ẇe want John to win (28)
Prepositional Phrase arguments of VPs are individual-denoting arguments of a verb which are marked by a proposition. They may or may not be obliques. Arguments are selected for by the verb, and they are (generally) not optional, though in some cases they may be omitted where they are understood or implicitly existentially quantified over.
. Included Ḋative Ṣue gave to Bill a book. (42) . Conative (at) C̣arla slid at the book. (179) . Idiosyncratic prepositional verbs İ wonder who to place my trust in. (711) She voted for herself. (743) . Locative J̇ohn was found in the office. (283) . PP predicates Ėverything you like is on the table. (736)
. Excluded ṖP adjuncts Particles Arguments of deverbal expressions ṭhe putter of books left. (892) . By-phrase Ṫed was bitten by the spider. (613)
Prepositional Phrase arguments of NPs or APs are individual-denoting arguments of a noun or adjective which are marked by a proposition. Arguments are selected for by the head, and they are (generally) not optional, though in some cases they may be omitted where they are understood or implicitly existentially quantified over.
. Included Ṙelational adjectives Ṁany people were fond of Pat. (936) *I was already aware of fact. (824) . Relational nouns Ẇe admired the pictures of us in the album. (759) They found the book on the atom. (780) . Arguments of deverbal nouns ṭhe putter of books left. (892)
Prepositional arguments introduced with by. Usually, this is the (semantic) subject of a passive verb, but in rare cases it may be the subject of a nominalized verb. Arguments are usually selected for by the head, and they are generally not optional. In this case, the argument introduced with by is semantically selected for by the verb, but it is syntactically optional. See [p.190]adger2003core and []collins2005smuggling.
. Included Ṗassives Ṫed was bitten by the spider. (613) . Subjects of deverbal nouns ṫhe attempt by John to leave surprised me. (1003)
Expletives, or “dummy” arguments, are semantically inert arguments. The most common expletives in English are it and there, although not all occurrences of these items are expletives. Arguments are usually selected for by the head, and they are generally not optional. In this case, the expletive occupies a syntactic argument slot, but it is not semantically selected by the verb, and there is often a syntactic variation without the expletive. See [p.170-172]adger2003core and [p.82-83]kim2008syntax.
. Included Ṫhere—inserted, existential Ṭhere loved Sandy. (939) There is a nurse available. (466) . It—cleft, inserted İt was a brand new car that he bought. (347) It bothers me that John coughs. (314) It is nice to go abroad. (47) . Environmental it K̇erry remarked it was late. (821) Poor Bill, it had started to rain and he had no umbrella. (116) You've really lived it up. (160)
. Excluded J̇ohn counted on Bill to get there on time. (996) I bought it to read. (1026)
Arg Altern (Argument Alternations)
These are verbs with 3 or more arguments of any kind. Arity refers to the number of arguments that a head (or function) selects for. Arguments are usually selected for by the head, and they are generally not optional. They may be DPs, PPs, CPs, VPs, APs or other categories.
. Included Ḋitransitive [̣Sue] gave [to Bill] [a book]. (42) [Martha] carved [the baby] [a toy] out of wood. (139) . VP arguments [̣We] believed [John] [to be a fountain in the park]. (274) [We] made [them] [be rude]. (260) . Particles He] let [the cats which were whining] [out]. (71) . Passives with by-phrase [̣A good friend] is remained [to me] [by him]. (237) . Expletives [̣We] expect [there] [to will rain]. (282) [There] is [a seat] [available]. (934) [It] bothers [me] [that he is here]. (1009) . Small clause John] considers [Bill] [silly]. (1039)
. Excluded Ṙesults, depictives John] broke [the geode] [open].
These are VPs where a canonical argument of the verb is missing. This can be difficult to determine, but in many cases the missing argument is understood with existential quantification or generically, or contextually salient. See [p.106-109]sportiche2013introduction.
. Included Ṁiddle voice/causative inchoative Ṭhe problem perceives easily. (66) . Passive Ṫhe car was driven. (296) . Null complement anaphora J̇ean persuaded Robert. (380) Nobody told Susan. (883) . Dropped argument Ḳim put in the box. (253) The guests dined. (835) I wrote to Bill. (1030) . Transitive adjective J̇ohn is eager. (27) We pulled free. (144) . Transitive noun İ sensed his eagerness. (155) . Expletive insertion Ịt loved Sandy. (949)
. Excluded Ṫed was bitten by the spider. (613)
These are VPs in which a non-canonical argument of the verb has been added. These cases are clearer to identify where the additional argument is a DP. In general, PPs which mark locations, times, beneficiaries, or purposes should be analyzed as adjuncts, while PPs marking causes can be considered arguments. See []pylkkanen2008introducing.
. Included Ėxtra argument Ḷinda winked her lip. (202) Sharon fainted from hunger. (204) I shaved myself. (526) . Causative Ị squeaked the door. (207) . Expletive insertion Ṫhere is a monster in Loch Ness. (928) It annoys people that dogs bark. (943) . Benefactive Ṁartha carved the baby a toy out of wood. (139)
The passive voice is marked by the demotion of the subject (either complete omission or to a by-phrase) and the verb appearing as a past participle. In the stereotypical construction there is an auxiliary be verb, though this may be absent. See [p.175-190]kim2008syntax, collins2005smuggling, and [p.311-333]sag2003syntactic.
. Included V̇erbs Ṫhe earth was believed to be round. (157) . Pseudopassive Ṫhe bed was slept in. (298) . Past participle adjuncts Ṫhe horse raced past the barn fell. (900)
Imperative
The imperative mood is marked by the absence of a subject and the bare form of the verb, and expresses a command, request, or other directive speech act.
. Included Ẉash you! (224) Somebody just left - guess who. (528)
Binding
These are cases in which a reflexive (non-possessive) pronoun, usually bound by an antecedent. See [p.163-186]sportiche2013introduction and [p.203-226]sag2003syntactic.
. Included Ọurselves like ourselves. (742) Which pictures of himself does John like? (386)
These are cases in which a non-reflexive pronoun appears along with its antecedent. This includes donkey anaphora, quantificational binding, and bound possessives, among other bound pronouns. See [p.163-186]sportiche2013introduction and [p.203-226]sag2003syntactic.
. Included Ḃound possessor Ṫhe children admire their mother. (382) . Quantificational binding Ėverybody gets on well with a certain relative, but often only his therapist knows which one. (562) . Bound pronoun Ẉe gave us to the cause. (747)
Question
These are sentences in which the matrix clause is interrogative (either a wh- or polar question). See [pp.282-213]adger2003core, [pp.193-222]kim2008syntax, and [p.315-350]carnie2013syntax.
. Included Ẇh-question Ẇho always drinks milk? (684) . Polar question Ḋid Athena help us? (486)
These are embedded interrogative clauses appearing as arguments of verbs, nouns, and adjectives. Not including relative clauses and free relatives. See [p.297]adger2003core.
. Included U̇nder VP İ forgot how good beer tastes. (235) *What did you ask who saw? (508) . Under NP Ṫhat is the reason why he resigned. (313) . Under AP Ṫhey claimed they had settled on something, but it wasn't clear what they had settled on. (529) . Free relative Ẇhat the water did to the bottle was fill it. (33)
. Excluded Relative clauses, free relatives
These are phrasal Wh-phrases, in which the wh-word moves along with other expressions, including prepositions (pied-piping) or nouns in the case of determiner wh-words such as how many and which.
. Included Ṗied-piping Ṭhe ship sank, but I don't know with what. (541) . Other phrasal wh-phrases İ know which book Mag read, and which book Bob read my report that you hadn't. (61) How sane is Peter? (88)
Relative clauses are noun modifiers appearing with a relativizer (either that or a wh-word) and an associated gap. See [p.223-244]kim2008syntax.
. Included Ṫhough he may hate those that criticize Carter, it doesn't matter. (332) *The book what inspired them was very long. (686) Everything you like is on the table. (736)
. Excluded Ṭhe more you would want, the less you would eat. (6)
This is wh-movement out of an extraction island, or near-island. Islands include, for example, complex NPs, adjuncts, embedded questions, coordination. A near-island is an extraction that closely resembles an island violation, such as extraction out of an embedded clause, or across-the-board extraction. See [pp.323-333]adger2003core and [pp.332-334]carnie2013syntax.
. Included Ėmbedded question *What did you ask who Medea gave? (493) . Adjunct Ẉhat did you leave before they did? (598) . Parasitic gaps Ẇhich topic did you choose without getting his approval? (311) . Complex NP Ẇho did you get an accurate description of? (483)
Comp Clause (Complement Clauses)
These are complement clauses acting as the (syntactic) subject of verbs. See [pp.90-91]kim2008syntax.
. Included Ṫhat dogs bark annoys people. (942) The socks are ready for for you to put on to be planned. (112)
. Excluded Ėxpletive insertion İt bothers me that John coughs. (314)
These are complement clauses acting as (non-subject) arguments of verbs. See [pp.84-90]kim2008syntax.
. Included İ can't believe Fred won't, either. (50) I saw that gas can explode. (222) It bothers me that John coughs. (314) Clefts İt was a brand new car that he bought. (347)
These are complement clauses acting as an argument of a noun or adjective. See [pp.91-94]kim2008syntax.
. Included U̇nder NP Ḋo you believe the claim that somebody was looking for something? (99) . Under AP Ṭhe children are fond that they have ice cream. (842)
These are complement clauses with a non-finite matrix verb. Often, the complementizer is for, or there is no complementizer. See [pp.252-253,256-260]adger2003core.
. Included Ḟor complementizer İ would prefer for John to leave. (990) . No Complementizer Ṁary intended John to go abroad. (48) . Ungrammatical Ḣeidi thinks that Andy to eat salmon flavored candy bars. (363) . V-ing Ȯnly Churchill remembered Churchill giving the Blood, Sweat and Tears speech. (469)
These are complement clauses with no overt complementizer.
. Included Ċomplement clause İ'm sure we even got these tickets! (325) He announced he would marry the woman he loved most, but none of his relatives could figure out who. (572) . Relative clause Ṫhe Peter we all like was at the party (484)
These are sentences with three or more nested verbs, where the VP is not an aux or modal, i.e. with the following syntax: [S ...[ VP ...[ VP ...[ VP ...] ...] ...] ...]
. Included Ėmbedded VPs Ṁax seemed to be trying to force Ted to leave the room, and Walt, Ira. (657) . Embedded clauses İ threw away a book that Sandy thought we had read. (713)
Aux (Auxiliaries)
Any occurrence of negation in a sentence, including sentential negation, negative quantifiers, and negative adverbs.
. Included Ṡentential İ can't remember the name of somebody who had misgivings. (123) . Quantifier Ṅo writer, and no playwright, meets in Vienna. (124) . Adverb Ṫhey realised that never had Sir Thomas been so offended. (409)
Modal verbs (may, might, can, could, will, would, shall, should, must). See [pp.152-155]kim2008syntax.
. Included J̇ohn can kick the ball. (280) As a statesman, scarcely could he do anything worth mentioning. (292)
. Excluded Ṗseudo-modals Ṡandy was trying to work out which students would be able to solve a certain problem. (600)
Auxiliary verbs (e.g. be, have, do). See [pp.149-174]kim2008syntax.
. Included Ṫhey love to play golf, but I do not. (290) The car was driven. (296) he had spent five thousand dollars. (301)
. Excluded Ṗseudo-auxiliaries Ṣally asked if somebody was going to fail math class, but I can't remember who. (589) The cat got bitten. (926)
These are predicates acting as near-auxiliary (e.g. get-passive) or near-modals (e.g. willing)
. Included Ṅear-auxiliaries Ṃary came to be introduced by the bartender and I also came to be. (55) *Sally asked if somebody was going to fail math class, but I can't remember who. (589) The cat got bitten. (926) . Near-modals Ċlinton is anxious to find out which budget dilemmas Panetta would be willing to tackle in a certain way, but he won't say in which. (593) Sandy was trying to work out which students would be able to solve a certain problem. (600)
to-VP (Infinitival VPs)
These are VPs with control verbs, where one argument is a non-finite to-VP without a covert subject co-indexed with an argument of the matrix verb. See [pp.252,266-291]adger2003core, [pp.203-222]sportiche2013introduction, and [pp.125-148]kim2008syntax.
. Included İntransitive subject control Ịt tries to leave the country. (275) . Transitive subject control J̇ohn promised Bill to leave. (977) . Transitive object control İ want her to dance. (379) John considers Bill to be silly. (1040)
. Excluded V̇P args of NP/AP Ṫhis violin is difficult to play sonatas on. (114) . Purpose Ṫhere is a bench to sit on. (309) . Subject VPs Ṫo please John is easy. (315) . Argument present participles Ṁedea denied poisoning the phoenix. (490) . Raising Ȧnson believed himself to be handsome. (499)
These are VPs with raising predicates, where one argument is a non-finite to-VP without a covert subject co-indexed with an argument of the matrix verb. Unlike control verbs, the coindexed argument is not a semantic argument of the raising predicate. See [pp.260-266]adger2003core, [pp.203-222]sportiche2013introduction, and [pp.125-148]kim2008syntax.
. Included Ṡubject raising U̇nder the bed seems to be a fun place to hide. (277) . Object raising Ȧnson believed himself to be handsome. (499) . Raising adjective J̇ohn is likely to leave. (370)
These are embedded infinitival VPs containing a (non-subject) gap that is filled by an argument in the upper clause. Examples are purpose-VPs and tough-movement. See [pp.246-252]kim2008syntax.
. Included Ṫough-movement Ḍrowning cats, which is against the law, are hard to rescue. (79) . Infinitival relatives F̣ed knows which politician her to vote for. (302) . Purpose ṫhe one with a red cover takes a very long time to read. (352) . Other non-finite VPs with extraction Ȧs a statesman, scarcely could he do anything worth mentioning. (292)
These are non-finite VP arguments of nouns and adjectives.
. Included Ṙaising adjectives J̇ohn is likely to leave. (370) . Control adjectives Ṫhe administration has issued a statement that it is willing to meet a student group, but I'm not sure which one. (604) . Control nouns Ȧs a teacher, you have to deal simultaneously with the administration's pressure on you to succeed, and the children's to be a nice guy. (673) . Purpose VPs ṫhere is nothing to do. (983)
These are miscellaneous non-finite VPs.
. Included İ saw that gas can explode. (222) Gerunds/Present participles Ṣtudents studying English reads Conrad's Heart of Darkness while at university. (262) Knowing the country well, he took a short cut. (411) John became deadly afraid of flying. (440) . Subject VPs Ṫo please John is easy. (315) . Nominalized VPs Ẉhat Mary did Bill was give a book. (473)
. Excluded ṫo-VPs acting as complements or modifiers of verbs, nouns, or adjectives
N, Adj (Nouns and Adjectives)
These are nouns and adjectives derived from verbs.
. Included Ḋeverbal nouns ṭhe election of John president surprised me. (1001) . “Light” verbs Ṫhe birds give the worm a tug. (815) . Gerunds İf only Superman would stop flying planes! (773) . Event-wh Ẇhat the water did to the bottle was fill it. (33) . Deverbal adjectives Ḣis or her least known work. (95)
Relational nouns are NPs with an obligatory (or existentially closed) argument. A particular relation holds between the members of the extension of NP and the argument. The argument must be a DP possessor or a PP. See [pp.82-83]kim2008syntax.
. Included Ṅouns with of-arguments J̇ohn has a fear of dogs. (353) . Nouns with other PP-arguments Ḣenri wants to buy which books about cooking? (442) . Measure nouns İ bought three quarts of wine and two of Clorox. (667) . Possessed relational nouns J̣ohn's mother likes himself. (484)
. Excluded Ṅouns with PP modifiers Ṡome people consider dogs in my neighborhood dangerous. (802)
Transitive (non-relational) nouns take a VP or CP argument. See [pp.82-83]kim2008syntax.
. Included V̇P argument ṫhe attempt by John to leave surprised me. (1003) . CP argument Ẉhich report that John was incompetent did he submit? (69) . QP argument Ṫhat is the reason why he resigned. (313)
These are complex NPs, including coordinated nouns and nouns with modifiers (excluding prenominal adjectives).
. Included Ṁodified NPs Ṭhe madrigals which Henry plays the lute and sings sound lousy. (84) John bought a book on the table. (233) . NPs with coordination Ṭhe soundly and furry cat slept. (871) The love of my life and mother of my children would never do such a thing. (806)
Noun-noun compounds are NPs consisting of two constituent nouns.
. Included İt was the peasant girl who got it. (320) A felon was elected to the city council. (938)
These are adjectives that take an obligatory (or existentially closed) argument. A particular relation holds between the members of the extension of the modified NP and the argument. The argument must be a DP or PP. See [pp.80-82]kim2008syntax.
. Included Ȯf-arguments Ṫhe chickens seem fond of the farmer. (254) . Other PP arguments Ṫhis week will be a difficult one for us. (241) John made Bill mad at himself. (1035)
A transitive (non-relational) adjective. I.e. an adjectives that takes a VP or CP argument. See [pp.80-82]kim2008syntax.
. Included V̇P argument J̇ohn is likely to leave. (370) . CP argument J̇ohn is aware of it that Bill is here. (1013) . QP argument Ṫhe administration has issued a statement that it is willing to meet a student group, but I'm not sure which one. (604)
S-Syntax (Sentence-Level Syntax)
These are expressions with non-canonical word order. See, for example, [p.76]sportiche2013introduction.
. Includes Ṗarticle shift Ṃickey looked up it. (24) . Preposed modifiers Ȯut of the box jumped a little white rabbit. (215) *Because she's so pleasant, as for Mary I really like her. (331) . Quantifier float Ṫhe men will all leave. (43) . Preposed argument Ẇith no job would John be happy. (333) . Relative clause extraposition Ẇhich book's, author did you meet who you liked? (731) . Misplaced phrases Ṁary was given by John the book. (626)
This includes topicalization and focus constructions. See [pp.258-269]kim2008syntax and [pp.68-75]sportiche2013introduction.
. Included Ṫopicalization Ṁost elections are quickly forgotten, but the election of 2000, everyone will remember for a long time. (807) . Clefts İt was a brand new car that he bought. (347) . Pseudo-clefts Ẇhat John promised is to be gentle. (441)
. Excluded Ṫhere-insertion Passive
These are parentheticals or fragmentary expressions. . Included Ṗarenthetical Ṁary asked me if, in St. Louis, John could rent a house cheap. (704) . Fragments Ṫhe soup cooks, thickens. (448) . Tag question Ġeorge has spent a lot of money, hasn't he? (291)
Coordinations and disjunctions are expressions joined with and, but, or, etc. See [pp.61-68]sportiche2013introduction.
. Included ḊP coordination Ḋave, Dan, Erin, Jaime, and Alina left. (341) . Right Node Raising K̇im gave a dollar to Bobbie and a dime to Jean. (435) . Clausal coordination Ṡhe talked to Harry, but I don't know who else. (575) . Or, nor Ṇo writer, nor any playwright, meets in Vienna. (125) . Pseudo-coordination İ want to try and buy some whiskey. (432) . Juxtaposed clauses Ŀights go out at ten. There will be no talking afterwards. (779)
This includes subordinate clauses, especially with subordinating conjunctions, and conditionals.
. Included Ċonditional İf I can, I will work on it. (56) . Subordinate clause Ẉhat did you leave before they did? (598) *Because Steve's of a spider's eye had been stolen, I borrowed Fred's diagram of a snake's fang. (677) . Correlative Ạs you eat the most, you want the least. (5)
This includes VP or NP ellipsis, or anaphora standing for VPs or NPs (not DPs). See [pp.55-61]sportiche2013introduction.
. Included V̇P Ellipsis İf I can, I will work on it. (56) Mary likes to tour art galleries, but Bill hates to. (287) . VP Anaphor İ saw Bill while you did so Mary. (472) . NP Ellipsis Ṫom's dog with one eye attacked Fred's. (679) . NP anaphor ṫhe one with a red cover takes a very long time to read. (352) . Sluicing Ṁost columnists claim that a senior White House official has been briefing them, and the newspaper today reveals which one. (557) . Gapping Ḃill ate the peaches, but Harry the grapes. (646)
These are adjuncts modifying sentences, sentence-level adverbs, subordinate clauses.
. Included Ṡentence-level adverbs Ṡuddenly, there arrived two inspectors from the INS. (447) . Subordinate clauses Ṫhe storm arrived while we ate lunch. (852)
Determiner
These are quantificational DPs, i.e. the determiner is a quantifier.
. Included Q̇uantifiers Ẹvery student, and he wears socks, is a swinger. (118) We need another run to win. (769) . Partitive Ṇeither of students failed. (265)
These are quantifiers that take PP arguments, and measure nouns. See [pp.109-118]kim2008syntax.
. Included Q̇uantifiers with PP arguments Ṇeither of students failed. (265) . Numerals Ȯne of Korea's most famous poets wrote these lines. (294) . Measure nouns İ bought three quarts of wine and two of Clorox. (667)
These are negative polarity items (any, ever, etc.) and free choice items (any). See kadmon1993any.
. Included ṄPI Ėverybody around here who ever buys anything on credit talks in his sleep. (122) I didn't have a red cent. (350) . FCI Ȧny owl hunts mice. (387)
These are comparative constructions. See BIBREF22 .
. Included Ċorrelative Ṫhe angrier Mary got, the more she looked at pictures. (9) They may grow as high as bamboo. (337) I know you like the back of my hand. (775)
Violations
These are sentences that include a semantic violation, including type mismatches, violations of selectional restrictions, polarity violations, definiteness violations.
. Included V̇olation of selectional restrictions ṃany information was provided. (218) *It tries to leave the country. (275) . Aspectual violations J̣ohn is tall on several occasions. (540) . Definiteness violations Ịt is the problem that he is here. (1018) . Polarity violations Ȧny man didn't eat dinner. (388)
These are sentences that include a violation in inflectional morphology, including tense-aspect marking, or agreement.
. Included Ċase Ụs love they. (46) . Agreement Ṣtudents studying English reads Conrad's Heart of Darkness while at university. (262) . Gender Ṣally kissed himself. (339) . Tense/Aspect Ḳim alienated cats and beating his dog. (429)
These are sentences with a violation that can be identified with the presence or absence of a single word.
. Included Ṁissing word J̣ohn put under the bathtub. (247) *I noticed the. (788) . Extra word Ẹveryone hopes everyone to sleep. (467) *He can will go (510) | These models are likely to be deficient in encoding morphological features is that they are word level models, and do not have direct access sub-word information like inflectional endings, which indicates that these features are difficult to learn effectively purely from lexical distributions. |
675d7c48541b6368df135f71f9fc13a398f0c8c6 | 675d7c48541b6368df135f71f9fc13a398f0c8c6_0 | Q: Which models are best for learning long-distance movement?
Text: Introduction
The effectiveness and ubiquity of pretrained sentence embeddings for natural language understanding has grown dramatically in recent years. Recent sentence encoders like OpenAI's Generative Pretrained Transformer BIBREF3 and BERT BIBREF2 achieve the state of the art on the GLUE benchmark BIBREF4 . Among the GLUE tasks, these state-of-the-art systems make their greatest gains on the acceptability task with the Corpus of Linguistic Acceptability BIBREF0 . CoLA contains example sentences from linguistics publications labeled by experts for grammatical acceptability, and written to show subtle grammatical features. Because minimal syntactic differences can separate acceptable sentences from unacceptable ones (What did Bo write a book about? / *What was a book about written by Bo?), and acceptability classifiers are more reliable when trained on GPT and BERT than on recurrent models, it stands to reason that GPT and BERT have better implicit knowledge of syntactic features relevant to acceptability.
Our goal in this paper is to develop an evaluation dataset that can locate which syntactic features a model successfully learns by identifying the syntactic domains of CoLA in which it performs best. Using this evaluation set, we compare the syntactic knowledge of GPT and BERT in detail, and investigate the strengths of these models over the baseline BiLSTM model published by warstadt2018neural. The analysis set includes expert annotations labeling the entire CoLA development set for the presence of 63 fine-grained syntactic features.
We identify many specific syntactic features that make sentences harder to classify, and many that have little effect. For instance, sentences involving unusual or marked argument structures are no harder than the average sentence, while sentences with long distance dependencies are hard to learn. We also find features of sentences that accentuate or minimize the differences between models. Specifically, the transformer models seem to learn long-distance dependencies much better than the recurrent model, yet have no advantage on sentences with morphological violations.
Analysis Set
We introduce a grammatically annotated version of the entire CoLA development set to facilitate detailed error analysis of acceptability classifiers. These 1043 sentences are expert-labeled for the presence of 63 minor grammatical features organized into 15 major features. Each minor feature belongs to a single major feature. A sentence belongs to a major feature if it belongs to one or more of the relevant minor features. The Appendix includes descriptions of each feature along with examples and the criteria used for annotation.
The 63 minor features and 15 major features are illustrated in Table TABREF5 . Considering minor features, an average of 4.31 features is present per sentence (SD=2.59). The average feature is present in 71.3 sentences (SD=54.7). Turning to major features, the average sentence belongs to 3.22 major features (SD=1.66), and the average major feature is present in 224 sentences (SD=112). Every sentence is labeled with at least one feature.
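To make the annotation scheme concrete, the sketch below (Python) derives major-feature labels from minor-feature labels and recomputes the summary statistics above from a Boolean annotation matrix. The file name, column names, and the partial minor-to-major mapping are placeholders for illustration, not the released data format.

```python
# Minimal sketch, assuming a table with one row per CoLA dev sentence and one
# Boolean column per minor feature. Names below are hypothetical.
import pandas as pd

minor = pd.read_csv("cola_dev_features.csv", index_col="sentence_id").astype(bool)

# Each minor feature belongs to exactly one major feature (partial mapping shown).
minor_to_major = {"simple": "Simple", "particle": "Adjunct", "emb_q": "Question"}

# A sentence carries a major feature if it carries at least one of its minor features.
major = pd.DataFrame({
    m: minor[[c for c in minor.columns if minor_to_major.get(c) == m]].any(axis=1)
    for m in set(minor_to_major.values())
})

print("minor features per sentence:", minor.sum(axis=1).mean(), minor.sum(axis=1).std())
print("sentences per minor feature:", minor.sum(axis=0).mean(), minor.sum(axis=0).std())
print("major features per sentence:", major.sum(axis=1).mean(), major.sum(axis=1).std())
print("sentences per major feature:", major.sum(axis=0).mean(), major.sum(axis=0).std())
```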
Annotation
The sentences were annotated manually by one of the authors, who is a PhD student with extensive training in formal linguistics. The features were developed in a trial stage, in which the annotator performed a similar annotation with different annotation schema for several hundred sentences from CoLA not belonging to the development set.
Feature Descriptions
Here we briefly summarize the feature set in order of the major features. Many of these constructions are well-studied in syntax, and further background can be found in textbooks such as adger2003core and sportiche2013introduction.
This major feature contains only one minor feature, simple, including sentences with a syntactically simplex subject and predicate.
These three features correspond to predicative phrases, including copular constructions, small clauses (I saw Bo jump), and resultatives/depictives (Bo wiped the table clean).
These six features mark various kinds of optional modifiers. This includes modifiers of NPs (The boy with blue eyes gasped) or VPs (The cat meowed all morning), and temporal (Bo swam yesterday) or locative (Bo jumped on the bed).
These five features identify syntactically selected arguments, differentiating, for example, obliques (I gave a book to Bo), PP arguments of NPs and VPs (Bo voted for Jones), and expletives (It seems that Bo left).
These four features mark VPs with unusual argument structures, including added arguments (I baked Bo a cake) or dropped arguments (Bo knows), and the passive (I was applauded).
This contains only one feature for imperative clauses (Stop it!).
These are two minor features, one for bound reflexives (Bo loves himself), and one for other bound pronouns (Bo thinks he won).
These five features apply to sentences with question-like properties. They mark whether the interrogative is an embedded clause (I know who you are), a matrix clause (Who are you?), or a relative clause (Bo saw the guy who left); whether it contains an island out of which extraction is unacceptable (*What was a picture of hanging on the wall?); or whether there is pied-piping or a multi-word wh-expression (With whom did you eat?).
These six features apply to various complement clauses (CPs), including subject CPs (That Bo won is odd); CP arguments of VPs or NPs/APs (The fact that Bo won); CPs missing a complementizer (I think Bo's crazy); or non-finite CPs (This is ready for you to eat).
These four minor features mark the presence of auxiliary or modal verbs (I can win), negation, or “pseudo-auxiliaries” (I have to win).
These five features mark various infinitival embedded VPs, including control VPs (Bo wants to win); raising VPs (Bo seemed to fly); VP arguments of NPs or APs (Bo is eager to eat); and VPs with extraction (e.g. This is easy to read ts ).
These seven features mark complex NPs and APs, including ones with PP arguments (Bo is fond of Mo), or CP/VP arguments; noun-noun compounds (Bo ate mud pie); modified NPs, and NPs derived from verbs (Baking is fun).
These seven features mark various unrelated syntactic constructions, including dislocated phrases (The boy left who was here earlier); movement related to focus or information structure (This I've gotta see); coordination, subordinate clauses, and ellipsis (I can't); or sentence-level adjuncts (Apparently, it's raining).
These four features mark various determiners, including quantifiers, partitives (two of the boys), negative polarity items (I *do/don't have any pie), and comparative constructions.
These three features apply only to unacceptable sentences, and only ones which are ungrammatical due to a semantic or morphological violation, or the presence or absence of a single salient word.
Correlations
We wish to emphasize that these features are overlapping and in many cases are correlated, thus not all results from using this analysis set will be independent. We analyzed the pairwise Matthews Correlation Coefficient BIBREF17 of the 63 minor features (giving 1953 pairs), and of the 15 major features (giving 105 pairs). MCC is a special case of Pearson's INLINEFORM0 for Boolean variables. These results are summarized in Table TABREF25 . Regarding the minor features, 60 pairs had a correlation of 0.2 or greater, 17 had a correlation of 0.4 or greater, and 6 had a correlation of 0.6 or greater. None had an anti-correlation of greater magnitude than -0.17. Turning to the major features, 6 pairs had a correlation of 0.2 or greater, and 2 had an anti-correlation of greater magnitude than -0.2.
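The pairwise-correlation computation itself is straightforward; a minimal sketch, assuming Boolean per-sentence vectors for each feature and using scikit-learn's standard MCC implementation:

```python
from itertools import combinations
from sklearn.metrics import matthews_corrcoef

def pairwise_feature_mcc(features):
    """features: dict mapping feature name -> Boolean vector over the 1043 sentences."""
    scores = {}
    for a, b in combinations(sorted(features), 2):   # 63 minor features -> 1953 pairs
        scores[(a, b)] = matthews_corrcoef(features[a], features[b])
    return scores
```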
We can see at least three reasons for these observed correlations. First, some correlations can be attributed to overlapping feature definitions. For instance, expletive arguments (e.g. There are birds singing) are, by definition, non-canonical arguments, and thus are a subset of add arg. However, some added arguments, such as benefactives (Bo baked Mo a cake), are not expletives. Second, some correlations can be attributed to grammatical properties of the relevant constructions. For instance, question and aux are correlated because main-clause questions in English require subject-aux inversion and in many cases the insertion of auxiliary do (Do lions meow?). Third, some correlations may be a consequence of the sources sampled in CoLA and the phenomena they focus on. For instance, the unusually high correlation of Emb-Q and ellipsis/anaphor can be attributed to BIBREF18 , which is an article about the sluicing construction involving ellipsis of an embedded interrogative (e.g. I saw someone, but I don't know who).
Finally, two strongest anti-correlations between major features are between simple and the two features related to argument structure, argument types and arg altern. This follows from the definition of simple, which excludes any sentence containing a large number or unusual configuration of arguments.
Models Evaluated
We train MLP acceptability classifiers for CoLA on top of three sentence encoders: (1) the CoLA baseline encoder with ELMo-style embeddings, (2) OpenAI GPT, and (3) BERT. We use publicly available sentence encoders with pretrained weights.
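The exact training setup follows warstadt2018neural and is not repeated here; the sketch below only illustrates the general recipe of a lightweight MLP trained over a fixed-size sentence embedding produced by a pretrained encoder. The layer sizes, dropout, and the encode function are placeholder assumptions.

```python
import torch.nn as nn

class AcceptabilityMLP(nn.Module):
    """Binary acceptability classifier over a fixed-size sentence embedding."""
    def __init__(self, embedding_dim, hidden_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embedding_dim, hidden_dim),
            nn.Tanh(),
            nn.Dropout(0.1),
            nn.Linear(hidden_dim, 2),   # unacceptable vs. acceptable
        )

    def forward(self, sentence_embedding):
        return self.net(sentence_embedding)

# Usage sketch: `encode` stands in for any of the three encoders (CoLA baseline
# with ELMo-style embeddings, GPT, BERT), each producing one vector per sentence.
# logits = AcceptabilityMLP(embedding_dim=768)(encode(batch_of_sentences))
```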
Overall CoLA Results
The overall performance of the three sentence encoders is shown in Table TABREF33 . Performance on CoLA is measured using MCC BIBREF14 . We present the best single restart for each encoder, the mean over restarts for an encoder, and the result of ensembling the restarts for a given encoder, i.e. taking the majority classification for a given sentence, or the majority label of acceptable if tied. For BERT results, we exclude 5 out of the 20 restarts because they were degenerate (MCC=0).
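The ensembling rule (majority vote across restarts, ties resolved to the acceptable label) can be written as a short helper; this sketch assumes predictions are 0/1 arrays with 1 = acceptable.

```python
import numpy as np

def ensemble_predictions(restart_preds):
    """restart_preds: (n_restarts, n_sentences) array with entries in {0, 1},
    where 1 = acceptable. Majority vote; ties go to the acceptable label."""
    votes = np.asarray(restart_preds).mean(axis=0)
    return (votes >= 0.5).astype(int)
```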
Across the board, BERT outperforms GPT, which outperforms the CoLA baseline. However, BERT and GPT are much closer in performance than they are to CoLA baseline. While ensemble performance exceeded the average for BERT and GPT, it did not outperform the best single model.
Analysis Set Results
The results for the major features and minor features are shown in Figures FIGREF26 and FIGREF35 , respectively. For each feature, we measure the MCC of the sentences including that feature. We plot the mean of these results across the different restarts for each model, and error bars mark the mean ±1 standard deviation. For the Violations features, MCC is technically undefined because these features only contain unacceptable sentences. We report MCC in these cases by including for each feature a single acceptable example that is correctly classified by all models.
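A sketch of this per-feature evaluation, including the workaround for the Violations features; variable names are illustrative.

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

def feature_mcc(gold, pred, feature_mask, violations_feature=False):
    """MCC restricted to the sentences carrying a given feature.
    gold, pred: 0/1 arrays over all sentences; feature_mask: Boolean array."""
    g = np.asarray(gold)[feature_mask]
    p = np.asarray(pred)[feature_mask]
    if violations_feature:
        # These features contain only unacceptable sentences, so append one
        # correctly classified acceptable example to make MCC defined.
        g, p = np.append(g, 1), np.append(p, 1)
    return matthews_corrcoef(g, p)
```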
Comparison across features reveals that the presence of certain features has a large effect on performance, and we comment on some overall patterns below. Within a given feature, the effect of model type is overwhelmingly stable, and resembles the overall difference in performance. However, we observe several interactions, i.e. specific features where the relative performance of models does not track their overall relative performance.
Among the major features (Figure FIGREF26 ), performance is universally highest on the simple sentences, and is higher than each model's overall performance. Though these sentences are simple, we notice that the proportion of ungrammatical ones is on par with the entire dataset. Otherwise we find that a model's performance on sentences of a given feature is on par with or lower than its overall performance, reflecting the fact that features mark the presence of unusual or complex syntactic structure.
Performance is also high (and close to overall performance) on sentences with marked argument structures (Argument Types and Arg(ument) Alt(ernation)). While these models are still worse than human (overall) performance on these sentences, this result indicates that argument structure is relatively easy to learn.
Comparing different kinds of embedded content, we observe higher performance on sentences with embedded clauses (major feature=Comp Clause) and embedded VPs (major feature=to-VP) than on sentences with embedded interrogatives (minor features=Emb-Q, Rel Clause). An exception to this trend is the minor feature No C-izer, which labels complement clauses without a complementizer (e.g. I think you're crazy). Low performance on these sentences compared to most other features in Comp Clause might indicate that complementizers are an important syntactic cue for these models.
As the major feature Question shows, the difficulty of sentences with question-like syntax applies beyond just embedded questions. Excluding polar questions, sentences with question-like syntax almost always involve extraction of a wh-word, creating a long-distance dependency between the wh-word and its extraction site, which may be difficult for models to recognize.
The most challenging features are all related to Violations. Low performance on Infl/Agr Violations, which marks morphological violations (He washed yourself, This is happy), is especially striking because a relatively high proportion (29%) of these sentences are Simple. A likely reason these models are deficient in encoding morphological features is that they are word-level models and do not have direct access to sub-word information like inflectional endings, suggesting that these features are difficult to learn effectively purely from lexical distributions.
Finally, unusual performance on some features is due to small samples: these features have a high standard deviation, suggesting the results are unreliable. This includes CP Subj, Frag/Paren, imperative, NPI/FCI, and Comparative.
Comparing within-feature performance of the three encoders to their overall performance, we find they have differing strengths and weaknesses. BERT stands out over the other models on Deep Embed, which includes challenging sentences with doubly-embedded clauses or VPs, as well as on several features involving extraction (i.e. long-distance dependencies) such as VP+Extract and Info-Struc. The transformer models show evidence of learning long-distance dependencies better than the CoLA baseline. They outperform the CoLA baseline by an especially wide margin on Bind:Refl, which involves establishing a dependency between a reflexive and its antecedent (Bo tries to love himself). They also have a large advantage in Dislocation, in which expressions are separated from their dependents (Bo practiced on the train an important presentation). The advantage of BERT and GPT may be due in part to their use of the transformer architecture. Unlike the BiLSTM used by the CoLA baseline, the transformer uses a self-attention mechanism that associates all pairs of words regardless of distance.
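To make the architectural contrast concrete, the sketch below shows the core of dot-product self-attention: scores are computed for all pairs of positions at once, so representing a dependency between distant words is no harder than representing one between neighbours. This is a textbook simplification, not the actual GPT or BERT code; learned query/key/value projections and multiple heads are omitted.

```python
import numpy as np

def self_attention(X):
    """X: (sequence_length, d_model) word representations.
    Returns a mixture over all positions for each position; the pairwise scores
    do not depend on how far apart two words are, unlike a BiLSTM, which must
    carry information across every intervening step."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                       # (L, L) scores for all pairs
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over positions
    return weights @ X
```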
In some cases models showed surprisingly good or bad performance, revealing possible idiosyncrasies of the sentence embeddings they output. For instance, the CoLA baseline performs on par with the others on the major feature adjunct, especially considering the minor feature Particle (Bo looked the word up).
Furthermore, all models struggle equally with sentences in Violations, indicating that the advantages of the transformer models over the CoLA baseline do not extend to the detection of morphological violations (Infl/Agr Violation) or single-word anomalies (Extra/Missing Expr).
Length Analysis
For comparison, we analyze the effect of sentence length on acceptability classifier performance. The results are shown in Figure FIGREF39 . The results for the CoLA baseline are inconsistent, but do drop off as sentence length increases. For BERT and GPT, performance decreases very steadily with length. Exceptions are extremely short sentences (length 1-3), which may be challenging due to insufficient information; and extremely long sentences, where we see a small (but somewhat unreliable) boost in BERT's performance. BERT and GPT are generally quite close in performance, except on the longest sentences, where BERT's performance is considerably better.
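The length analysis can be reproduced by bucketing sentences by token count and computing MCC per bucket; the bucket boundaries and the whitespace tokenization below are placeholders.

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

def mcc_by_length(sentences, gold, pred, bins=(0, 3, 6, 9, 12, 15, 20, 100)):
    lengths = np.array([len(s.split()) for s in sentences])  # crude length proxy
    gold, pred = np.asarray(gold), np.asarray(pred)
    scores = {}
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (lengths > lo) & (lengths <= hi)
        if mask.any() and len(set(gold[mask])) > 1:           # skip degenerate bins
            scores[(lo + 1, hi)] = matthews_corrcoef(gold[mask], pred[mask])
    return scores
```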
Conclusion
Using a new grammatically annotated analysis set, we identify several syntactic phenomena that are predictive of good or bad performance of current state of the art sentence encoders on CoLA. We also use these results to develop hypotheses about why BERT is successful, and why transformer models outperform sequence models.
Our findings can guide future work on sentence embeddings. A current weakness of all sentence encoders we investigate, including BERT, is the identification of morphological violations. Future engineering work should investigate whether switching to a character-level model can mitigate this problem. Additionally, transformer models appear to have an advantage over sequence models with long-distance dependencies, but still struggle with these constructions relative to more local phenomena. It stands to reason that this performance gap might be widened by training larger or deeper transformer models, or training on longer or more complex sentences. This analysis set can be used by engineers interested in evaluating the syntactic knowledge of their encoders.
Finally, these findings suggest possible controlled experiments that could confirm whether there is a causal relation between the presence of the syntactic features we single out as interesting and model performance. Our results are purely correlational, and do not mark whether a particular construction is crucial for the acceptability of the sentence. Future experiments following ettinger2018assessing and kann2019verb can semi-automatically generate datasets manipulating, for example, length of long-distance dependencies, inflectional violations, or the presence of interrogatives, while controlling for factors like sentence length and word choice, in order to determine the extent to which these features impact the quality of sentence embeddings.
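As a purely hypothetical illustration of such controlled generation, the snippet below builds minimal pairs that vary only the depth of a filler-gap dependency while holding vocabulary fixed; the templates are invented for this example and are not taken from ettinger2018assessing or kann2019verb.

```python
# Hypothetical templates: lengthen the wh-dependency by stacking bridge clauses,
# and pair each acceptable sentence with a filled-gap (unacceptable) variant.
BRIDGES = ["that Jo said", "that Lee thought", "that Sam claimed"]

def wh_dependency_pairs(max_depth=3):
    pairs = []
    for depth in range(max_depth + 1):
        middle = " ".join(BRIDGES[:depth])
        good = f"What did you say {middle} Bo bought?".replace("  ", " ")
        bad = f"What did you say {middle} Bo bought the car?".replace("  ", " ")
        pairs.append({"depth": depth, "acceptable": good, "unacceptable": bad})
    return pairs
```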
Acknowledgments
We would like to thank Jason Phang and Thibault Févry for sharing GPT and BERT model predictions on CoLA, and Alex Wang for feedback.
Simple
These are sentences with transitive or intransitive verbs appearing with their default syntax and argument structure. All arguments are noun phrases (DPs), and there are no modifiers or adjuncts on DPs or the VP.
. Included J̇ohn owns the book. (37) Park Square has a festive air. (131) *Herself likes Mary's mother. (456)
. Excluded Ḃill has eaten cake. I gave Joe a book.
Pred (Predicates)
These are sentences including the verb be used predicatively. Also, sentences where the object of the verb is itself a predicate, which applies to the subject. Not included are auxiliary uses of be or other predicate phrases that are not linked to a subject by a verb.
. Included J̇ohn is eager. (27) He turned into a frog. (150) To please John is easy. (315)
. Excluded Ṫhere is a bench to sit on. (309) John broke the geode open. The cake was eaten.
These sentences involve predication of a non-subject argument by another non-subject argument, without the presence of a copula. Some of these cases may be analyzed as small clauses. BIBREF35
. Included J̇ohn called the president a fool. (234) John considers himself proud of Mary. (464) They want them arrested. (856) the election of John president surprised me. (1001)
Modifiers that act as predicates of an argument. Resultatives express a resulting state of that argument, and depictives describe that argument during the matrix event. See BIBREF24 .
. Included Ṙesultative Ṭhe table was wiped by John clean. (625) The horse kicked me black and blue. (898) . Depictive J̇ohn left singing. (971) In which car was the man seen? (398)
. Excluded Ḣe turned into a frog. (150)
Adjunct
Particles are lone prepositions associated with verbs. When they appear with transitive verbs they may immediately follow the verb or the object. Verb-particle pairs may have a non-compositional (idiomatic) meaning. See [pp. 69-70]carnie2013syntax and [pp. 16-17]kim2008syntax.
. Included Ṭhe argument was summed by the coach up. (615) Some sentences go on and on and on. (785) *He let the cats which were whining out. (71)
Adjuncts modifying verb phrases. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. See BIBREF33 .
. Included ṖP-adjuncts, e.g. locative, temporal, instrumental, beneficiary Ṅobody who hates to eat anything should work in a delicatessen. (121) Felicia kicked the ball off the bench. (127) . Adverbs Ṁary beautifully plays the violin. (40) John often meets Mary. (65) . Purpose VPs Ẇe need another run to win. (769) .
0.5em. Excluded ṖP arguments Ṣue gave to Bill a book. (42) Everything you like is on the table. (736) . S-adjuncts J̇ohn lost the race, unfortunately.
These are adjuncts modifying noun phrases. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. Single-word prenominal adjectives are excluded, as are relative clauses (this has another category). . Included ṖP-adjuncts Ṭom's dog with one eye attacked Frank's with three legs. (676) They were going to meet sometime on Sunday, but the faculty didn't know when. (565) . Phrasal adjectives Ȧs a statesman, scarcely could he do anything worth mentioning. (292) . Verbal modifiers Ṫhe horse raced past the barn fell. (900)
. Excluded Ṗrenominal Adjectives İt was the policeman met that several young students in the park last night. (227) . Relative Clauses NP arguments
These are adjuncts of VPs and NPs that specify a time or modify tense or aspect or frequency of an event. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. . Included Ṡhort adverbials (never, today, now, always) Ẉhich hat did Mike quip that she never wore? (95) . PPs Ḟiona might be here by 5 o'clock. (426) . When İ inquired when could we leave. (520)
These are adjuncts of VPs and NPs that specify a location of an event or a part of an event, or of an individual. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. . Included Ṡhort adverbials PPs Ṫhe bed was slept in. (298) *Anson demonized up the Khyber (479) Some people consider dogs in my neighborhood dangerous. (802) Mary saw the boy walking toward the railroad station. (73) . Where İ found the place where we can relax. (307)
. Excluded Ŀocative arguments Ṣam gave the ball out of the basket. (129) Jessica loaded boxes on the wagon. (164) I went to Rome.
These are adjuncts of VPs and NPs not described by some other category (with the exception of (6-7)), i.e. not temporal, locative, or relative clauses. Adjuncts are (usually) optional, and they do not change the category of the expression they modify.
. Included Ḃeneficiary Ị know which book José didn't read for class, and which book Lilly did it for him. (58) . Instrument Ŀee saw the student with a telescope. (770) . Comitative J̇oan ate dinner with someone but I don't know who. (544) . VP adjuncts Ẇhich article did Terry file papers without reading? (431) . Purpose Ẇe need another run to win. (769)
Argument Types
Oblique arguments of verbs are individual-denoting arguments (DPs or PPs) which act as the third argument of a verb, i.e. not a subject or (direct) object. They may or may not be marked by a preposition. Obliques are only found in VPs that have three or more individual arguments. Arguments are selected for by the verb, and they are (generally) not optional, though in some cases they may be omitted where they are understood or implicitly existentially quantified over. See [p.40]kim2008syntax.
. Included Ṗrepositional Ṣue gave to Bill a book. (42) Mary has always preferred lemons to limes. (70) *Janet broke Bill on the finger. (141) . Benefactives Ṁartha carved the baby a toy out of wood. (139) . Double object Ṡusan told her a story. (875) Locative arguments Ȧnn may spend her vacation in Italy. (289) . High-arity Passives Ṃary was given by John the book. (626)
. Excluded Ṅon-DP arguments Ẇe want John to win (28) . 3rd arguments where not all three arguments are DPs Ẇe want John to win (28)
Prepositional Phrase arguments of VPs are individual-denoting arguments of a verb which are marked by a proposition. They may or may not be obliques. Arguments are selected for by the verb, and they are (generally) not optional, though in some cases they may be omitted where they are understood or implicitly existentially quantified over.
. Included Ḋative Ṣue gave to Bill a book. (42) . Conative (at) C̣arla slid at the book. (179) . Idiosyncratic prepositional verbs İ wonder who to place my trust in. (711) She voted for herself. (743) . Locative J̇ohn was found in the office. (283) . PP predicates Ėverything you like is on the table. (736)
. Excluded ṖP adjuncts Particles Arguments of deverbal expressions ṭhe putter of books left. (892) . By-phrase Ṫed was bitten by the spider. (613)
Prepositional Phrase arguments of NPs or APs are individual-denoting arguments of a noun or adjective which are marked by a proposition. Arguments are selected for by the head, and they are (generally) not optional, though in some cases they may be omitted where they are understood or implicitly existentially quantified over.
. Included Ṙelational adjectives Ṁany people were fond of Pat. (936) *I was already aware of fact. (824) . Relational nouns Ẇe admired the pictures of us in the album. (759) They found the book on the atom. (780) . Arguments of deverbal nouns ṭhe putter of books left. (892)
Prepositional arguments introduced with by. Usually, this is the (semantic) subject of a passive verb, but in rare cases it may be the subject of a nominalized verb. Arguments are usually selected for by the head, and they are generally not optional. In this case, the argument introduced with by is semantically selected for by the verb, but it is syntactically optional. See [p.190]adger2003core and []collins2005smuggling.
. Included Passives Ted was bitten by the spider. (613) . Subjects of deverbal nouns the attempt by John to leave surprised me. (1003)
Expletives, or “dummy” arguments, are semantically inert arguments. The most common expletives in English are it and there, although not all occurrences of these items are expletives. Arguments are usually selected for by the head, and they are generally not optional. In this case, the expletive occupies a syntactic argument slot, but it is not semantically selected by the verb, and there is often a syntactic variation without the expletive. See [p.170-172]adger2003core and [p.82-83]kim2008syntax.
. Included There—inserted, existential There loved Sandy. (939) There is a nurse available. (466) . It—cleft, inserted It was a brand new car that he bought. (347) It bothers me that John coughs. (314) It is nice to go abroad. (47) . Environmental it Kerry remarked it was late. (821) Poor Bill, it had started to rain and he had no umbrella. (116) You've really lived it up. (160)
. Excluded John counted on Bill to get there on time. (996) I bought it to read. (1026)
Arg Altern (Argument Alternations)
These are verbs with 3 or more arguments of any kind. Arity refers to the number of arguments that a head (or function) selects for. Arguments are usually selected for by the head, and they are generally not optional. They may be DPs, PPs, CPs, VPs, APs or other categories.
. Included Ditransitive [Sue] gave [to Bill] [a book]. (42) [Martha] carved [the baby] [a toy] out of wood. (139) . VP arguments [We] believed [John] [to be a fountain in the park]. (274) [We] made [them] [be rude]. (260) . Particles [He] let [the cats which were whining] [out]. (71) . Passives with by-phrase [A good friend] is remained [to me] [by him]. (237) . Expletives [We] expect [there] [to will rain]. (282) [There] is [a seat] [available]. (934) [It] bothers [me] [that he is here]. (1009) . Small clause [John] considers [Bill] [silly]. (1039)
. Excluded Resultatives, depictives [John] broke [the geode] [open].
These are VPs where a canonical argument of the verb is missing. This can be difficult to determine, but in many cases the missing argument is understood with existential quantification or generically, or contextually salient. See [p.106-109]sportiche2013introduction.
. Included Middle voice/causative inchoative The problem perceives easily. (66) . Passive The car was driven. (296) . Null complement anaphora Jean persuaded Robert. (380) Nobody told Susan. (883) . Dropped argument Kim put in the box. (253) The guests dined. (835) I wrote to Bill. (1030) . Transitive adjective John is eager. (27) We pulled free. (144) . Transitive noun I sensed his eagerness. (155) . Expletive insertion It loved Sandy. (949)
. Excluded Ted was bitten by the spider. (613)
These are VPs in which a non-canonical argument of the verb has been added. These cases are clearer to identify where the additional argument is a DP. In general, PPs which mark locations, times, beneficiaries, or purposes should be analyzed as adjuncts, while PPs marking causes can be considered arguments. See []pylkkanen2008introducing.
. Included Extra argument Linda winked her lip. (202) Sharon fainted from hunger. (204) I shaved myself. (526) . Causative I squeaked the door. (207) . Expletive insertion There is a monster in Loch Ness. (928) It annoys people that dogs bark. (943) . Benefactive Martha carved the baby a toy out of wood. (139)
The passive voice is marked by the demotion of the subject (either complete omission or to a by-phrase) and the verb appearing as a past participle. In the stereotypical construction there is an auxiliary be verb, though this may be absent. See [p.175-190]kim2008syntax, collins2005smuggling, and [p.311-333]sag2003syntactic.
. Included Verbs The earth was believed to be round. (157) . Pseudopassive The bed was slept in. (298) . Past participle adjuncts The horse raced past the barn fell. (900)
Imperative
The imperative mood is marked by the absence of a subject and the bare form of the verb, and expresses a command, request, or other directive speech act.
. Included Wash you! (224) Somebody just left - guess who. (528)
Binding
These are cases in which a reflexive (non-possessive) pronoun appears, usually bound by an antecedent. See [p.163-186]sportiche2013introduction and [p.203-226]sag2003syntactic.
. Included Ourselves like ourselves. (742) Which pictures of himself does John like? (386)
These are cases in which a non-reflexive pronoun appears along with its antecedent. This includes donkey anaphora, quantificational binding, and bound possessives, among other bound pronouns. See [p.163-186]sportiche2013introduction and [p.203-226]sag2003syntactic.
. Included Bound possessor The children admire their mother. (382) . Quantificational binding Everybody gets on well with a certain relative, but often only his therapist knows which one. (562) . Bound pronoun We gave us to the cause. (747)
Question
These are sentences in which the matrix clause is interrogative (either a wh- or polar question). See [pp.282-213]adger2003core, [pp.193-222]kim2008syntax, and [p.315-350]carnie2013syntax.
. Included Wh-question Who always drinks milk? (684) . Polar question Did Athena help us? (486)
These are embedded interrogative clauses appearing as arguments of verbs, nouns, and adjectives. Not including relative clauses and free relatives. See [p.297]adger2003core.
. Included Under VP I forgot how good beer tastes. (235) *What did you ask who saw? (508) . Under NP That is the reason why he resigned. (313) . Under AP They claimed they had settled on something, but it wasn't clear what they had settled on. (529) . Free relative What the water did to the bottle was fill it. (33)
. Excluded Relative clauses, free relatives
These are phrasal Wh-phrases, in which the wh-word moves along with other expressions, including prepositions (pied-piping) or nouns in the case of determiner wh-words such as how many and which.
. Included Pied-piping The ship sank, but I don't know with what. (541) . Other phrasal wh-phrases I know which book Mag read, and which book Bob read my report that you hadn't. (61) How sane is Peter? (88)
Relative clauses are noun modifiers appearing with a relativizer (either that or a wh-word) and an associated gap. See [p.223-244]kim2008syntax.
. Included Though he may hate those that criticize Carter, it doesn't matter. (332) *The book what inspired them was very long. (686) Everything you like is on the table. (736)
. Excluded The more you would want, the less you would eat. (6)
This is wh-movement out of an extraction island, or near-island. Islands include, for example, complex NPs, adjuncts, embedded questions, coordination. A near-island is an extraction that closely resembles an island violation, such as extraction out of an embedded clause, or across-the-board extraction. See [pp.323-333]adger2003core and [pp.332-334]carnie2013syntax.
. Included Embedded question *What did you ask who Medea gave? (493) . Adjunct What did you leave before they did? (598) . Parasitic gaps Which topic did you choose without getting his approval? (311) . Complex NP Who did you get an accurate description of? (483)
Comp Clause (Complement Clauses)
These are complement clauses acting as the (syntactic) subject of verbs. See [pp.90-91]kim2008syntax.
. Included That dogs bark annoys people. (942) The socks are ready for for you to put on to be planned. (112)
. Excluded Expletive insertion It bothers me that John coughs. (314)
These are complement clauses acting as (non-subject) arguments of verbs. See [pp.84-90]kim2008syntax.
. Included I can't believe Fred won't, either. (50) I saw that gas can explode. (222) It bothers me that John coughs. (314) . Clefts It was a brand new car that he bought. (347)
These are complement clauses acting as an argument of a noun or adjective. See [pp.91-94]kim2008syntax.
. Included Under NP Do you believe the claim that somebody was looking for something? (99) . Under AP The children are fond that they have ice cream. (842)
These are complement clauses with a non-finite matrix verb. Often, the complementizer is for, or there is no complementizer. See [pp.252-253,256-260]adger2003core.
. Included For complementizer I would prefer for John to leave. (990) . No Complementizer Mary intended John to go abroad. (48) . Ungrammatical Heidi thinks that Andy to eat salmon flavored candy bars. (363) . V-ing Only Churchill remembered Churchill giving the Blood, Sweat and Tears speech. (469)
These are complement clauses with no overt complementizer.
. Included Complement clause I'm sure we even got these tickets! (325) He announced he would marry the woman he loved most, but none of his relatives could figure out who. (572) . Relative clause The Peter we all like was at the party (484)
These are sentences with three or more nested verbs, where the VP is not an aux or modal, i.e. with the following syntax: [S ...[ VP ...[ VP ...[ VP ...] ...] ...] ...]
. Included Embedded VPs Max seemed to be trying to force Ted to leave the room, and Walt, Ira. (657) . Embedded clauses I threw away a book that Sandy thought we had read. (713)
Aux (Auxiliaries)
Any occurrence of negation in a sentence, including sentential negation, negative quantifiers, and negative adverbs.
. Included Sentential I can't remember the name of somebody who had misgivings. (123) . Quantifier No writer, and no playwright, meets in Vienna. (124) . Adverb They realised that never had Sir Thomas been so offended. (409)
Modal verbs (may, might, can, could, will, would, shall, should, must). See [pp.152-155]kim2008syntax.
. Included John can kick the ball. (280) As a statesman, scarcely could he do anything worth mentioning. (292)
. Excluded Pseudo-modals Sandy was trying to work out which students would be able to solve a certain problem. (600)
Auxiliary verbs (e.g. be, have, do). See [pp.149-174]kim2008syntax.
. Included They love to play golf, but I do not. (290) The car was driven. (296) he had spent five thousand dollars. (301)
. Excluded Pseudo-auxiliaries Sally asked if somebody was going to fail math class, but I can't remember who. (589) The cat got bitten. (926)
These are predicates acting as near-auxiliaries (e.g. the get-passive) or near-modals (e.g. willing).
. Included Near-auxiliaries Mary came to be introduced by the bartender and I also came to be. (55) *Sally asked if somebody was going to fail math class, but I can't remember who. (589) The cat got bitten. (926) . Near-modals Clinton is anxious to find out which budget dilemmas Panetta would be willing to tackle in a certain way, but he won't say in which. (593) Sandy was trying to work out which students would be able to solve a certain problem. (600)
to-VP (Infinitival VPs)
These are VPs with control verbs, where one argument is a non-finite to-VP without a covert subject co-indexed with an argument of the matrix verb. See [pp.252,266-291]adger2003core, [pp.203-222]sportiche2013introduction, and [pp.125-148]kim2008syntax.
. Included Intransitive subject control It tries to leave the country. (275) . Transitive subject control John promised Bill to leave. (977) . Transitive object control I want her to dance. (379) John considers Bill to be silly. (1040)
. Excluded VP args of NP/AP This violin is difficult to play sonatas on. (114) . Purpose There is a bench to sit on. (309) . Subject VPs To please John is easy. (315) . Argument present participles Medea denied poisoning the phoenix. (490) . Raising Anson believed himself to be handsome. (499)
These are VPs with raising predicates, where one argument is a non-finite to-VP without a covert subject co-indexed with an argument of the matrix verb. Unlike control verbs, the coindexed argument is not a semantic argument of the raising predicate. See [pp.260-266]adger2003core, [pp.203-222]sportiche2013introduction, and [pp.125-148]kim2008syntax.
. Included Subject raising Under the bed seems to be a fun place to hide. (277) . Object raising Anson believed himself to be handsome. (499) . Raising adjective John is likely to leave. (370)
These are embedded infinitival VPs containing a (non-subject) gap that is filled by an argument in the upper clause. Examples are purpose-VPs and tough-movement. See [pp.246-252]kim2008syntax.
. Included Tough-movement Drowning cats, which is against the law, are hard to rescue. (79) . Infinitival relatives Fed knows which politician her to vote for. (302) . Purpose the one with a red cover takes a very long time to read. (352) . Other non-finite VPs with extraction As a statesman, scarcely could he do anything worth mentioning. (292)
These are non-finite VP arguments of nouns and adjectives.
. Included Raising adjectives John is likely to leave. (370) . Control adjectives The administration has issued a statement that it is willing to meet a student group, but I'm not sure which one. (604) . Control nouns As a teacher, you have to deal simultaneously with the administration's pressure on you to succeed, and the children's to be a nice guy. (673) . Purpose VPs there is nothing to do. (983)
These are miscellaneous non-finite VPs.
. Included I saw that gas can explode. (222) . Gerunds/Present participles Students studying English reads Conrad's Heart of Darkness while at university. (262) Knowing the country well, he took a short cut. (411) John became deadly afraid of flying. (440) . Subject VPs To please John is easy. (315) . Nominalized VPs What Mary did Bill was give a book. (473)
. Excluded to-VPs acting as complements or modifiers of verbs, nouns, or adjectives
N, Adj (Nouns and Adjectives)
These are nouns and adjectives derived from verbs.
. Included Deverbal nouns the election of John president surprised me. (1001) . “Light” verbs The birds give the worm a tug. (815) . Gerunds If only Superman would stop flying planes! (773) . Event-wh What the water did to the bottle was fill it. (33) . Deverbal adjectives His or her least known work. (95)
Relational nouns are NPs with an obligatory (or existentially closed) argument. A particular relation holds between the members of the extension of NP and the argument. The argument must be a DP possessor or a PP. See [pp.82-83]kim2008syntax.
. Included Nouns with of-arguments John has a fear of dogs. (353) . Nouns with other PP-arguments Henri wants to buy which books about cooking? (442) . Measure nouns I bought three quarts of wine and two of Clorox. (667) . Possessed relational nouns John's mother likes himself. (484)
. Excluded Nouns with PP modifiers Some people consider dogs in my neighborhood dangerous. (802)
Transitive (non-relational) nouns take a VP or CP argument. See [pp.82-83]kim2008syntax.
. Included VP argument the attempt by John to leave surprised me. (1003) . CP argument Which report that John was incompetent did he submit? (69) . QP argument That is the reason why he resigned. (313)
These are complex NPs, including coordinated nouns and nouns with modifiers (excluding prenominal adjectives).
. Included Modified NPs The madrigals which Henry plays the lute and sings sound lousy. (84) John bought a book on the table. (233) . NPs with coordination The soundly and furry cat slept. (871) The love of my life and mother of my children would never do such a thing. (806)
Noun-noun compounds are NPs consisting of two constituent nouns.
. Included It was the peasant girl who got it. (320) A felon was elected to the city council. (938)
These are adjectives that take an obligatory (or existentially closed) argument. A particular relation holds between the members of the extension of the modified NP and the argument. The argument must be a DP or PP. See [pp.80-82]kim2008syntax.
. Included Of-arguments The chickens seem fond of the farmer. (254) . Other PP arguments This week will be a difficult one for us. (241) John made Bill mad at himself. (1035)
These are transitive (non-relational) adjectives, i.e. adjectives that take a VP or CP argument. See [pp.80-82]kim2008syntax.
. Included VP argument John is likely to leave. (370) . CP argument John is aware of it that Bill is here. (1013) . QP argument The administration has issued a statement that it is willing to meet a student group, but I'm not sure which one. (604)
S-Syntax (Sentence-Level Syntax)
These are expressions with non-canonical word order. See, for example, [p.76]sportiche2013introduction.
. Included Particle shift Mickey looked up it. (24) . Preposed modifiers Out of the box jumped a little white rabbit. (215) *Because she's so pleasant, as for Mary I really like her. (331) . Quantifier float The men will all leave. (43) . Preposed argument With no job would John be happy. (333) . Relative clause extraposition Which book's, author did you meet who you liked? (731) . Misplaced phrases Mary was given by John the book. (626)
This includes topicalization and focus constructions. See [pp.258-269]kim2008syntax and [pp.68-75]sportiche2013introduction.
. Included Topicalization Most elections are quickly forgotten, but the election of 2000, everyone will remember for a long time. (807) . Clefts It was a brand new car that he bought. (347) . Pseudo-clefts What John promised is to be gentle. (441)
. Excluded There-insertion . Passive
These are parentheticals or fragmentary expressions.
. Included Parenthetical Mary asked me if, in St. Louis, John could rent a house cheap. (704) . Fragments The soup cooks, thickens. (448) . Tag question George has spent a lot of money, hasn't he? (291)
Coordinations and disjunctions are expressions joined with and, but, or, etc. See [pp.61-68]sportiche2013introduction.
. Included DP coordination Dave, Dan, Erin, Jaime, and Alina left. (341) . Right Node Raising Kim gave a dollar to Bobbie and a dime to Jean. (435) . Clausal coordination She talked to Harry, but I don't know who else. (575) . Or, nor No writer, nor any playwright, meets in Vienna. (125) . Pseudo-coordination I want to try and buy some whiskey. (432) . Juxtaposed clauses Lights go out at ten. There will be no talking afterwards. (779)
This includes subordinate clauses, especially with subordinating conjunctions, and conditionals.
. Included Conditional If I can, I will work on it. (56) . Subordinate clause What did you leave before they did? (598) *Because Steve's of a spider's eye had been stolen, I borrowed Fred's diagram of a snake's fang. (677) . Correlative As you eat the most, you want the least. (5)
This includes VP or NP ellipsis, or anaphora standing for VPs or NPs (not DPs). See [pp.55-61]sportiche2013introduction.
. Included VP Ellipsis If I can, I will work on it. (56) Mary likes to tour art galleries, but Bill hates to. (287) . VP Anaphor I saw Bill while you did so Mary. (472) . NP Ellipsis Tom's dog with one eye attacked Fred's. (679) . NP anaphor the one with a red cover takes a very long time to read. (352) . Sluicing Most columnists claim that a senior White House official has been briefing them, and the newspaper today reveals which one. (557) . Gapping Bill ate the peaches, but Harry the grapes. (646)
These are adjuncts modifying sentences, sentence-level adverbs, subordinate clauses.
. Included Sentence-level adverbs Suddenly, there arrived two inspectors from the INS. (447) . Subordinate clauses The storm arrived while we ate lunch. (852)
Determiner
These are quantificational DPs, i.e. the determiner is a quantifier.
. Included Quantifiers Every student, and he wears socks, is a swinger. (118) We need another run to win. (769) . Partitive Neither of students failed. (265)
These are quantifiers that take PP arguments, and measure nouns. See [pp.109-118]kim2008syntax.
. Included Quantifiers with PP arguments Neither of students failed. (265) . Numerals One of Korea's most famous poets wrote these lines. (294) . Measure nouns I bought three quarts of wine and two of Clorox. (667)
These are negative polarity items (any, ever, etc.) and free choice items (any). See kadmon1993any.
. Included NPI Everybody around here who ever buys anything on credit talks in his sleep. (122) I didn't have a red cent. (350) . FCI Any owl hunts mice. (387)
These are comparative constructions. See BIBREF22 .
. Included Correlative The angrier Mary got, the more she looked at pictures. (9) They may grow as high as bamboo. (337) I know you like the back of my hand. (775)
Violations
These are sentences that include a semantic violation, including type mismatches, violations of selectional restrictions, polarity violations, definiteness violations.
. Included Violation of selectional restrictions many information was provided. (218) *It tries to leave the country. (275) . Aspectual violations John is tall on several occasions. (540) . Definiteness violations It is the problem that he is here. (1018) . Polarity violations Any man didn't eat dinner. (388)
These are sentences that include a violation in inflectional morphology, including tense-aspect marking, or agreement.
. Included Case Us love they. (46) . Agreement Students studying English reads Conrad's Heart of Darkness while at university. (262) . Gender Sally kissed himself. (339) . Tense/Aspect Kim alienated cats and beating his dog. (429)
These are sentences with a violation that can be identified with the presence or absence of a single word.
. Included Missing word John put under the bathtub. (247) *I noticed the. (788) . Extra word Everyone hopes everyone to sleep. (467) *He can will go (510)
Introduction
The effectiveness and ubiquity of pretrained sentence embeddings for natural language understanding has grown dramatically in recent years. Recent sentence encoders like OpenAI's Generative Pretrained Transformer BIBREF3 and BERT BIBREF2 achieve the state of the art on the GLUE benchmark BIBREF4 . Among the GLUE tasks, these state-of-the-art systems make their greatest gains on the acceptability task with the Corpus of Linguistic Acceptability BIBREF0 . CoLA contains example sentences from linguistics publications labeled by experts for grammatical acceptability, and written to show subtle grammatical features. Because minimal syntactic differences can separate acceptable sentences from unacceptable ones (What did Bo write a book about? / *What was a book about written by Bo?), and acceptability classifiers are more reliable when trained on GPT and BERT than on recurrent models, it stands to reason that GPT and BERT have better implicit knowledge of syntactic features relevant to acceptability.
Our goal in this paper is to develop an evaluation dataset that can locate which syntactic features a model successfully learns by identifying the syntactic domains of CoLA in which it performs the best. Using this evaluation set, we compare the syntactic knowledge of GPT and BERT in detail, and investigate the strengths of these models over the baseline BiLSTM model published by warstadt2018neural. The analysis set includes expert annotations labeling the entire CoLA development set for the presence of 63 fine-grained syntactic features.
We identify many specific syntactic features that make sentences harder to classify, and many that have little effect. For instance, sentences involving unusual or marked argument structures are no harder than the average sentence, while sentences with long distance dependencies are hard to learn. We also find features of sentences that accentuate or minimize the differences between models. Specifically, the transformer models seem to learn long-distance dependencies much better than the recurrent model, yet have no advantage on sentences with morphological violations.
Analysis Set
We introduce a grammatically annotated version of the entire CoLA development set to facilitate detailed error analysis of acceptability classifiers. These 1043 sentences are expert-labeled for the presence of 63 minor grammatical features organized into 15 major features. Each minor feature belongs to a single major feature. A sentence belongs to a major feature if it belongs to one or more of the relevant minor features. The Appendix includes descriptions of each feature along with examples and the criteria used for annotation.
The 63 minor features and 15 major features are illustrated in Table TABREF5 . Considering minor features, an average of 4.31 features is present per sentence (SD=2.59). The average feature is present in 71.3 sentences (SD=54.7). Turning to major features, the average sentence belongs to 3.22 major features (SD=1.66), and the average major feature is present in 224 sentences (SD=112). Every sentence is labeled with at least one feature.
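To make the annotation format concrete, the sketch below shows one way such an analysis set could be stored and summarized as a binary sentence-by-feature matrix; the file name and column layout are hypothetical, not the released format.

```python
# Minimal sketch (not the authors' released code): storing the annotated CoLA dev
# set as a binary sentence-by-feature matrix and reproducing the summary statistics
# reported above. The file name and column layout are hypothetical.
import pandas as pd

# One row per sentence; 0/1 columns for each of the 63 minor features, e.g.
# sentence_id, label, simple, copula, particle, ...
annot = pd.read_csv("cola_dev_annotations.csv")
feature_cols = annot.columns.drop(["sentence_id", "label"])

per_sentence = annot[feature_cols].sum(axis=1)  # minor features per sentence
per_feature = annot[feature_cols].sum(axis=0)   # sentences per minor feature

print(f"features per sentence: mean={per_sentence.mean():.2f} sd={per_sentence.std():.2f}")
print(f"sentences per feature: mean={per_feature.mean():.1f} sd={per_feature.std():.1f}")
```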
Annotation
The sentences were annotated manually by one of the authors, who is a PhD student with extensive training in formal linguistics. The features were developed in a trial stage, in which the annotator performed a similar annotation with a different annotation schema for several hundred sentences from CoLA not belonging to the development set.
Feature Descriptions
Here we briefly summarize the feature set in order of the major features. Many of these constructions are well-studied in syntax, and further background can be found in textbooks such as adger2003core and sportiche2013introduction.
This major feature contains only one minor feature, simple, including sentences with a syntactically simplex subject and predicate.
These three features correspond to predicative phrases, including copular constructions, small clauses (I saw Bo jump), and resultatives/depictives (Bo wiped the table clean).
These six features mark various kinds of optional modifiers. This includes modifiers of NPs (The boy with blue eyes gasped) or VPs (The cat meowed all morning), and temporal (Bo swam yesterday) or locative (Bo jumped on the bed).
These five features identify syntactically selected arguments, differentiating, for example, obliques (I gave a book to Bo), PP arguments of NPs and VPs (Bo voted for Jones), and expletives (It seems that Bo left).
These four features mark VPs with unusual argument structures, including added arguments (I baked Bo a cake) or dropped arguments (Bo knows), and the passive (I was applauded).
This contains only one feature for imperative clauses (Stop it!).
These are two minor features, one for bound reflexives (Bo loves himself), and one for other bound pronouns (Bo thinks he won).
These five features apply to sentences with question-like properties. They mark whether the interrogative is an embedded clause (I know who you are), a matrix clause (Who are you?), or a relative clause (Bo saw the guy who left); whether it contains an island out of which extraction is unacceptable (*What was a picture of hanging on the wall?); or whether there is pied-piping or a multi-word wh-expressions (With whom did you eat?).
These six features apply to various complement clauses (CPs), including subject CPs (That Bo won is odd); CP arguments of VPs or NPs/APs (The fact that Bo won); CPs missing a complementizer (I think Bo's crazy); or non-finite CPs (This is ready for you to eat).
These four minor features mark the presence of auxiliary or modal verbs (I can win), negation, or “pseudo-auxiliaries” (I have to win).
These five features mark various infinitival embedded VPs, including control VPs (Bo wants to win); raising VPs (Bo seemed to fly); VP arguments of NPs or APs (Bo is eager to eat); and VPs with extraction (e.g. This is easy to read ts ).
These seven features mark complex NPs and APs, including ones with PP arguments (Bo is fond of Mo), or CP/VP arguments; noun-noun compounds (Bo ate mud pie); modified NPs, and NPs derived from verbs (Baking is fun).
These seven features mark various unrelated syntactic constructions, including dislocated phrases (The boy left who was here earlier); movement related to focus or information structure (This I've gotta see this); coordination, subordinate clauses, and ellipsis (I can't); or sentence-level adjuncts (Apparently, it's raining).
These four features mark various determiners, including quantifiers, partitives (two of the boys), negative polarity items (I *do/don't have any pie), and comparative constructions.
These three features apply only to unacceptable sentences, and only ones which are ungrammatical due to a semantic or morphological violation, or the presence or absence of a single salient word.
Correlations
We wish to emphasize that these features are overlapping and in many cases are correlated, thus not all results from using this analysis set will be independent. We analyzed the pairwise Matthews Correlation Coefficient BIBREF17 of the 63 minor features (giving 1953 pairs), and of the 15 major features (giving 105 pairs). MCC is a special case of Pearson's r for Boolean variables. These results are summarized in Table TABREF25. Regarding the minor features, 60 pairs had a correlation of 0.2 or greater, 17 had a correlation of 0.4 or greater, and 6 had a correlation of 0.6 or greater. None had an anti-correlation of greater magnitude than -0.17. Turning to the major features, 6 pairs had a correlation of 0.2 or greater, and 2 had an anti-correlation of greater magnitude than -0.2.
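The pairwise analysis amounts to computing MCC between every pair of binary feature columns. A minimal sketch, reusing the hypothetical annotation table from the earlier example (scikit-learn's matthews_corrcoef handles the Boolean case directly):

```python
# Sketch of the pairwise feature correlations: MCC between every pair of binary
# feature columns (1953 minor-feature pairs; the same loop over the 15 major
# features gives 105 pairs). Reuses the hypothetical `annot`/`feature_cols` above.
from itertools import combinations
from sklearn.metrics import matthews_corrcoef

pairwise_mcc = {
    (f1, f2): matthews_corrcoef(annot[f1], annot[f2])
    for f1, f2 in combinations(feature_cols, 2)
}
strong_pairs = {pair: mcc for pair, mcc in pairwise_mcc.items() if abs(mcc) >= 0.2}
```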
We can see at least three reasons for these observed correlations. First, some correlations can be attributed to overlapping feature definitions. For instance, expletive arguments (e.g. There are birds singing) are, by definition, non-canonical arguments, and thus are a subset of add arg. However, some added arguments, such as benefactives (Bo baked Mo a cake), are not expletives. Second, some correlations can be attributed to grammatical properties of the relevant constructions. For instance, question and aux are correlated because main-clause questions in English require subject-aux inversion and in many cases the insertion of auxiliary do (Do lions meow?). Third, some correlations may be a consequence of the sources sampled in CoLA and the phenomena they focus on. For instance, the unusually high correlation of Emb-Q and ellipsis/anaphor can be attributed to BIBREF18 , which is an article about the sluicing construction involving ellipsis of an embedded interrogative (e.g. I saw someone, but I don't know who).
Finally, the two strongest anti-correlations between major features are between simple and the two features related to argument structure, argument types and arg altern. This follows from the definition of simple, which excludes any sentence containing a large number or unusual configuration of arguments.
Models Evaluated
We train MLP acceptability classifiers for CoLA on top of three sentence encoders: (1) the CoLA baseline encoder with ELMo-style embeddings, (2) OpenAI GPT, and (3) BERT. We use publicly available sentence encoders with pretrained weights.
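The following PyTorch sketch illustrates the general shape of such a classifier: a small MLP over a pretrained encoder's sentence representation. The encoder interface, layer sizes, and dropout are placeholders rather than the exact published configuration.

```python
# Illustrative sketch of an MLP acceptability classifier on top of a pretrained
# sentence encoder. `encode` and the layer sizes are placeholders.
import torch.nn as nn

class AcceptabilityClassifier(nn.Module):
    def __init__(self, encoder, enc_dim=768, hidden_dim=512):
        super().__init__()
        self.encoder = encoder                 # BERT, GPT, or the BiLSTM baseline
        self.mlp = nn.Sequential(
            nn.Linear(enc_dim, hidden_dim),
            nn.Tanh(),
            nn.Dropout(0.1),
            nn.Linear(hidden_dim, 2),          # acceptable vs. unacceptable
        )

    def forward(self, batch):
        sent_emb = self.encoder.encode(batch)  # placeholder: one vector per sentence
        return self.mlp(sent_emb)

# Training minimizes cross-entropy against CoLA's binary acceptability labels, e.g.
# loss = nn.CrossEntropyLoss()(model(batch), labels)
```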
Overall CoLA Results
The overall performance of the three sentence encoders is shown in Table TABREF33 . Performance on CoLA is measured using MCC BIBREF14 . We present the best single restart for each encoder, the mean over restarts for an encoder, and the result of ensembling the restarts for a given encoder, i.e. taking the majority classification for a given sentence, or the majority label of acceptable if tied. For BERT results, we exclude 5 out of the 20 restarts because they were degenerate (MCC=0).
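The ensembling rule is simple enough to state in a few lines; the sketch below assumes binary predictions (1 = acceptable) from each restart.

```python
# Sketch of the restart-ensembling rule described above: majority vote over binary
# predictions (1 = acceptable), with ties broken toward the acceptable label.
import numpy as np

def ensemble(predictions):
    """predictions: array-like of shape (n_restarts, n_sentences) with values in {0, 1}."""
    preds = np.asarray(predictions)
    votes = preds.sum(axis=0)
    return (votes >= preds.shape[0] / 2).astype(int)  # a tie counts as acceptable
```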
Across the board, BERT outperforms GPT, which outperforms the CoLA baseline. However, BERT and GPT are much closer in performance than they are to the CoLA baseline. While ensemble performance exceeded the average for BERT and GPT, it did not outperform the best single model.
Analysis Set Results
The results for the major features and minor features are shown in Figures FIGREF26 and FIGREF35, respectively. For each feature, we measure the MCC of the sentences including that feature. We plot the mean of these results across the different restarts for each model, and error bars mark the mean ± 1 standard deviation. For the Violations features, MCC is technically undefined because these features only contain unacceptable sentences. We report MCC in these cases by including for each feature a single acceptable example that is correctly classified by all models.
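Concretely, the per-feature evaluation can be sketched as follows; `annot`, `gold`, and `pred` are assumed to be the hypothetical annotation table and aligned label/prediction series from the earlier examples, and the padding step implements the workaround for features that contain only unacceptable sentences.

```python
# Sketch of the per-feature evaluation: MCC restricted to the sentences bearing each
# feature, with the padding trick for all-unacceptable (Violations) features.
from sklearn.metrics import matthews_corrcoef

def per_feature_mcc(annot, gold, pred, feature_cols):
    scores = {}
    for f in feature_cols:
        mask = annot[f] == 1
        y_true, y_pred = list(gold[mask]), list(pred[mask])
        if len(set(y_true)) == 1:   # e.g. a Violations feature: all sentences unacceptable
            y_true.append(1)        # pad with one correctly classified acceptable example,
            y_pred.append(1)        # as described above, so that MCC is defined
        scores[f] = matthews_corrcoef(y_true, y_pred)
    return scores
```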
Comparison across features reveals that the presence of certain features has a large effect on performance, and we comment on some overall patterns below. Within a given feature, the effect of model type is overwhelmingly stable, and resembles the overall difference in performance. However, we observe several interactions, i.e. specific features where the relative performance of models does not track their overall relative performance.
Among the major features (Figure FIGREF26 ), performance is universally highest on the simple sentences, and is higher than each model's overall performance. Though these sentences are simple, we notice that the proportion of ungrammatical ones is on par with the entire dataset. Otherwise we find that a model's performance on sentences of a given feature is on par with or lower than its overall performance, reflecting the fact that features mark the presence of unusual or complex syntactic structure.
Performance is also high (and close to overall performance) on sentences with marked argument structures (Argument Types and Arg(ument) Alt(ernation)). While these models are still worse than human (overall) performance on these sentences, this result indicates that argument structure is relatively easy to learn.
Comparing different kinds of embedded content, we observe higher performance on sentences with embedded clauses (major feature=Comp Clause) and embedded VPs (major feature=to-VP) than on sentences with embedded interrogatives (minor features=Emb-Q, Rel Clause). An exception to this trend is the minor feature No C-izer, which labels complement clauses without a complementizer (e.g. I think you're crazy). Low performance on these sentences compared to most other features in Comp Clause might indicate that complementizers are an important syntactic cue for these models.
As the major feature Question shows, the difficulty of sentences with question-like syntax applies beyond just embedded questions. Excluding polar questions, sentences with question-like syntax almost always involve extraction of a wh-word, creating a long-distance dependency between the wh-word and its extraction site, which may be difficult for models to recognize.
The most challenging features are all related to Violations. Low performance on Infl/Agr Violations, which marks morphological violations (He washed yourself, This is happy), is especially striking because a relatively high proportion (29%) of these sentences are Simple. These models are likely to be deficient in encoding morphological features because they are word-level models and do not have direct access to sub-word information like inflectional endings, which indicates that these features are difficult to learn effectively purely from lexical distributions.
Finally, unusual performance on some features is due to small samples and comes with a high standard deviation, suggesting the result is unreliable. This includes CP Subj, Frag/Paren, imperative, NPI/FCI, and Comparative.
Comparing within-feature performance of the three encoders to their overall performance, we find they have differing strengths and weaknesses. BERT stands out over other models in Deep Embed, which includes challenging sentences with doubly-embedded clauses, as well as in several features involving extraction (i.e. long-distance dependencies) such as VP+Extract and Info-Struc. The transformer models show evidence of learning long-distance dependencies better than the CoLA baseline. They outperform the CoLA baseline by an especially wide margin on Bind:Refl, which always involves establishing a dependency between a reflexive and its antecedent (Bo tries to love himself). They also have a large advantage in dislocation, in which expressions are separated from their dependents (Bo practiced on the train an important presentation). The advantage of BERT and GPT may be due in part to their use of the transformer architecture. Unlike the BiLSTM used by the CoLA baseline, the transformer uses a self-attention mechanism that associates all pairs of words regardless of distance.
In some cases models showed surprisingly good or bad performance, revealing possible idiosyncrasies of the sentence embeddings they output. For instance, the CoLA baseline performs on par with the others on the major feature adjunct, especially considering the minor feature Particle (Bo looked the word up).
Furthermore, all models struggle equally with sentences in Violations, indicating that the advantages of the transformer models over the CoLA baseline do not extend to the detection of morphological violations (Infl/Agr Violation) or single word anomalies (Extra/Missing Expr).
Length Analysis
For comparison, we analyze the effect of sentence length on acceptability classifier performance. The results are shown in Figure FIGREF39 . The results for the CoLA baseline are inconsistent, but do drop off as sentence length increases. For BERT and GPT, performance decreases very steadily with length. Exceptions are extremely short sentences (length 1-3), which may be challenging due to insufficient information; and extremely long sentences, where we see a small (but somewhat unreliable) boost in BERT's performance. BERT and GPT are generally quite close in performance, except on the longest sentences, where BERT's performance is considerably better.
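A sketch of this length analysis, again assuming the annotation table carries the raw sentence text and that gold labels and model predictions are pandas series aligned to its index; the bin edges are illustrative.

```python
# Sketch of the length analysis: bin dev-set sentences by token count and compute
# MCC within each bin. `annot`, `gold`, and `pred` are the hypothetical objects
# from the earlier sketches; bin edges are illustrative.
import pandas as pd
from sklearn.metrics import matthews_corrcoef

lengths = annot["sentence"].str.split().str.len()
length_bins = pd.cut(lengths, bins=[0, 3, 6, 9, 12, 15, 20, 40])
for interval, group in annot.groupby(length_bins, observed=True):
    score = matthews_corrcoef(gold.loc[group.index], pred.loc[group.index])
    print(interval, round(score, 3))
```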
Conclusion
Using a new grammatically annotated analysis set, we identify several syntactic phenomena that are predictive of good or bad performance of current state of the art sentence encoders on CoLA. We also use these results to develop hypotheses about why BERT is successful, and why transformer models outperform sequence models.
Our findings can guide future work on sentence embeddings. A current weakness of all sentence encoders we investigate, including BERT, is the identification of morphological violations. Future engineering work should investigate whether switching to a character-level model can mitigate this problem. Additionally, transformer models appear to have an advantage over sequence models with long-distance dependencies, but still struggle with these constructions relative to more local phenomena. It stands to reason that this performance gap might be widened by training larger or deeper transformer models, or training on longer or more complex sentences. This analysis set can be used by engineers interested in evaluating the syntactic knowledge of their encoders.
Finally, these findings suggest possible controlled experiments that could confirm whether there is a causal relation between the presence of the syntactic features we single out as interesting and model performance. Our results are purely correlational, and do not mark whether a particular construction is crucial for the acceptability of the sentence. Future experiments following ettinger2018assessing and kann2019verb can semi-automatically generate datasets manipulating, for example, length of long-distance dependencies, inflectional violations, or the presence of interrogatives, while controlling for factors like sentence length and word choice, in order to determine the extent to which these features impact the quality of sentence embeddings.
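As an illustration of what such semi-automatic generation might look like (not a method used in this paper), the sketch below builds minimal pairs from a fixed template, varying only subject-verb agreement while holding the lexical items constant.

```python
# Hypothetical illustration of template-based generation of controlled minimal
# pairs: only agreement varies; the template and lexical items are held fixed.
from itertools import product

subjects = [("The author", "writes", "write"), ("The authors", "write", "writes")]
objects = ["the book", "the reviews"]

items = []
for (subj, verb_ok, verb_bad), obj in product(subjects, objects):
    items.append((f"{subj} {verb_ok} {obj}.", 1))   # acceptable
    items.append((f"{subj} {verb_bad} {obj}.", 0))  # agreement violation
```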
Acknowledgments
We would like to thank Jason Phang and Thibault Févry for sharing GPT and BERT model predictions on CoLA, and Alex Wang for feedback.
Simple
These are sentences with transitive or intransitive verbs appearing with their default syntax and argument structure. All arguments are noun phrases (DPs), and there are no modifiers or adjuncts on DPs or the VP.
. Included John owns the book. (37) Park Square has a festive air. (131) *Herself likes Mary's mother. (456)
. Excluded Bill has eaten cake. I gave Joe a book.
Pred (Predicates)
These are sentences including the verb be used predicatively. Also, sentences where the object of the verb is itself a predicate, which applies to the subject. Not included are auxiliary uses of be or other predicate phrases that are not linked to a subject by a verb.
. Included John is eager. (27) He turned into a frog. (150) To please John is easy. (315)
. Excluded There is a bench to sit on. (309) John broke the geode open. The cake was eaten.
These sentences involve predication of a non-subject argument by another non-subject argument, without the presence of a copula. Some of these cases may be analyzed as small clauses. BIBREF35
. Included John called the president a fool. (234) John considers himself proud of Mary. (464) They want them arrested. (856) the election of John president surprised me. (1001)
Modifiers that act as predicates of an argument. Resultatives express a resulting state of that argument, and depictives describe that argument during the matrix event. See BIBREF24 .
. Included Resultative The table was wiped by John clean. (625) The horse kicked me black and blue. (898) . Depictive John left singing. (971) In which car was the man seen? (398)
. Excluded He turned into a frog. (150)
Adjunct
Particles are lone prepositions associated with verbs. When they appear with transitive verbs they may immediately follow the verb or the object. Verb-particle pairs may have a non-compositional (idiomatic) meaning. See [pp. 69-70]carnie2013syntax and [pp. 16-17]kim2008syntax.
. Included The argument was summed by the coach up. (615) Some sentences go on and on and on. (785) *He let the cats which were whining out. (71)
Adjuncts modifying verb phrases. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. See BIBREF33 .
. Included PP-adjuncts, e.g. locative, temporal, instrumental, beneficiary Nobody who hates to eat anything should work in a delicatessen. (121) Felicia kicked the ball off the bench. (127) . Adverbs Mary beautifully plays the violin. (40) John often meets Mary. (65) . Purpose VPs We need another run to win. (769)
. Excluded PP arguments Sue gave to Bill a book. (42) Everything you like is on the table. (736) . S-adjuncts John lost the race, unfortunately.
These are adjuncts modifying noun phrases. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. Single-word prenominal adjectives are excluded, as are relative clauses (this has another category).
. Included PP-adjuncts Tom's dog with one eye attacked Frank's with three legs. (676) They were going to meet sometime on Sunday, but the faculty didn't know when. (565) . Phrasal adjectives As a statesman, scarcely could he do anything worth mentioning. (292) . Verbal modifiers The horse raced past the barn fell. (900)
. Excluded Prenominal Adjectives It was the policeman met that several young students in the park last night. (227) . Relative Clauses . NP arguments
These are adjuncts of VPs and NPs that specify a time or modify tense or aspect or frequency of an event. Adjuncts are (usually) optional, and they do not change the category of the expression they modify.
. Included Short adverbials (never, today, now, always) Which hat did Mike quip that she never wore? (95) . PPs Fiona might be here by 5 o'clock. (426) . When I inquired when could we leave. (520)
These are adjuncts of VPs and NPs that specify a location of an event or a part of an event, or of an individual. Adjuncts are (usually) optional, and they do not change the category of the expression they modify.
. Included Short adverbials . PPs The bed was slept in. (298) *Anson demonized up the Khyber (479) Some people consider dogs in my neighborhood dangerous. (802) Mary saw the boy walking toward the railroad station. (73) . Where I found the place where we can relax. (307)
These are miscellaneous non-finite VPs.
. Included İ saw that gas can explode. (222) Gerunds/Present participles Ṣtudents studying English reads Conrad's Heart of Darkness while at university. (262) Knowing the country well, he took a short cut. (411) John became deadly afraid of flying. (440) . Subject VPs Ṫo please John is easy. (315) . Nominalized VPs Ẉhat Mary did Bill was give a book. (473)
. Excluded ṫo-VPs acting as complements or modifiers of verbs, nouns, or adjectives
N, Adj (Nouns and Adjectives)
These are nouns and adjectives derived from verbs.
. Included Ḋeverbal nouns ṭhe election of John president surprised me. (1001) . “Light” verbs Ṫhe birds give the worm a tug. (815) . Gerunds İf only Superman would stop flying planes! (773) . Event-wh Ẇhat the water did to the bottle was fill it. (33) . Deverbal adjectives Ḣis or her least known work. (95)
Relational nouns are NPs with an obligatory (or existentially closed) argument. A particular relation holds between the members of the extension of NP and the argument. The argument must be a DP possessor or a PP. See [pp.82-83]kim2008syntax.
. Included Ṅouns with of-arguments J̇ohn has a fear of dogs. (353) . Nouns with other PP-arguments Ḣenri wants to buy which books about cooking? (442) . Measure nouns İ bought three quarts of wine and two of Clorox. (667) . Possessed relational nouns J̣ohn's mother likes himself. (484)
. Excluded Ṅouns with PP modifiers Ṡome people consider dogs in my neighborhood dangerous. (802)
Transitive (non-relational) nouns take a VP or CP argument. See [pp.82-83]kim2008syntax.
. Included V̇P argument ṫhe attempt by John to leave surprised me. (1003) . CP argument Ẉhich report that John was incompetent did he submit? (69) . QP argument Ṫhat is the reason why he resigned. (313)
These are complex NPs, including coordinated nouns and nouns with modifiers (excluding prenominal adjectives).
. Included Ṁodified NPs Ṭhe madrigals which Henry plays the lute and sings sound lousy. (84) John bought a book on the table. (233) . NPs with coordination Ṭhe soundly and furry cat slept. (871) The love of my life and mother of my children would never do such a thing. (806)
Noun-noun compounds are NPs consisting of two constituent nouns.
. Included İt was the peasant girl who got it. (320) A felon was elected to the city council. (938)
These are adjectives that take an obligatory (or existentially closed) argument. A particular relation holds between the members of the extension of the modified NP and the argument. The argument must be a DP or PP. See [pp.80-82]kim2008syntax.
. Included Ȯf-arguments Ṫhe chickens seem fond of the farmer. (254) . Other PP arguments Ṫhis week will be a difficult one for us. (241) John made Bill mad at himself. (1035)
A transitive (non-relational) adjective. I.e. an adjectives that takes a VP or CP argument. See [pp.80-82]kim2008syntax.
. Included V̇P argument J̇ohn is likely to leave. (370) . CP argument J̇ohn is aware of it that Bill is here. (1013) . QP argument Ṫhe administration has issued a statement that it is willing to meet a student group, but I'm not sure which one. (604)
S-Syntax (Sentence-Level Syntax)
These are expressions with non-canonical word order. See, for example, [p.76]sportiche2013introduction.
. Includes Ṗarticle shift Ṃickey looked up it. (24) . Preposed modifiers Ȯut of the box jumped a little white rabbit. (215) *Because she's so pleasant, as for Mary I really like her. (331) . Quantifier float Ṫhe men will all leave. (43) . Preposed argument Ẇith no job would John be happy. (333) . Relative clause extraposition Ẇhich book's, author did you meet who you liked? (731) . Misplaced phrases Ṁary was given by John the book. (626)
This includes topicalization and focus constructions. See [pp.258-269]kim2008syntax and [pp.68-75]sportiche2013introduction.
. Included Ṫopicalization Ṁost elections are quickly forgotten, but the election of 2000, everyone will remember for a long time. (807) . Clefts İt was a brand new car that he bought. (347) . Pseudo-clefts Ẇhat John promised is to be gentle. (441)
. Excluded Ṫhere-insertion Passive
These are parentheticals or fragmentary expressions. . Included Ṗarenthetical Ṁary asked me if, in St. Louis, John could rent a house cheap. (704) . Fragments Ṫhe soup cooks, thickens. (448) . Tag question Ġeorge has spent a lot of money, hasn't he? (291)
Coordinations and disjunctions are expressions joined with and, but, or, etc. See [pp.61-68]sportiche2013introduction.
. Included ḊP coordination Ḋave, Dan, Erin, Jaime, and Alina left. (341) . Right Node Raising K̇im gave a dollar to Bobbie and a dime to Jean. (435) . Clausal coordination Ṡhe talked to Harry, but I don't know who else. (575) . Or, nor Ṇo writer, nor any playwright, meets in Vienna. (125) . Pseudo-coordination İ want to try and buy some whiskey. (432) . Juxtaposed clauses Ŀights go out at ten. There will be no talking afterwards. (779)
This includes subordinate clauses, especially with subordinating conjunctions, and conditionals.
. Included Ċonditional İf I can, I will work on it. (56) . Subordinate clause Ẉhat did you leave before they did? (598) *Because Steve's of a spider's eye had been stolen, I borrowed Fred's diagram of a snake's fang. (677) . Correlative Ạs you eat the most, you want the least. (5)
This includes VP or NP ellipsis, or anaphora standing for VPs or NPs (not DPs). See [pp.55-61]sportiche2013introduction.
. Included V̇P Ellipsis İf I can, I will work on it. (56) Mary likes to tour art galleries, but Bill hates to. (287) . VP Anaphor İ saw Bill while you did so Mary. (472) . NP Ellipsis Ṫom's dog with one eye attacked Fred's. (679) . NP anaphor ṫhe one with a red cover takes a very long time to read. (352) . Sluicing Ṁost columnists claim that a senior White House official has been briefing them, and the newspaper today reveals which one. (557) . Gapping Ḃill ate the peaches, but Harry the grapes. (646)
These are adjuncts modifying sentences, sentence-level adverbs, subordinate clauses.
. Included Ṡentence-level adverbs Ṡuddenly, there arrived two inspectors from the INS. (447) . Subordinate clauses Ṫhe storm arrived while we ate lunch. (852)
Determiner
These are quantificational DPs, i.e. the determiner is a quantifier.
. Included Q̇uantifiers Ẹvery student, and he wears socks, is a swinger. (118) We need another run to win. (769) . Partitive Ṇeither of students failed. (265)
These are quantifiers that take PP arguments, and measure nouns. See [pp.109-118]kim2008syntax.
. Included Q̇uantifiers with PP arguments Ṇeither of students failed. (265) . Numerals Ȯne of Korea's most famous poets wrote these lines. (294) . Measure nouns İ bought three quarts of wine and two of Clorox. (667)
These are negative polarity items (any, ever, etc.) and free choice items (any). See kadmon1993any.
. Included ṄPI Ėverybody around here who ever buys anything on credit talks in his sleep. (122) I didn't have a red cent. (350) . FCI Ȧny owl hunts mice. (387)
These are comparative constructions. See BIBREF22 .
. Included Ċorrelative Ṫhe angrier Mary got, the more she looked at pictures. (9) They may grow as high as bamboo. (337) I know you like the back of my hand. (775)
Violations
These are sentences that include a semantic violation, including type mismatches, violations of selectional restrictions, polarity violations, definiteness violations.
. Included V̇olation of selectional restrictions ṃany information was provided. (218) *It tries to leave the country. (275) . Aspectual violations J̣ohn is tall on several occasions. (540) . Definiteness violations Ịt is the problem that he is here. (1018) . Polarity violations Ȧny man didn't eat dinner. (388)
These are sentences that include a violation in inflectional morphology, including tense-aspect marking, or agreement.
. Included Ċase Ụs love they. (46) . Agreement Ṣtudents studying English reads Conrad's Heart of Darkness while at university. (262) . Gender Ṣally kissed himself. (339) . Tense/Aspect Ḳim alienated cats and beating his dog. (429)
These are sentences with a violation that can be identified with the presence or absence of a single word.
. Included Ṁissing word J̣ohn put under the bathtub. (247) *I noticed the. (788) . Extra word Ẹveryone hopes everyone to sleep. (467) *He can will go (510) | CoLA contains example sentences from linguistics publications labeled by experts |
f809fd0d3acfaccbe6c8abb4a9d951a83eec9a32 | f809fd0d3acfaccbe6c8abb4a9d951a83eec9a32_0 | Q: How is the CoLA grammatically annotated?
Text: Introduction
The effectiveness and ubiquity of pretrained sentence embeddings for natural language understanding has grown dramatically in recent years. Recent sentence encoders like OpenAI's Generative Pretrained Transformer BIBREF3 and BERT BIBREF2 achieve the state of the art on the GLUE benchmark BIBREF4 . Among the GLUE tasks, these state-of-the-art systems make their greatest gains on the acceptability task with the Corpus of Linguistic Acceptability BIBREF0 . CoLA contains example sentences from linguistics publications labeled by experts for grammatical acceptability, and written to show subtle grammatical features. Because minimal syntactic differences can separate acceptable sentences from unacceptable ones (What did Bo write a book about? / *What was a book about written by Bo?), and acceptability classifiers are more reliable when trained on GPT and BERT than on recurrent models, it stands to reason that GPT and BERT have better implicit knowledge of syntactic features relevant to acceptability.
Our goal in this paper is to develop an evaluation dataset that can locate which syntactic features that a model successfully learns by identifying the syntactic domains of CoLA in which it performs the best. Using this evaluation set, we compare the syntactic knowledge of GPT and BERT in detail, and investigate the strengths of these models over the baseline BiLSTM model published by warstadt2018neural. The analysis set includes expert annotations labeling the entire CoLA development set for the presence of 63 fine-grained syntactic features.
We identify many specific syntactic features that make sentences harder to classify, and many that have little effect. For instance, sentences involving unusual or marked argument structures are no harder than the average sentence, while sentences with long distance dependencies are hard to learn. We also find features of sentences that accentuate or minimize the differences between models. Specifically, the transformer models seem to learn long-distance dependencies much better than the recurrent model, yet have no advantage on sentences with morphological violations.
Analysis Set
We introduce a grammatically annotated version of the entire CoLA development set to facilitate detailed error analysis of acceptability classifiers. These 1043 sentences are expert-labeled for the presence of 63 minor grammatical features organized into 15 major features. Each minor feature belongs to a single major feature. A sentence belongs to a major feature if it belongs to one or more of the relevant minor features. The Appendix includes descriptions of each feature along with examples and the criteria used for annotation.
The 63 minor features and 15 major features are illustrated in Table TABREF5 . Considering minor features, an average of 4.31 features is present per sentence (SD=2.59). The average feature is present in 71.3 sentences (SD=54.7). Turning to major features, the average sentence belongs to 3.22 major features (SD=1.66), and the average major feature is present in 224 sentences (SD=112). Every sentence is labeled with at least one feature.
Annotation
The sentences were annotated manually by one of the authors, who is a PhD student with extensive training in formal linguistics. The features were developed in a trial stage, in which the annotator performed a similar annotation with different annotation schema for several hundred sentences from CoLA not belonging to the development set.
Feature Descriptions
Here we briefly summarize the feature set in order of the major features. Many of these constructions are well-studied in syntax, and further background can be found in textbooks such as adger2003core and sportiche2013introduction.
This major feature contains only one minor feature, simple, including sentences with a syntactically simplex subject and predicate.
These three features correspond to predicative phrases, including copular constructions, small clauses (I saw Bo jump), and resultatives/depictives (Bo wiped the table clean).
These six features mark various kinds of optional modifiers. This includes modifiers of NPs (The boy with blue eyes gasped) or VPs (The cat meowed all morning), and temporal (Bo swam yesterday) or locative (Bo jumped on the bed).
These five features identify syntactically selected arguments, differentiating, for example, obliques (I gave a book to Bo), PP arguments of NPs and VPs (Bo voted for Jones), and expletives (It seems that Bo left).
These four features mark VPs with unusual argument structures, including added arguments (I baked Bo a cake) or dropped arguments (Bo knows), and the passive (I was applauded).
This contains only one feature for imperative clauses (Stop it!).
These are two minor features, one for bound reflexives (Bo loves himself), and one for other bound pronouns (Bo thinks he won).
These five features apply to sentences with question-like properties. They mark whether the interrogative is an embedded clause (I know who you are), a matrix clause (Who are you?), or a relative clause (Bo saw the guy who left); whether it contains an island out of which extraction is unacceptable (*What was a picture of hanging on the wall?); or whether there is pied-piping or a multi-word wh-expressions (With whom did you eat?).
These six features apply to various complement clauses (CPs), including subject CPs (That Bo won is odd); CP arguments of VPs or NPs/APs (The fact that Bo won); CPs missing a complementizer (I think Bo's crazy); or non-finite CPs (This is ready for you to eat).
These four minor features mark the presence of auxiliary or modal verbs (I can win), negation, or “pseudo-auxiliaries” (I have to win).
These five features mark various infinitival embedded VPs, including control VPs (Bo wants to win); raising VPs (Bo seemed to fly); VP arguments of NPs or APs (Bo is eager to eat); and VPs with extraction (e.g. This is easy to read ts ).
These seven features mark complex NPs and APs, including ones with PP arguments (Bo is fond of Mo), or CP/VP arguments; noun-noun compounds (Bo ate mud pie); modified NPs, and NPs derived from verbs (Baking is fun).
These seven features mark various unrelated syntactic constructions, including dislocated phrases (The boy left who was here earlier); movement related to focus or information structure (This I've gotta see this); coordination, subordinate clauses, and ellipsis (I can't); or sentence-level adjuncts (Apparently, it's raining).
These four features mark various determiners, including quantifiers, partitives (two of the boys), negative polarity items (I *do/don't have any pie), and comparative constructions.
These three features apply only to unacceptable sentences, and only ones which are ungrammatical due to a semantic or morphological violation, or the presence or absence of a single salient word.
Correlations
We wish to emphasize that these features are overlapping and in many cases are correlated, thus not all results from using this analysis set will be independent. We analyzed the pairwise Matthews Correlation Coefficient BIBREF17 of the 63 minor features (giving 1953 pairs), and of the 15 major features (giving 105 pairs). MCC is a special case of Pearson's INLINEFORM0 for Boolean variables. These results are summarized in Table TABREF25 . Regarding the minor features, 60 pairs had a correlation of 0.2 or greater, 17 had a correlation of 0.4 or greater, and 6 had a correlation of 0.6 or greater. None had an anti-correlation of greater magnitude than -0.17. Turning to the major features, 6 pairs had a correlation of 0.2 or greater, and 2 had an anti-correlation of greater magnitude than -0.2.
We can see at least three reasons for these observed correlations. First, some correlations can be attributed to overlapping feature definitions. For instance, expletive arguments (e.g. There are birds singing) are, by definition, non-canonical arguments, and thus are a subset of add arg. However, some added arguments, such as benefactives (Bo baked Mo a cake), are not expletives. Second, some correlations can be attributed to grammatical properties of the relevant constructions. For instance, question and aux are correlated because main-clause questions in English require subject-aux inversion and in many cases the insertion of auxiliary do (Do lions meow?). Third, some correlations may be a consequence of the sources sampled in CoLA and the phenomena they focus on. For instance, the unusually high correlation of Emb-Q and ellipsis/anaphor can be attributed to BIBREF18 , which is an article about the sluicing construction involving ellipsis of an embedded interrogative (e.g. I saw someone, but I don't know who).
Finally, two strongest anti-correlations between major features are between simple and the two features related to argument structure, argument types and arg altern. This follows from the definition of simple, which excludes any sentence containing a large number or unusual configuration of arguments.
Models Evaluated
We train MLP acceptability classifiers for CoLA on top of three sentence encoders: (1) the CoLA baseline encoder with ELMo-style embeddings, (2) OpenAI GPT, and (3) BERT. We use publicly available sentence encoders with pretrained weights.
Overall CoLA Results
The overall performance of the three sentence encoders is shown in Table TABREF33 . Performance on CoLA is measured using MCC BIBREF14 . We present the best single restart for each encoder, the mean over restarts for an encoder, and the result of ensembling the restarts for a given encoder, i.e. taking the majority classification for a given sentence, or the majority label of acceptable if tied. For BERT results, we exclude 5 out of the 20 restarts because they were degenerate (MCC=0).
Across the board, BERT outperforms GPT, which outperforms the CoLA baseline. However, BERT and GPT are much closer in performance than they are to CoLA baseline. While ensemble performance exceeded the average for BERT and GPT, it did not outperform the best single model.
Analysis Set Results
The results for the major features and minor features are shown in Figures FIGREF26 and FIGREF35 , respectively. For each feature, we measure the MCC of the sentences including that feature. We plot the mean of these results across the different restarts for each model, and error bars mark the mean INLINEFORM0 standard deviation. For the Violations features, MCC is technically undefined because these features only contain unacceptable sentences. We report MCC in these cases by including for each feature a single acceptable example that is correctly classified by all models.
Comparison across features reveals that the presence of certain features has a large effect on performance, and we comment on some overall patterns below. Within a given feature, the effect of model type is overwhelmingly stable, and resembles the overall difference in performance. However, we observe several interactions, i.e. specific features where the relative performance of models does not track their overall relative performance.
Among the major features (Figure FIGREF26 ), performance is universally highest on the simple sentences, and is higher than each model's overall performance. Though these sentences are simple, we notice that the proportion of ungrammatical ones is on par with the entire dataset. Otherwise we find that a model's performance on sentences of a given feature is on par with or lower than its overall performance, reflecting the fact that features mark the presence of unusual or complex syntactic structure.
Performance is also high (and close to overall performance) on sentences with marked argument structures (Argument Types and Arg(ument) Alt(ernation)). While these models are still worse than human (overall) performance on these sentences, this result indicates that argument structure is relatively easy to learn.
Comparing different kinds of embedded content, we observe higher performance on sentences with embedded clauses (major feature=Comp Clause) embedded VPs (major feature=to-VP) than on sentences with embedded interrogatives (minor features=Emb-Q, Rel Clause). An exception to this trend is the minor feature No C-izer, which labels complement clauses without a complementizer (e.g. I think that you're crazy). Low performance on these sentences compared to most other features in Comp Clause might indicate that complementizers are an important syntactic cue for these models.
As the major feature Question shows, the difficulty of sentences with question-like syntax applies beyond just embedded questions. Excluding polar questions, sentences with question-like syntax almost always involve extraction of a wh-word, creating a long-distance dependency between the wh-word and its extraction site, which may be difficult for models to recognize.
The most challenging features are all related to Violations. Low performance on Infl/Agr Violations, which marks morphological violations (He washed yourself, This is happy), is especially striking because a relatively high proportion (29%) of these sentences are Simple. These models are likely to be deficient in encoding morphological features is that they are word level models, and do not have direct access sub-word information like inflectional endings, which indicates that these features are difficult to learn effectively purely from lexical distributions.
Finally, unusual performance on some features is due to small samples, and have a high standard deviation, suggesting the result is unreliable. This includes CP Subj, Frag/Paren, imperative, NPI/FCI, and Comparative.
Comparing within-feature performance of the three encoders to their overall performance, we find they have differing strengths and weaknesses. BERT stands out over other models in Deep Embed, which includes challenging sentences with doubly-embedded, as well as in several features involving extraction (i.e. long-distance dependencies) such as VP+Extract and Info-Struc. The transformer models show evidence of learning long-distance dependencies better than the CoLA baseline. They outperform the CoLA baseline by an especially wide margin on Bind:Refl, which all involves establishing a dependency between a reflexive and its antecedent (Bo tries to love himself). They also have a large advantage in dislocation, in which expressions are separated from their dependents (Bo practiced on the train an important presentation). The advantage of BERT and GPT may be due in part to their use of the transformer architecture. Unlike the BiLSTM used by the CoLA baseline, the transformer uses a self-attention mechanism that associates all pairs of words regardless of distance.
In some cases models showed surprisingly good or bad performance, revealing possible idiosyncrasies of the sentence embeddings they output. For instance, the CoLA baseline performs on par with the others on the major feature adjunct, especially considering the minor feature Particle (Bo looked the word up).
Furthermore, all models struggle equally with sentences in Violation, indicating that the advantages of the transformer models over the CoLA baseline does not extend to the detection of morphological violations (Infl/Agr Violation) or single word anomalies (Extra/Missing Expr).
Length Analysis
For comparison, we analyze the effect of sentence length on acceptability classifier performance. The results are shown in Figure FIGREF39 . The results for the CoLA baseline are inconsistent, but do drop off as sentence length increases. For BERT and GPT, performance decreases very steadily with length. Exceptions are extremely short sentences (length 1-3), which may be challenging due to insufficient information; and extremely long sentences, where we see a small (but somewhat unreliable) boost in BERT's performance. BERT and GPT are generally quite close in performance, except on the longest sentences, where BERT's performance is considerably better.
Conclusion
Using a new grammatically annotated analysis set, we identify several syntactic phenomena that are predictive of good or bad performance of current state of the art sentence encoders on CoLA. We also use these results to develop hypotheses about why BERT is successful, and why transformer models outperform sequence models.
Our findings can guide future work on sentence embeddings. A current weakness of all sentence encoders we investigate, including BERT, is the identification of morphological violations. Future engineering work should investigate whether switching to a character-level model can mitigate this problem. Additionally, transformer models appear to have an advantage over sequence models with long-distance dependencies, but still struggle with these constructions relative to more local phenomena. It stands to reason that this performance gap might be widened by training larger or deeper transformer models, or training on longer or more complex sentences. This analysis set can be used by engineers interested in evaluating the syntactic knowledge of their encoders.
Finally, these findings suggest possible controlled experiments that could confirm whether there is a causal relation between the presence of the syntactic features we single out as interesting and model performance. Our results are purely correlational, and do not mark whether a particular construction is crucial for the acceptability of the sentence. Future experiments following ettinger2018assessing and kann2019verb can semi-automatically generate datasets manipulating, for example, length of long-distance dependencies, inflectional violations, or the presence of interrogatives, while controlling for factors like sentence length and word choice, in order determine the extent to which these features impact the quality of sentence embeddings.
Acknowledgments
We would like to thank Jason Phang and Thibault Févry for sharing GPT and BERT model predictions on CoLA, and Alex Wang for feedback.
Simple
These are sentences with transitive or intransitive verbs appearing with their default syntax and argument structure. All arguments are noun phrases (DPs), and there are no modifiers or adjuncts on DPs or the VP.
. Included J̇ohn owns the book. (37) Park Square has a festive air. (131) *Herself likes Mary's mother. (456)
. Excluded Ḃill has eaten cake. I gave Joe a book.
Pred (Predicates)
These are sentences including the verb be used predicatively. Also, sentences where the object of the verb is itself a predicate, which applies to the subject. Not included are auxiliary uses of be or other predicate phrases that are not linked to a subject by a verb.
. Included J̇ohn is eager. (27) He turned into a frog. (150) To please John is easy. (315)
. Excluded Ṫhere is a bench to sit on. (309) John broke the geode open. The cake was eaten.
These sentences involve predication of a non-subject argument by another non-subject argument, without the presence of a copula. Some of these cases may be analyzed as small clauses. BIBREF35
. Included J̇ohn called the president a fool. (234) John considers himself proud of Mary. (464) They want them arrested. (856) the election of John president surprised me. (1001)
Modifiers that act as predicates of an argument. Resultatives express a resulting state of that argument, and depictives describe that argument during the matrix event. See BIBREF24 .
. Included Ṙesultative Ṭhe table was wiped by John clean. (625) The horse kicked me black and blue. (898) . Depictive J̇ohn left singing. (971) In which car was the man seen? (398)
. Excluded Ḣe turned into a frog. (150)
Adjunct
Particles are lone prepositions associated with verbs. When they appear with transitive verbs they may immediately follow the verb or the object. Verb-particle pairs may have a non-compositional (idiomatic) meaning. See [pp. 69-70]carnie2013syntax and [pp. 16-17]kim2008syntax.
. Included Ṭhe argument was summed by the coach up. (615) Some sentences go on and on and on. (785) *He let the cats which were whining out. (71)
Adjuncts modifying verb phrases. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. See BIBREF33 .
. Included ṖP-adjuncts, e.g. locative, temporal, instrumental, beneficiary Ṅobody who hates to eat anything should work in a delicatessen. (121) Felicia kicked the ball off the bench. (127) . Adverbs Ṁary beautifully plays the violin. (40) John often meets Mary. (65) . Purpose VPs Ẇe need another run to win. (769) .
0.5em. Excluded ṖP arguments Ṣue gave to Bill a book. (42) Everything you like is on the table. (736) . S-adjuncts J̇ohn lost the race, unfortunately.
These are adjuncts modifying noun phrases. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. Single-word prenominal adjectives are excluded, as are relative clauses (this has another category). . Included ṖP-adjuncts Ṭom's dog with one eye attacked Frank's with three legs. (676) They were going to meet sometime on Sunday, but the faculty didn't know when. (565) . Phrasal adjectives Ȧs a statesman, scarcely could he do anything worth mentioning. (292) . Verbal modifiers Ṫhe horse raced past the barn fell. (900)
. Excluded Ṗrenominal Adjectives İt was the policeman met that several young students in the park last night. (227) . Relative Clauses NP arguments
These are adjuncts of VPs and NPs that specify a time or modify tense or aspect or frequency of an event. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. . Included Ṡhort adverbials (never, today, now, always) Ẉhich hat did Mike quip that she never wore? (95) . PPs Ḟiona might be here by 5 o'clock. (426) . When İ inquired when could we leave. (520)
These are adjuncts of VPs and NPs that specify a location of an event or a part of an event, or of an individual. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. . Included Ṡhort adverbials PPs Ṫhe bed was slept in. (298) *Anson demonized up the Khyber (479) Some people consider dogs in my neighborhood dangerous. (802) Mary saw the boy walking toward the railroad station. (73) . Where İ found the place where we can relax. (307)
. Excluded Ŀocative arguments Ṣam gave the ball out of the basket. (129) Jessica loaded boxes on the wagon. (164) I went to Rome.
These are adjuncts of VPs and NPs not described by some other category (with the exception of (6-7)), i.e. not temporal, locative, or relative clauses. Adjuncts are (usually) optional, and they do not change the category of the expression they modify.
. Included Ḃeneficiary Ị know which book José didn't read for class, and which book Lilly did it for him. (58) . Instrument Ŀee saw the student with a telescope. (770) . Comitative J̇oan ate dinner with someone but I don't know who. (544) . VP adjuncts Ẇhich article did Terry file papers without reading? (431) . Purpose Ẇe need another run to win. (769)
Argument Types
Oblique arguments of verbs are individual-denoting arguments (DPs or PPs) which act as the third argument of verb, i.e. not a subject or (direct) object. They may or may not be marked by a preposition. Obliques are only found in VPs that have three or more individual arguments. Arguments are selected for by the verb, and they are (generally) not optional, though in some cases they may be omitted where they are understood or implicitly existentially quantified over. See [p.40]kim2008syntax.
. Included Ṗrepositional Ṣue gave to Bill a book. (42) Mary has always preferred lemons to limes. (70) *Janet broke Bill on the finger. (141) . Benefactives Ṁartha carved the baby a toy out of wood. (139) . Double object Ṡusan told her a story. (875) Locative arguments Ȧnn may spend her vacation in Italy. (289) . High-arity Passives Ṃary was given by John the book. (626)
. Excluded Ṅon-DP arguments Ẇe want John to win (28) . 3rd argments where not all three arguments are DPs Ẇe want John to win (28)
Prepositional Phrase arguments of VPs are individual-denoting arguments of a verb which are marked by a proposition. They may or may not be obliques. Arguments are selected for by the verb, and they are (generally) not optional, though in some cases they may be omitted where they are understood or implicitly existentially quantified over.
. Included Ḋative Ṣue gave to Bill a book. (42) . Conative (at) C̣arla slid at the book. (179) . Idiosyncratic prepositional verbs İ wonder who to place my trust in. (711) She voted for herself. (743) . Locative J̇ohn was found in the office. (283) . PP predicates Ėverything you like is on the table. (736)
. Excluded ṖP adjuncts Particles Arguments of deverbal expressions ṭhe putter of books left. (892) . By-phrase Ṫed was bitten by the spider. (613)
Prepositional Phrase arguments of NPs or APs are individual-denoting arguments of a noun or adjective which are marked by a proposition. Arguments are selected for by the head, and they are (generally) not optional, though in some cases they may be omitted where they are understood or implicitly existentially quantified over.
. Included Ṙelational adjectives Ṁany people were fond of Pat. (936) *I was already aware of fact. (824) . Relational nouns Ẇe admired the pictures of us in the album. (759) They found the book on the atom. (780) . Arguments of deverbal nouns ṭhe putter of books left. (892)
Prepositional arguments introduced with by. Usually, this is the (semantic) subject of a passive verb, but in rare cases it may be the subject of a nominalized verb. Arguments are usually selected for by the head, and they are generally not optional. In this case, the argument introduced with by is semantically selected for by the verb, but it is syntactically optional. See [p.190]adger2003core and []collins2005smuggling.
. Included Ṗassives Ṫed was bitten by the spider. (613) . Subjects of deverbal nouns ṫhe attempt by John to leave surprised me. (1003)
Expletives, or “dummy” arguments, are semantically inert arguments. The most common expletives in English are it and there, although not all occurrences of these items are expletives. Arguments are usually selected for by the head, and they are generally not optional. In this case, the expletive occupies a syntactic argument slot, but it is not semantically selected by the verb, and there is often a syntactic variation without the expletive. See [p.170-172]adger2003core and [p.82-83]kim2008syntax.
. Included Ṫhere—inserted, existential Ṭhere loved Sandy. (939) There is a nurse available. (466) . It—cleft, inserted İt was a brand new car that he bought. (347) It bothers me that John coughs. (314) It is nice to go abroad. (47) . Environmental it K̇erry remarked it was late. (821) Poor Bill, it had started to rain and he had no umbrella. (116) You've really lived it up. (160)
. Excluded J̇ohn counted on Bill to get there on time. (996) I bought it to read. (1026)
Arg Altern (Argument Alternations)
These are verbs with 3 or more arguments of any kind. Arity refers to the number of arguments that a head (or function) selects for. Arguments are usually selected for by the head, and they are generally not optional. They may be DPs, PPs, CPs, VPs, APs or other categories.
. Included Ḋitransitive [̣Sue] gave [to Bill] [a book]. (42) [Martha] carved [the baby] [a toy] out of wood. (139) . VP arguments [̣We] believed [John] [to be a fountain in the park]. (274) [We] made [them] [be rude]. (260) . Particles He] let [the cats which were whining] [out]. (71) . Passives with by-phrase [̣A good friend] is remained [to me] [by him]. (237) . Expletives [̣We] expect [there] [to will rain]. (282) [There] is [a seat] [available]. (934) [It] bothers [me] [that he is here]. (1009) . Small clause John] considers [Bill] [silly]. (1039)
. Excluded Ṙesults, depictives John] broke [the geode] [open].
These are VPs where a canonical argument of the verb is missing. This can be difficult to determine, but in many cases the missing argument is understood with existential quantification or generically, or contextually salient. See [p.106-109]sportiche2013introduction.
. Included Ṁiddle voice/causative inchoative Ṭhe problem perceives easily. (66) . Passive Ṫhe car was driven. (296) . Null complement anaphora J̇ean persuaded Robert. (380) Nobody told Susan. (883) . Dropped argument Ḳim put in the box. (253) The guests dined. (835) I wrote to Bill. (1030) . Transitive adjective J̇ohn is eager. (27) We pulled free. (144) . Transitive noun İ sensed his eagerness. (155) . Expletive insertion Ịt loved Sandy. (949)
. Excluded Ṫed was bitten by the spider. (613)
These are VPs in which a non-canonical argument of the verb has been added. These cases are clearer to identify where the additional argument is a DP. In general, PPs which mark locations, times, beneficiaries, or purposes should be analyzed as adjuncts, while PPs marking causes can be considered arguments. See []pylkkanen2008introducing.
. Included Ėxtra argument Ḷinda winked her lip. (202) Sharon fainted from hunger. (204) I shaved myself. (526) . Causative Ị squeaked the door. (207) . Expletive insertion Ṫhere is a monster in Loch Ness. (928) It annoys people that dogs bark. (943) . Benefactive Ṁartha carved the baby a toy out of wood. (139)
The passive voice is marked by the demotion of the subject (either complete omission or to a by-phrase) and the verb appearing as a past participle. In the stereotypical construction there is an auxiliary be verb, though this may be absent. See [p.175-190]kim2008syntax, collins2005smuggling, and [p.311-333]sag2003syntactic.
. Included V̇erbs Ṫhe earth was believed to be round. (157) . Psuedopassive Ṫhe bed was slept in. (298) . Past participle adjuncts Ṫhe horse raced past the barn fell. (900)
Imperative
The imperative mood is marked by the absence of the a subject and the bare form of the verb, and expresses a command, request, or other directive speech act.
. Included Ẉash you! (224) Somebody just left - guess who. (528)
Binding
These are cases in which a reflexive (non-possessive) pronoun, usually bound by an antecedent. See [p.163-186]sportiche2013introduction and [p.203-226]sag2003syntactic.
. Included Ọurselves like ourselves. (742) Which pictures of himself does John like? (386)
These are cases in which a non-reflexive pronoun appears along with its antecedent. This includes donkey anaphora, quantificational binding, and bound possessives, among other bound pronouns. See [p.163-186]sportiche2013introduction and [p.203-226]sag2003syntactic.
. Included Ḃound possessor Ṫhe children admire their mother. (382) . Quantificational binding Ėverybody gets on well with a certain relative, but often only his therapist knows which one. (562) . Bound pronoun Ẉe gave us to the cause. (747)
Question
These are sentences in which the matrix clause is interrogative (either a wh- or polar question). See [pp.282-213]adger2003core, [pp.193-222]kim2008syntax, and [p.315-350]carnie2013syntax.
. Included Ẇh-question Ẇho always drinks milk? (684) . Polar question Ḋid Athena help us? (486)
These are embedded interrogative clauses appearing as arguments of verbs, nouns, and adjectives. Not including relative clauses and free relatives. See [p.297]adger2003core.
. Included U̇nder VP İ forgot how good beer tastes. (235) *What did you ask who saw? (508) . Under NP Ṫhat is the reason why he resigned. (313) . Under AP Ṫhey claimed they had settled on something, but it wasn't clear what they had settled on. (529) . Free relative Ẇhat the water did to the bottle was fill it. (33)
. Excluded Relative clauses, free relatives
These are phrasal Wh-phrases, in which the wh-word moves along with other expressions, including prepositions (pied-piping) or nouns in the case of determiner wh-words such as how many and which.
. Included Ṗied-piping Ṭhe ship sank, but I don't know with what. (541) . Other phrasal wh-phrases İ know which book Mag read, and which book Bob read my report that you hadn't. (61) How sane is Peter? (88)
Relative clauses are noun modifiers appearing with a relativizer (either that or a wh-word) and an associated gap. See [p.223-244]kim2008syntax.
. Included Ṫhough he may hate those that criticize Carter, it doesn't matter. (332) *The book what inspired them was very long. (686) Everything you like is on the table. (736)
. Excluded Ṭhe more you would want, the less you would eat. (6)
This is wh-movement out of an extraction island, or near-island. Islands include, for example, complex NPs, adjuncts, embedded questions, coordination. A near-island is an extraction that closely resembles an island violation, such as extraction out of an embedded clause, or across-the-board extraction. See [pp.323-333]adger2003core and [pp.332-334]carnie2013syntax.
. Included Ėmbedded question *What did you ask who Medea gave? (493) . Adjunct Ẉhat did you leave before they did? (598) . Parasitic gaps Ẇhich topic did you choose without getting his approval? (311) . Complex NP Ẇho did you get an accurate description of? (483)
Comp Clause (Complement Clauses)
These are complement clauses acting as the (syntactic) subject of verbs. See [pp.90-91]kim2008syntax.
. Included Ṫhat dogs bark annoys people. (942) The socks are ready for for you to put on to be planned. (112)
. Excluded Ėxpletive insertion İt bothers me that John coughs. (314)
These are complement clauses acting as (non-subject) arguments of verbs. See [pp.84-90]kim2008syntax.
. Included İ can't believe Fred won't, either. (50) I saw that gas can explode. (222) It bothers me that John coughs. (314) Clefts İt was a brand new car that he bought. (347)
These are complement clauses acting as an argument of a noun or adjective. See [pp.91-94]kim2008syntax.
. Included U̇nder NP Ḋo you believe the claim that somebody was looking for something? (99) . Under AP Ṭhe children are fond that they have ice cream. (842)
These are complement clauses with a non-finite matrix verb. Often, the complementizer is for, or there is no complementizer. See [pp.252-253,256-260]adger2003core.
. Included Ḟor complementizer İ would prefer for John to leave. (990) . No Complementizer Ṁary intended John to go abroad. (48) . Ungrammatical Ḣeidi thinks that Andy to eat salmon flavored candy bars. (363) . V-ing Ȯnly Churchill remembered Churchill giving the Blood, Sweat and Tears speech. (469)
These are complement clauses with no overt complementizer.
. Included Ċomplement clause İ'm sure we even got these tickets! (325) He announced he would marry the woman he loved most, but none of his relatives could figure out who. (572) . Relative clause Ṫhe Peter we all like was at the party (484)
These are sentences with three or nested verbs, where VP is not an aux or modal, i.e. with the following syntax: [S ...[ VP ...[ VP ...[ VP ...] ...] ...] ...]
. Included Ėmbedded VPs Ṁax seemed to be trying to force Ted to leave the room, and Walt, Ira. (657) . Embedded clauses İ threw away a book that Sandy thought we had read. (713)
Aux (Auxiliaries)
Any occurrence of negation in a sentence, including sentential negation, negative quantifiers, and negative adverbs.
. Included Ṡentential İ can't remember the name of somebody who had misgivings. (123) . Quantifier Ṅo writer, and no playwright, meets in Vienna. (124) . Adverb Ṫhey realised that never had Sir Thomas been so offended. (409)
Modal verbs (may, might, can, could, will, would, shall, should, must). See [pp.152-155]kim2008syntax.
. Included J̇ohn can kick the ball. (280) As a statesman, scarcely could he do anything worth mentioning. (292)
. Excluded Ṗseudo-modals Ṡandy was trying to work out which students would be able to solve a certain problem. (600)
Auxiliary verbs (e.g. be, have, do). See [pp.149-174]kim2008syntax.
. Included Ṫhey love to play golf, but I do not. (290) The car was driven. (296) he had spent five thousand dollars. (301)
. Excluded Ṗseudo-auxiliaries Ṣally asked if somebody was going to fail math class, but I can't remember who. (589) The cat got bitten. (926)
These are predicates acting as near-auxiliary (e.g. get-passive) or near-modals (e.g. willing)
. Included Ṅear-auxiliaries Ṃary came to be introduced by the bartender and I also came to be. (55) *Sally asked if somebody was going to fail math class, but I can't remember who. (589) The cat got bitten. (926) . Near-modals Ċlinton is anxious to find out which budget dilemmas Panetta would be willing to tackle in a certain way, but he won't say in which. (593) Sandy was trying to work out which students would be able to solve a certain problem. (600)
to-VP (Infinitival VPs)
These are VPs with control verbs, where one argument is a non-finite to-VP without a covert subject co-indexed with an argument of the matrix verb. See [pp.252,266-291]adger2003core, [pp.203-222]sportiche2013introduction, and [pp.125-148]kim2008syntax.
. Included İntransitive subject control Ịt tries to leave the country. (275) . Transitive subject control J̇ohn promised Bill to leave. (977) . Transitive object control İ want her to dance. (379) John considers Bill to be silly. (1040)
. Excluded V̇P args of NP/AP Ṫhis violin is difficult to play sonatas on. (114) . Purpose Ṫhere is a bench to sit on. (309) . Subject VPs Ṫo please John is easy. (315) . Argument present participles Ṁedea denied poisoning the phoenix. (490) . Raising Ȧnson believed himself to be handsome. (499)
These are VPs with raising predicates, where one argument is a non-finite to-VP without a covert subject co-indexed with an argument of the matrix verb. Unlike control verbs, the coindexed argument is not a semantic argument of the raising predicate. See [pp.260-266]adger2003core, [pp.203-222]sportiche2013introduction, and [pp.125-148]kim2008syntax.
. Included Ṡubject raising U̇nder the bed seems to be a fun place to hide. (277) . Object raising Ȧnson believed himself to be handsome. (499) . Raising adjective J̇ohn is likely to leave. (370)
These are embedded infinitival VPs containing a (non-subject) gap that is filled by an argument in the upper clause. Examples are purpose-VPs and tough-movement. See [pp.246-252]kim2008syntax.
. Included Ṫough-movement Ḍrowning cats, which is against the law, are hard to rescue. (79) . Infinitival relatives F̣ed knows which politician her to vote for. (302) . Purpose ṫhe one with a red cover takes a very long time to read. (352) . Other non-finite VPs with extraction Ȧs a statesman, scarcely could he do anything worth mentioning. (292)
These are non-finite VP arguments of nouns and adjectives.
. Included Ṙaising adjectives J̇ohn is likely to leave. (370) . Control adjectives Ṫhe administration has issued a statement that it is willing to meet a student group, but I'm not sure which one. (604) . Control nouns Ȧs a teacher, you have to deal simultaneously with the administration's pressure on you to succeed, and the children's to be a nice guy. (673) . Purpose VPs ṫhere is nothing to do. (983)
These are miscellaneous non-finite VPs.
. Included İ saw that gas can explode. (222) Gerunds/Present participles Ṣtudents studying English reads Conrad's Heart of Darkness while at university. (262) Knowing the country well, he took a short cut. (411) John became deadly afraid of flying. (440) . Subject VPs Ṫo please John is easy. (315) . Nominalized VPs Ẉhat Mary did Bill was give a book. (473)
. Excluded ṫo-VPs acting as complements or modifiers of verbs, nouns, or adjectives
N, Adj (Nouns and Adjectives)
These are nouns and adjectives derived from verbs.
. Included Ḋeverbal nouns ṭhe election of John president surprised me. (1001) . “Light” verbs Ṫhe birds give the worm a tug. (815) . Gerunds İf only Superman would stop flying planes! (773) . Event-wh Ẇhat the water did to the bottle was fill it. (33) . Deverbal adjectives Ḣis or her least known work. (95)
Relational nouns are NPs with an obligatory (or existentially closed) argument. A particular relation holds between the members of the extension of NP and the argument. The argument must be a DP possessor or a PP. See [pp.82-83]kim2008syntax.
. Included Ṅouns with of-arguments J̇ohn has a fear of dogs. (353) . Nouns with other PP-arguments Ḣenri wants to buy which books about cooking? (442) . Measure nouns İ bought three quarts of wine and two of Clorox. (667) . Possessed relational nouns J̣ohn's mother likes himself. (484)
. Excluded Ṅouns with PP modifiers Ṡome people consider dogs in my neighborhood dangerous. (802)
Transitive (non-relational) nouns take a VP or CP argument. See [pp.82-83]kim2008syntax.
. Included V̇P argument ṫhe attempt by John to leave surprised me. (1003) . CP argument Ẉhich report that John was incompetent did he submit? (69) . QP argument Ṫhat is the reason why he resigned. (313)
These are complex NPs, including coordinated nouns and nouns with modifiers (excluding prenominal adjectives).
. Included Ṁodified NPs Ṭhe madrigals which Henry plays the lute and sings sound lousy. (84) John bought a book on the table. (233) . NPs with coordination Ṭhe soundly and furry cat slept. (871) The love of my life and mother of my children would never do such a thing. (806)
Noun-noun compounds are NPs consisting of two constituent nouns.
. Included İt was the peasant girl who got it. (320) A felon was elected to the city council. (938)
These are adjectives that take an obligatory (or existentially closed) argument. A particular relation holds between the members of the extension of the modified NP and the argument. The argument must be a DP or PP. See [pp.80-82]kim2008syntax.
. Included Ȯf-arguments Ṫhe chickens seem fond of the farmer. (254) . Other PP arguments Ṫhis week will be a difficult one for us. (241) John made Bill mad at himself. (1035)
A transitive (non-relational) adjective. I.e. an adjectives that takes a VP or CP argument. See [pp.80-82]kim2008syntax.
. Included V̇P argument J̇ohn is likely to leave. (370) . CP argument J̇ohn is aware of it that Bill is here. (1013) . QP argument Ṫhe administration has issued a statement that it is willing to meet a student group, but I'm not sure which one. (604)
S-Syntax (Sentence-Level Syntax)
These are expressions with non-canonical word order. See, for example, [p.76]sportiche2013introduction.
. Includes Ṗarticle shift Ṃickey looked up it. (24) . Preposed modifiers Ȯut of the box jumped a little white rabbit. (215) *Because she's so pleasant, as for Mary I really like her. (331) . Quantifier float Ṫhe men will all leave. (43) . Preposed argument Ẇith no job would John be happy. (333) . Relative clause extraposition Ẇhich book's, author did you meet who you liked? (731) . Misplaced phrases Ṁary was given by John the book. (626)
This includes topicalization and focus constructions. See [pp.258-269]kim2008syntax and [pp.68-75]sportiche2013introduction.
. Included Ṫopicalization Ṁost elections are quickly forgotten, but the election of 2000, everyone will remember for a long time. (807) . Clefts İt was a brand new car that he bought. (347) . Pseudo-clefts Ẇhat John promised is to be gentle. (441)
. Excluded Ṫhere-insertion Passive
These are parentheticals or fragmentary expressions. . Included Ṗarenthetical Ṁary asked me if, in St. Louis, John could rent a house cheap. (704) . Fragments Ṫhe soup cooks, thickens. (448) . Tag question Ġeorge has spent a lot of money, hasn't he? (291)
Coordinations and disjunctions are expressions joined with and, but, or, etc. See [pp.61-68]sportiche2013introduction.
. Included ḊP coordination Ḋave, Dan, Erin, Jaime, and Alina left. (341) . Right Node Raising K̇im gave a dollar to Bobbie and a dime to Jean. (435) . Clausal coordination Ṡhe talked to Harry, but I don't know who else. (575) . Or, nor Ṇo writer, nor any playwright, meets in Vienna. (125) . Pseudo-coordination İ want to try and buy some whiskey. (432) . Juxtaposed clauses Ŀights go out at ten. There will be no talking afterwards. (779)
This includes subordinate clauses, especially with subordinating conjunctions, and conditionals.
. Included Ċonditional İf I can, I will work on it. (56) . Subordinate clause Ẉhat did you leave before they did? (598) *Because Steve's of a spider's eye had been stolen, I borrowed Fred's diagram of a snake's fang. (677) . Correlative Ạs you eat the most, you want the least. (5)
This includes VP or NP ellipsis, or anaphora standing for VPs or NPs (not DPs). See [pp.55-61]sportiche2013introduction.
. Included V̇P Ellipsis İf I can, I will work on it. (56) Mary likes to tour art galleries, but Bill hates to. (287) . VP Anaphor İ saw Bill while you did so Mary. (472) . NP Ellipsis Ṫom's dog with one eye attacked Fred's. (679) . NP anaphor ṫhe one with a red cover takes a very long time to read. (352) . Sluicing Ṁost columnists claim that a senior White House official has been briefing them, and the newspaper today reveals which one. (557) . Gapping Ḃill ate the peaches, but Harry the grapes. (646)
These are adjuncts modifying sentences, including sentence-level adverbs and subordinate clauses.
Included: Sentence-level adverbs: Suddenly, there arrived two inspectors from the INS. (447); Subordinate clauses: The storm arrived while we ate lunch. (852)
Determiner
These are quantificational DPs, i.e. the determiner is a quantifier.
Included: Quantifiers: Every student, and he wears socks, is a swinger. (118) We need another run to win. (769); Partitive: Neither of students failed. (265)
These are quantifiers that take PP arguments, and measure nouns. See kim2008syntax (pp. 109-118).
Included: Quantifiers with PP arguments: Neither of students failed. (265); Numerals: One of Korea's most famous poets wrote these lines. (294); Measure nouns: I bought three quarts of wine and two of Clorox. (667)
These are negative polarity items (any, ever, etc.) and free choice items (any). See kadmon1993any.
Included: NPI: Everybody around here who ever buys anything on credit talks in his sleep. (122) I didn't have a red cent. (350); FCI: Any owl hunts mice. (387)
These are comparative constructions. See BIBREF22 .
Included: Correlative: The angrier Mary got, the more she looked at pictures. (9) They may grow as high as bamboo. (337) I know you like the back of my hand. (775)
Violations
These are sentences that include a semantic violation, including type mismatches, violations of selectional restrictions, polarity violations, and definiteness violations.
Included: Violation of selectional restrictions: Many information was provided. (218) *It tries to leave the country. (275); Aspectual violations: John is tall on several occasions. (540); Definiteness violations: It is the problem that he is here. (1018); Polarity violations: Any man didn't eat dinner. (388)
These are sentences that include a violation in inflectional morphology, such as tense-aspect marking or agreement.
Included: Case: Us love they. (46); Agreement: Students studying English reads Conrad's Heart of Darkness while at university. (262); Gender: Sally kissed himself. (339); Tense/Aspect: Kim alienated cats and beating his dog. (429)
These are sentences with a violation that can be identified with the presence or absence of a single word.
Included: Missing word: John put under the bathtub. (247) *I noticed the. (788); Extra word: Everyone hopes everyone to sleep. (467) *He can will go (510) | labeled by experts |
c4a6b727769328333bb48d59d3fc4036a084875d | c4a6b727769328333bb48d59d3fc4036a084875d_0 | Q: What baseline did they compare Entity-GCN to?
Text: Introduction
The long-standing goal of natural language understanding is the development of systems which can acquire knowledge from text collections. Fresh interest in reading comprehension tasks was sparked by the availability of large-scale datasets, such as SQuAD BIBREF1 and CNN/Daily Mail BIBREF2 , enabling end-to-end training of neural models BIBREF3 , BIBREF4 , BIBREF5 . These systems, given a text and a question, need to answer the query relying on the given document. Recently, it has been observed that most questions in these datasets do not require reasoning across the document, but they can be answered relying on information contained in a single sentence BIBREF6 . The last generation of large-scale reading comprehension datasets, such as NarrativeQA BIBREF7 , TriviaQA BIBREF8 , and RACE BIBREF9 , has been created in such a way as to address this shortcoming and to ensure that systems relying only on local information cannot achieve competitive performance.
Even though these new datasets are challenging and require reasoning within documents, many question answering and search applications require aggregation of information across multiple documents. The WikiHop dataset BIBREF0 was explicitly created to facilitate the development of systems dealing with these scenarios. Each example in WikiHop consists of a collection of documents, a query and a set of candidate answers (Figure 1 ). Though there is no guarantee that a question cannot be answered by relying just on a single sentence, the authors ensure that it is answerable using a chain of reasoning crossing document boundaries.
Though an important practical problem, the multi-hop setting has so far received little attention. The methods reported by BIBREF0 approach the task by merely concatenating all documents into a single long text and training a standard RNN-based reading comprehension model, namely, BiDAF BIBREF3 and FastQA BIBREF6 . Document concatenation in this setting is also used in Weaver BIBREF10 and MHPGM BIBREF11 . The only published paper which goes beyond concatenation is due to BIBREF12 , where they augment RNNs with jump-links corresponding to co-reference edges. Though these edges provide a structural bias, the RNN states are still tasked with passing the information across the document and performing multi-hop reasoning.
Instead, we frame question answering as an inference problem on a graph representing the document collection. Nodes in this graph correspond to named entities in a document whereas edges encode relations between them (e.g., cross- and within-document coreference links or simply co-occurrence in a document). We assume that reasoning chains can be captured by propagating local contextual information along edges in this graph using a graph convolutional network (GCN) BIBREF13 .
The multi-document setting imposes scalability challenges. In realistic scenarios, a system needs to learn to answer a query for a given collection (e.g., Wikipedia or a domain-specific set of documents). In such scenarios one cannot afford to run expensive document encoders (e.g., RNN or transformer-like self-attention BIBREF14 ), unless the computation can be preprocessed both at train and test time. Even if (similarly to WikiHop creators) one considers a coarse-to-fine approach, where a set of potentially relevant documents is provided, re-encoding them in a query-specific way remains the bottleneck. In contrast to other proposed methods (e.g., BIBREF12 , BIBREF10 , BIBREF3 ), we avoid training expensive document encoders.
In our approach, only a small query encoder, the GCN layers and a simple feed-forward answer selection component are learned. Instead of training RNN encoders, we use contextualized embeddings (ELMo) to obtain initial (local) representations of nodes. This implies that only a lightweight computation has to be performed online, both at train and test time, whereas the rest is preprocessed. Even in the somewhat contrived WikiHop setting, where fairly small sets of candidates are provided, the model is at least 5 times faster to train than BiDAF. Interestingly, when we substitute ELMo with simple pre-trained word embeddings, Entity-GCN still performs on par with many techniques that use expensive question-aware recurrent document encoders.
Despite not using recurrent document encoders, the full Entity-GCN model achieves over 2% improvement over the best previously-published results. As our model is efficient, we also report results of an ensemble, which brings a further 3.6% improvement and is only 3% below the human performance reported by BIBREF0 . Our contributions can be summarized as follows:
Method
In this section we explain our method. We first introduce the dataset we focus on, WikiHop by BIBREF0 , as well as the task abstraction. We then present the building blocks that make up our Entity-GCN model, namely, an entity graph used to relate mentions to entities within and across documents, a document encoder used to obtain representations of mentions in context, and a relational graph convolutional network that propagates information through the entity graph.
Dataset and task abstraction
The WikiHop dataset consists of tuples $\langle q, S_q, C_q, a^\star \rangle $ where: $q$ is a query/question, $S_q$ is a set of supporting documents, $C_q$ is a set of candidate answers (all of which are entities mentioned in $S_q$ ), and $a^\star \in C_q$ is the entity that correctly answers the question. WikiHop is assembled assuming that there exists a corpus and a knowledge base (KB) related to each other. The KB contains triples $\langle s, r, o \rangle $ where $s$ is a subject entity, $o$ an object entity, and $r$ a unidirectional relation between them. BIBREF0 used Wikipedia as corpus and Wikidata BIBREF15 as KB. The KB is only used for constructing WikiHop: BIBREF0 retrieved the supporting documents $S_q$ from the corpus looking at mentions of subject and object entities in the text. Note that the set $S_q$ (not the KB) is provided to the QA system, and not all of the supporting documents are relevant for the query but some of them act as distractors. Queries, on the other hand, are not expressed in natural language, but instead consist of tuples $\langle s, r, ? \rangle $ where the object entity is unknown and it has to be inferred by reading the support documents. Therefore, answering a query corresponds to finding the entity $a^\star $ that is the object of a tuple in the KB with subject $s$ and relation $r$ among the provided set of candidate answers $C_q$ .
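To make this abstraction concrete, the sketch below shows one way such a tuple could be represented in code; the class name, field names, and the toy example are our own illustration, not the dataset's actual schema.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class WikiHopInstance:
    """One tuple <q, S_q, C_q, a_star> as described above (illustrative only)."""
    query: Tuple[str, str]   # (subject entity s, relation r); the object entity is unknown
    supports: List[str]      # S_q: supporting documents, some of which are distractors
    candidates: List[str]    # C_q: candidate answer entities, all mentioned in S_q
    answer: str              # a_star: the correct object entity, a member of candidates

# A toy example in the spirit of Figure 1 (invented values).
example = WikiHopInstance(
    query=("Sweden", "capital"),
    supports=["Stockholm is the capital of Sweden.", "Sweden is a Scandinavian country ..."],
    candidates=["Stockholm", "Oslo", "Copenhagen"],
    answer="Stockholm",
)
assert example.answer in example.candidates
```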
The goal is to learn a model that can identify the correct answer $a^\star $ from the set of supporting documents $S_q$ . To that end, we exploit the available supervision to train a neural network that computes scores for candidates in $C_q$ . We estimate the parameters of the architecture by maximizing the likelihood of observations. For prediction, we then output the candidate that achieves the highest probability. In the following, we present our model discussing the design decisions that enable multi-step reasoning and an efficient computation.
Reasoning on an entity graph
In an offline step, we organize the content of each training instance in a graph connecting mentions of candidate answers within and across supporting documents. For a given query $q = \langle s, r, ? \rangle $ , we identify mentions in $S_q$ of the entities in $C_q \cup \lbrace s\rbrace $ and create one node per mention. This process is based on the following heuristic:
we consider mention spans in $S_q$ exactly matching an element of $C_q \cup \lbrace s\rbrace $ . Admittedly, this is a rather simple strategy which may suffer from low recall.
we use predictions from a coreference resolution system to add mentions of elements in $C_q \cup \lbrace s\rbrace $ beyond exact matching (including both noun phrases and anaphoric pronouns). In particular, we use the end-to-end coreference resolution by BIBREF16 .
we discard mentions which are ambiguously resolved to multiple coreference chains; this may sacrifice recall, but avoids propagating ambiguity.
To each node $v_i$ , we associate a continuous annotation $\mathbf {x}_i \in \mathbb {R}^D$ which represents an entity in the context where it was mentioned (details in Section "Node annotations" ). We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges). Note that MATCH edges when connecting mentions in the same document are mostly included in the set of edges predicted by the coreference system. Having the two types of edges lets us distinguish between less reliable edges provided by the coreference system and more reliable (but also more sparse) edges given by the exact-match heuristic. We treat these three types of connections as three different types of relations. See Figure 2 for an illustration. In addition to that, and to prevent having disconnected graphs, we add a fourth type of relation (COMPLEMENT edge) between any two nodes that are not connected with any of the other relations. We can think of these edges as those in the complement set of the entity graph with respect to a fully connected graph.
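As a rough illustration of this construction step (our own simplification — the mention spans and the coreference predictions are assumed to be given as input), the following sketch builds the typed edge set:

```python
from itertools import combinations

DOC_BASED, MATCH, COREF, COMPLEMENT = range(4)

def build_entity_graph(mentions, coref_chains):
    """mentions: one dict per node with 'doc' (document id) and 'text' (surface form);
    coref_chains: sets of node indices predicted by an external coreference system."""
    edges = set()
    for i, j in combinations(range(len(mentions)), 2):
        if mentions[i]["doc"] == mentions[j]["doc"]:
            edges.add((i, j, DOC_BASED))           # co-occurrence within a document
        if mentions[i]["text"].lower() == mentions[j]["text"].lower():
            edges.add((i, j, MATCH))               # identical mentions, within or across documents
        if any({i, j} <= chain for chain in coref_chains):
            edges.add((i, j, COREF))               # same predicted coreference chain
    # complement edges connect all remaining pairs so the graph is never disconnected
    typed_pairs = {(i, j) for i, j, _ in edges}
    for i, j in combinations(range(len(mentions)), 2):
        if (i, j) not in typed_pairs:
            edges.add((i, j, COMPLEMENT))
    return edges
```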
Our model then approaches multi-step reasoning by transforming node representations (Section "Node annotations" for details) with a differentiable message passing algorithm that propagates information through the entity graph. The algorithm is parameterized by a graph convolutional network (GCN) BIBREF13 , in particular, we employ relational-GCNs BIBREF17 , an extended version that accommodates edges of different types. In Section "Entity relational graph convolutional network" we describe the propagation rule.
Each step of the algorithm (also referred to as a hop) updates all node representations in parallel. In particular, a node is updated as a function of messages from its direct neighbours, and a message is possibly specific to a certain relation. At the end of the first step, every node is aware of every other node it connects directly to. Besides, the neighbourhood of a node may include mentions of the same entity as well as others (e.g., same-document relation), and these mentions may have occurred in different documents. Taking this idea recursively, each further step of the algorithm allows a node to indirectly interact with nodes already known to their neighbours. After $L$ layers of R-GCN, information has been propagated through paths connecting up to $L+1$ nodes.
We start with node representations $\lbrace \mathbf {h}_i^{(0)}\rbrace _{i=1}^N$ , and transform them by applying $L$ layers of R-GCN obtaining $\lbrace \mathbf {h}_i^{(L)}\rbrace _{i=1}^N$ . Together with a representation $\mathbf {q}$ of the query, we define a distribution over candidate answers and we train maximizing the likelihood of observations. The probability of selecting a candidate $c \in C_q$ as an answer is then
$$ P(c|q, C_q, S_q) \propto \exp \left(\max _{i \in \mathcal {M}_c} f_o([\mathbf {q}, \mathbf {h}^{(L)}_i]) \right)\;,$$ (Eq. 16)
where $f_o$ is a parameterized affine transformation, and $\mathcal {M}_c$ is the set of node indices such that $i\in \mathcal {M}_c$ only if node $v_i$ is a mention of $c$ . The $\max $ operator in Equation 16 is necessary to select the node with highest predicted probability since a candidate answer is realized in multiple locations via different nodes.
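A minimal PyTorch sketch of this answer-selection step (Equation 16) could look as follows; the tensor shapes, the `f_o` layer, and the mention-to-candidate mapping are assumptions on our part:

```python
import torch

def candidate_probabilities(q, h_final, mention_to_candidate, num_candidates, f_o):
    """q: [K] query vector; h_final: [N, D] final node states h_i^(L);
    mention_to_candidate: list mapping node i -> candidate id; f_o: affine layer
    over the concatenation [q, h_i^(L)], producing one logit per node."""
    n = h_final.size(0)
    node_logits = f_o(torch.cat([q.unsqueeze(0).expand(n, -1), h_final], dim=-1)).squeeze(-1)
    cand_logits = torch.full((num_candidates,), float("-inf"))
    for i, c in enumerate(mention_to_candidate):
        cand_logits[c] = torch.maximum(cand_logits[c], node_logits[i])  # max over mentions of c
    return torch.softmax(cand_logits, dim=-1)
```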
Node annotations
Keeping in mind we want an efficient model, we encode words in supporting documents and in the query using only a pre-trained model for contextualized word representations rather than training our own encoder. Specifically, we use ELMo BIBREF20 , a pre-trained bi-directional language model that relies on character-based input representation. ELMo representations, differently from other pre-trained word-based models (e.g., word2vec BIBREF21 or GloVe BIBREF22 ), are contextualized since each token representation depends on the entire text excerpt (i.e., the whole sentence).
We choose not to fine-tune nor propagate gradients through the ELMo architecture, as doing so would defeat the goal of not having specialized RNN encoders. In the experiments, we will also ablate the use of ELMo, showing how our model behaves using non-contextualized word representations (we use GloVe).
ELMo encodings are used to produce a set of representations $\lbrace \mathbf {x}_i\rbrace _{i=1}^N$ , where $\mathbf {x}_i \in \mathbb {R}^D$ denotes the $i$ th candidate mention in context. Note that these representations do not depend on the query yet and no trainable model was used to process the documents so far, that is, we use ELMo as a fixed pre-trained encoder. Therefore, we can pre-compute representation of mentions once and store them for later use.
ELMo encodings are used to produce a query representation $\mathbf {q} \in \mathbb {R}^K$ as well. Here, $\mathbf {q}$ is a concatenation of the final outputs from a bidirectional RNN layer trained to re-encode ELMo representations of words in the query. The vector $\mathbf {q}$ is used to compute a query-dependent representation of mentions $\lbrace \mathbf { \hat{x}}_i\rbrace _{i=1}^N$ as well as to compute a probability distribution over candidates (as in Equation 16 ). Query-dependent mention encodings $\mathbf {\hat{x}}_i = f_x(\mathbf {q}, \mathbf {x}_i)$ are generated by a trainable function $f_x$ which is parameterized by a feed-forward neural network.
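A small sketch of this encoding stage is given below; the ELMo vectors are treated as pre-computed and frozen, and all layer sizes as well as the exact form of $f_x$ are placeholders of ours:

```python
import torch
import torch.nn as nn

class QueryAwareEncoder(nn.Module):
    """Re-encodes frozen ELMo query vectors into q with a small bi-LSTM and maps
    [q, x_i] to query-dependent mention encodings via f_x (sizes are placeholders)."""
    def __init__(self, elmo_dim=3072, q_hidden=128, out_dim=512):
        super().__init__()
        self.query_rnn = nn.LSTM(elmo_dim, q_hidden, batch_first=True, bidirectional=True)
        self.f_x = nn.Sequential(nn.Linear(2 * q_hidden + elmo_dim, out_dim), nn.Tanh())

    def forward(self, query_elmo, mention_elmo):
        # query_elmo: [1, T, elmo_dim]; mention_elmo: [N, elmo_dim] (pre-computed offline)
        _, (h_n, _) = self.query_rnn(query_elmo)
        q = torch.cat([h_n[0, 0], h_n[1, 0]], dim=-1)   # concatenate final fwd/bwd states
        q_rep = q.unsqueeze(0).expand(mention_elmo.size(0), -1)
        x_hat = self.f_x(torch.cat([q_rep, mention_elmo], dim=-1))
        return q, x_hat                                 # q and query-dependent encodings \hat{x}_i
```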
Entity relational graph convolutional network
Our model uses a gated version of the original R-GCN propagation rule. At the first layer, all hidden node representations are initialized with the query-aware encodings $\mathbf {h}_i^{(0)} = \mathbf {\hat{x}}_i$ . Then, at each layer $0\le \ell \le L$ , the update message $\mathbf {u}_i^{(\ell )}$ to the $i$ th node is a sum of a transformation $f_s$ of the current node representation $\mathbf {h}^{(\ell )}_i$ and transformations of its neighbours:
$$\mathbf {u}^{(\ell )}_i = f_s(\mathbf {h}^{(\ell )}_i) + \frac{1}{|\mathcal {N}_i|} \sum _{j \in \mathcal {N}_i} \sum _{r \in \mathcal {R}_{ij}} f_r(\mathbf {h}_j^{(\ell )})\;,$$ (Eq. 22)
where $\mathcal {N}_i$ is the set of indices of nodes neighbouring the $i$ th node, $\mathcal {R}_{ij}$ is the set of edge annotations between $i$ and $j$ , and $f_r$ is a parametrized function specific to an edge type $r\in \mathcal {R}$ . Recall the available relations from Section "Ablation study" , namely, $\mathcal {R} =\lbrace $ DOC-BASED, MATCH, COREF, COMPLEMENT $\rbrace $ .
A gating mechanism regulates how much of the update message propagates to the next step. This provides the model a way to prevent completely overwriting past information. Indeed, if all necessary information to answer a question is present at a layer which is not the last, then the model should learn to stop using neighbouring information for the next steps. Gate levels are computed as
$$\mathbf {a}^{(\ell )}_i = \sigma \left( f_a\left([\mathbf {u}^{(\ell )}_i, \mathbf {h}^{(\ell )}_i ]\right) \right) \;,$$ (Eq. 23)
where $\sigma (\cdot )$ is the sigmoid function and $f_a$ a parametrized transformation. Ultimately, the updated representation is a gated combination of the previous representation and a non-linear transformation of the update message:
$$\mathbf {h}^{(\ell + 1)}_i = \phi (\mathbf {u}^{(\ell )}_i) \odot \mathbf {a}^{(\ell )}_i + \mathbf {h}^{(\ell )}_i \odot (1 - \mathbf {a}^{(\ell )}_i ) \;,$$ (Eq. 24)
where $\phi (\cdot )$ is any nonlinear function (we used $\tanh $ ) and $\odot $ stands for element-wise multiplication. All transformations $f_*$ are affine and they are not layer-dependent (since we would like to use as few parameters as possible to decrease model complexity promoting efficiency and scalability).
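To make the propagation rule concrete, here is a minimal PyTorch sketch of one such gated layer (Equations 22-24); the dense adjacency matrices and the handling of $|\mathcal {N}_i|$ are our simplifications.

```python
import torch
import torch.nn as nn

class GatedRGCNLayer(nn.Module):
    """One gated R-GCN propagation step (Equations 22-24) over dense adjacency matrices."""
    def __init__(self, dim, relations=("DOC-BASED", "MATCH", "COREF", "COMPLEMENT")):
        super().__init__()
        self.f_s = nn.Linear(dim, dim)                                  # self transformation
        self.f_r = nn.ModuleDict({r: nn.Linear(dim, dim) for r in relations})
        self.f_a = nn.Linear(2 * dim, dim)                              # gate

    def forward(self, h, adjacency):
        # h: [N, dim]; adjacency: dict relation -> [N, N] {0,1} matrix
        any_edge = torch.stack(list(adjacency.values())).amax(dim=0)
        degree = any_edge.sum(dim=1, keepdim=True).clamp(min=1.0)       # |N_i|, avoiding division by zero
        u = self.f_s(h)
        for rel, adj in adjacency.items():
            u = u + adj @ self.f_r[rel](h) / degree                     # Eq. (22)
        a = torch.sigmoid(self.f_a(torch.cat([u, h], dim=-1)))          # Eq. (23)
        return torch.tanh(u) * a + h * (1 - a)                          # Eq. (24)
```

Since the transformations are shared across layers, applying the same layer instance $L$ times realizes the $L$ hops described above.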
Experiments
In this section, we compare our method against recent work and perform an ablation study using the WikiHop dataset BIBREF0 . See Appendix "Implementation and experiments details" in the supplementary material for a description of the hyper-parameters of our model and training details.
Comparison
In this experiment, we compare our Entity-GCN against recent prior work on the same task. We present test and development results (when available) for both versions of the dataset in Table 2 . From BIBREF0 , we list an oracle based on human performance as well as two standard reading comprehension models, namely BiDAF BIBREF3 and FastQA BIBREF6 . We also compare against Coref-GRU BIBREF12 , MHPGM BIBREF11 , and Weaver BIBREF10 . Additionally, we include results of MHQA-GRN BIBREF23 , from a recent arXiv preprint describing concurrent work. They jointly train graph neural networks and recurrent encoders. We report single runs of our two best single models and an ensemble on the unmasked test set (recall that the test set is not publicly available and the task organizers only report unmasked results) as well as both versions of the validation set.
Entity-GCN (best single model without coreference edges) outperforms all previous work by over 2 percentage points. We additionally re-ran the BiDAF baseline to compare training time: when using a single Titan X GPU, BiDAF and Entity-GCN process 12.5 and 57.8 document sets per second, respectively. Note that BIBREF0 had to use BiDAF with very small state dimensionalities (20), and a smaller batch size due to scalability issues (both memory and computation costs). We apply the same reductions when comparing. Finally, we also report an ensemble of 5 independently trained models. All models are trained on the same dataset splits with different weight initializations. The ensemble prediction is obtained as $\arg \max \limits _c \prod \limits _{i=1}^5 P_i(c|q, C_q, S_q)$ , where $P_i$ is the probability assigned by the $i$ th model.
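The ensembling step then amounts to the following small sketch (we sum log-probabilities rather than multiplying probabilities purely for numerical stability):

```python
import torch

def ensemble_prediction(per_model_probs):
    """per_model_probs: list of [num_candidates] probability vectors, one per trained model.
    Returns argmax_c of the product of P_i(c | q, C_q, S_q), computed in log-space."""
    log_probs = torch.stack([p.log() for p in per_model_probs])   # [num_models, num_candidates]
    return int(log_probs.sum(dim=0).argmax())
```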
Ablation study
To help determine the sources of improvements, we perform an ablation study using the publicly available validation set (see Table 3 ). We perform two groups of ablation, one on the embedding layer, to study the effect of ELMo, and one on the edges, to study how different relations affect the overall model performance.
We argue that ELMo is crucial, since we do not rely on any other context encoder. However, it is interesting to explore how our R-GCN performs without it. Therefore, in this experiment, we replace the deep contextualized embeddings of both the query and the nodes with GloVe BIBREF22 vectors (insensitive to context). Since we do not have any component in our model that processes the documents, we expect a drop in performance. In other words, in this ablation our model tries to answer questions without reading the context at all. For example, in Figure 1 , our model would be aware that “Stockholm” and “Sweden” appear in the same document, but any context words, including the ones encoding relations (e.g., “is the capital of”), will be hidden. Besides, in the masked case all mentions become `unknown' tokens with GloVe and therefore the predictions are equivalent to a random guess. Once the strong pre-trained encoder is out of the way, we also ablate the use of our R-GCN component, thus completely depriving the model of the inductive biases that aim at multi-hop reasoning.
The first important observation is that replacing ELMo by GloVe (GloVe with R-GCN in Table 3 ) still yields a competitive system that ranks far above baselines from BIBREF0 and even above the Coref-GRU of BIBREF12 , in terms of accuracy on (unmasked) validation set. The second important observation is that if we then remove R-GCN (GloVe w/o R-GCN in Table 3 ), we lose 8.0 points. That is, the R-GCN component pushes the model to perform above Coref-GRU still without accessing context, but rather by updating mention representations based on their relation to other ones. These results highlight the impact of our R-GCN component.
In this experiment we investigate the effect of the different relations available in the entity graph and processed by the R-GCN module. We start off by testing our stronger encoder (i.e., ELMo) in the absence of edges connecting mentions in the supporting documents (i.e., using only self-loops – No R-GCN in Table 3 ). The results suggest that WikiHop genuinely requires multi-hop inference, as our best model is 6.1% and 8.4% more accurate than this local model, in unmasked and masked settings, respectively. However, it also shows that ELMo representations capture predictive context features, without being explicitly trained for the task. It confirms that our goal of doing away with training expensive document encoders is a realistic one.
We then inspect our model's effectiveness in making use of the structure encoded in the graph. We start naively by fully-connecting all nodes within and across documents without distinguishing edges by type (No relation types in Table 3 ). We observe only marginal improvements with respect to ELMo alone (No R-GCN in Table 3 ) in both the unmasked and masked setting suggesting that a GCN operating over a naive entity graph would not add much to this task and a more informative graph construction and/or a more sophisticated parameterization is indeed needed.
Next, we ablate each type of relations independently, that is, we either remove connections of mentions that co-occur in the same document (DOC-BASED), connections between mentions matching exactly (MATCH), or edges predicted by the coreference system (COREF). The first thing to note is that the model makes better use of DOC-BASED connections than MATCH or COREF connections. This is mostly because i) the majority of the connections are indeed between mentions in the same document, and ii) without connecting mentions within the same document we remove important information since the model is unaware they appear closely in the document. Secondly, we notice that coreference links and complement edges seem to play a more marginal role. Though it may be surprising for coreference edges, recall that the MATCH heuristic already captures the easiest coreference cases, and for the rest the out-of-domain coreference system may not be reliable. Still, modelling all these different relations together gives our Entity-GCN a clear advantage. This is our best system evaluating on the development. Since Entity-GCN seems to gain little advantage using the coreference system, we report test results both with and without using it. Surprisingly, with coreference, we observe performance degradation on the test set. It is likely that the test documents are harder for the coreference system.
We do perform one last ablation, namely, we replace our heuristic for assigning edges and their labels by a model component that predicts them. The last row of Table 3 (Induced edges) shows model performance when edges are not predetermined but predicted. For this experiment, we use a bilinear function $f_e(\mathbf {\hat{x}}_i, \mathbf {\hat{x}}_j) = \sigma \left( \mathbf {\hat{x}}^\top _i \mathbf {W}_e \mathbf {\hat{x}}_j \right)$ that predicts the importance of a single edge connecting two nodes $i,j$ using the query-dependent representation of mentions (see Section "Node annotations" ). The performance drops below `No R-GCN' suggesting that it cannot learn these dependencies on its own.
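For reference, the bilinear scorer used in this ablation can be sketched as follows (dimensions and initialization are our assumptions):

```python
import torch
import torch.nn as nn

class BilinearEdgeScorer(nn.Module):
    """f_e(x_i, x_j) = sigmoid(x_i^T W_e x_j) over query-dependent mention encodings."""
    def __init__(self, dim):
        super().__init__()
        self.w_e = nn.Parameter(torch.empty(dim, dim))
        nn.init.xavier_uniform_(self.w_e)

    def forward(self, x):                              # x: [N, dim]
        return torch.sigmoid(x @ self.w_e @ x.t())     # [N, N] predicted edge importances
```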
Most results are stronger for the masked settings even though we do not apply the coreference resolution system in this setting due to masking. It is not surprising as coreferred mentions are labeled with the same identifier in the masked version, even if their original surface forms did not match ( BIBREF0 used Wikipedia links for masking). Indeed, in the masked version, an entity is always referred to via the same unique surface form (e.g., MASK1) within and across documents. In the unmasked setting, on the other hand, mentions to an entity may differ (e.g., “US” vs “United States”) and they might not be retrieved by the coreference system we are employing, making the task harder for all models. Therefore, as we rely mostly on exact matching when constructing our graph for the masked case, we are more effective in recovering coreference links on the masked rather than unmasked version.
In Figure 3 , we show how the model performs when the input graph is large; in particular, how Entity-GCN behaves as the number of candidate answers or the number of nodes increases.
Error analysis
In this section we provide an error analysis for our best single model predictions. First of all, we look at which types of questions our model answers well or poorly. There are more than 150 query types in the validation set, but we selected the three with the best and the three with the worst accuracy among those that have at least 50 supporting documents and at least 5 candidates. We show results in Table 4 . We observe that questions regarding places (birth and death) are harder for Entity-GCN. We then inspect samples where our model fails while assigning the highest likelihood and noticed two principal sources of failure: i) a mismatch between what is written in Wikipedia and what is annotated in Wikidata, and ii) a different degree of granularity (e.g., born in “London” vs “UK” could both be considered correct by a human but not when measuring accuracy). See Table 6 in the supplementary material for some reported samples.
Secondly, we study how the model performance degrades when the input graph is large. In particular, we observe a negative Pearson's correlation (-0.687) between accuracy and the number of candidate answers. However, the performance does not decrease steeply. The distribution of the number of candidates in the dataset peaks at 5 and has an average of approximately 20. Therefore, the model does not see many samples where there are a large number of candidate entities during training. Differently, we notice that as the number of nodes in the graph increases, the model performance drops but more gently (negative but closer to zero Pearson's correlation). This is important as document sets can be large in practical applications. See Figure 3 in the supplemental material for plots.
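Such correlations can be computed directly from per-instance results; a minimal sketch with invented numbers:

```python
from scipy.stats import pearsonr

# Hypothetical per-instance results: 1/0 for correct/incorrect plus a graph statistic.
correct = [1, 0, 1, 1, 0, 1, 1, 0]
num_candidates = [5, 38, 7, 12, 51, 9, 6, 44]
r, p = pearsonr(num_candidates, correct)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")  # negative r: accuracy tends to drop as |C_q| grows
```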
In Table 6 , we report three samples from the WikiHop development set where our Entity-GCN fails. In particular, we show two instances where our model predicts the answer with high confidence, and one where it does not. We comment on these samples, explaining why our model might fail in these cases.
Related work
In previous work, BiDAF BIBREF3 , FastQA BIBREF6 , Coref-GRU BIBREF12 , MHPGM BIBREF11 , and Weaver / Jenga BIBREF10 have been applied to multi-document question answering. The first two mainly focus on single document QA and BIBREF0 adapted both of them to work with WikiHop. They process each instance of the dataset by concatenating all $d \in S_q$ in a random order, adding document separator tokens. They train using the first answer mention in the concatenated document and evaluate exact match at test time. Coref-GRU, similarly to us, encodes relations between entity mentions in the document. Instead of using graph neural network layers, as we do, they augment RNNs with jump links corresponding to pairs of coreferent mentions. MHPGM uses a multi-attention mechanism in combination with external commonsense relations to perform multiple hops of reasoning. Weaver is a deep co-encoding model that uses several alternating bi-LSTMs to process the concatenated documents and the query.
Graph neural networks have been shown successful on a number of NLP tasks BIBREF24 , BIBREF25 , BIBREF26 , including those involving document level modeling BIBREF27 . They have also been applied in the context of asking questions about knowledge contained in a knowledge base BIBREF28 . In schlichtkrull2017modeling, GCNs are used to capture reasoning chains in a knowledge base. Our work and unpublished concurrent work by BIBREF23 are the first to study graph neural networks in the context of multi-document QA. Besides differences in the architecture, BIBREF23 propose to train a combination of a graph recurrent network and an RNN encoder. We do not train any RNN document encoders in this work.
Conclusion
We designed a graph neural network that operates over a compact graph representation of a set of documents where nodes are mentions to entities and edges signal relations such as within and cross-document coreference. The model learns to answer questions by gathering evidence from different documents via a differentiable message passing algorithm that updates node representations based on their neighbourhood. Our model outperforms published results where ablations show substantial evidence in favour of multi-step reasoning. Moreover, we make the model fast by using pre-trained (contextual) embeddings.
Acknowledgments
We would like to thank Johannes Welbl for helping to test our system on WikiHop. This project is supported by SAP Innovation Center Network, ERC Starting Grant BroadSem (678254) and the Dutch Organization for Scientific Research (NWO) VIDI 639.022.518. Wilker Aziz is supported by the Dutch Organisation for Scientific Research (NWO) VICI Grant nr. 277-89-002.
Architecture
See Table 5 for an outline of the Entity-GCN architecture. The computational steps are as follows:
ELMo embeddings are a concatenation of three 1024-dimensional vectors resulting in 3072-dimensional input vectors $\lbrace \mathbf {x}_i\rbrace _{i=1}^N$ .
For the query representation $\mathbf {q}$ , we apply 2 bi-LSTM layers of 256 and 128 hidden units to its ELMo vectors. The concatenation of the forward and backward states results in a 256-dimensional question representation.
ELMo embeddings of candidates are projected to 256-dimensional vectors, concatenated to $\mathbf {q}$ , and further transformed with a two-layer MLP of 1024 and 512 hidden units into 512-dimensional query-aware entity representations $\lbrace \mathbf {\hat{x}}_i\rbrace _{i=1}^N \in \mathbb {R}^{512}$ .
All transformations $f_*$ in R-GCN layers are affine and they keep the input and output dimensionality of node representations the same (512-dimensional).
Eventually, a 2-layer MLP with [256, 128] hidden units takes the concatenation of $\lbrace \mathbf {h}_i^{(L)}\rbrace _{i=1}^N$ and $\mathbf {q}$ to predict the probability that a candidate node $v_i$ may be the answer to the query $q$ (see Equation 16 ).
During preliminary trials, we experimented with different numbers of R-GCN layers (in the range 1-7). We observed that with WikiHop, for $L \ge 3$ , models reach essentially the same performance, but more layers increase the time required to train them. Besides, we observed that the gating mechanism learns to keep more and more information from the past at each layer, making it unnecessary to have more layers than required.
Training details
We train our models with a batch size of 32 for at most 20 epochs using the Adam optimizer BIBREF29 with $\beta _1=0.9$ , $\beta _2=0.999$ and a learning rate of $10^{-4}$ . To help against overfitting, we employ dropout (drop rate $\in \lbrace 0, 0.1, 0.15, 0.2, 0.25\rbrace $ ) BIBREF30 and early-stopping on validation accuracy. We report the best results of each experiment based on accuracy on the validation set. | Human, FastQA, BiDAF, Coref-GRU, MHPGM, Weaver / Jenga, MHQA-GRN |
bbeb74731b9ac7f61e2d74a7d9ea74caa85e62ef | bbeb74731b9ac7f61e2d74a7d9ea74caa85e62ef_0 | Q: How many documents at a time can Entity-GCN handle?
Text: Introduction
The long-standing goal of natural language understanding is the development of systems which can acquire knowledge from text collections. Fresh interest in reading comprehension tasks was sparked by the availability of large-scale datasets, such as SQuAD BIBREF1 and CNN/Daily Mail BIBREF2 , enabling end-to-end training of neural models BIBREF3 , BIBREF4 , BIBREF5 . These systems, given a text and a question, need to answer the query relying on the given document. Recently, it has been observed that most questions in these datasets do not require reasoning across the document, but they can be answered relying on information contained in a single sentence BIBREF6 . The last generation of large-scale reading comprehension datasets, such as NarrativeQA BIBREF7 , TriviaQA BIBREF8 , and RACE BIBREF9 , has been created in such a way as to address this shortcoming and to ensure that systems relying only on local information cannot achieve competitive performance.
Even though these new datasets are challenging and require reasoning within documents, many question answering and search applications require aggregation of information across multiple documents. The WikiHop dataset BIBREF0 was explicitly created to facilitate the development of systems dealing with these scenarios. Each example in WikiHop consists of a collection of documents, a query and a set of candidate answers (Figure 1 ). Though there is no guarantee that a question cannot be answered by relying just on a single sentence, the authors ensure that it is answerable using a chain of reasoning crossing document boundaries.
Though an important practical problem, the multi-hop setting has so far received little attention. The methods reported by BIBREF0 approach the task by merely concatenating all documents into a single long text and training a standard RNN-based reading comprehension model, namely, BiDAF BIBREF3 and FastQA BIBREF6 . Document concatenation in this setting is also used in Weaver BIBREF10 and MHPGM BIBREF11 . The only published paper which goes beyond concatenation is due to BIBREF12 , where they augment RNNs with jump-links corresponding to co-reference edges. Though these edges provide a structural bias, the RNN states are still tasked with passing the information across the document and performing multi-hop reasoning.
Instead, we frame question answering as an inference problem on a graph representing the document collection. Nodes in this graph correspond to named entities in a document whereas edges encode relations between them (e.g., cross- and within-document coreference links or simply co-occurrence in a document). We assume that reasoning chains can be captured by propagating local contextual information along edges in this graph using a graph convolutional network (GCN) BIBREF13 .
The multi-document setting imposes scalability challenges. In realistic scenarios, a system needs to learn to answer a query for a given collection (e.g., Wikipedia or a domain-specific set of documents). In such scenarios one cannot afford to run expensive document encoders (e.g., RNN or transformer-like self-attention BIBREF14 ), unless the computation can be preprocessed both at train and test time. Even if (similarly to WikiHop creators) one considers a coarse-to-fine approach, where a set of potentially relevant documents is provided, re-encoding them in a query-specific way remains the bottleneck. In contrast to other proposed methods (e.g., BIBREF12 , BIBREF10 , BIBREF3 ), we avoid training expensive document encoders.
In our approach, only a small query encoder, the GCN layers and a simple feed-forward answer selection component are learned. Instead of training RNN encoders, we use contextualized embeddings (ELMo) to obtain initial (local) representations of nodes. This implies that only a lightweight computation has to be performed online, both at train and test time, whereas the rest is preprocessed. Even in the somewhat contrived WikiHop setting, where fairly small sets of candidates are provided, the model is at least 5 times faster to train than BiDAF. Interestingly, when we substitute ELMo with simple pre-trained word embeddings, Entity-GCN still performs on par with many techniques that use expensive question-aware recurrent document encoders.
Despite not using recurrent document encoders, the full Entity-GCN model achieves over 2% improvement over the best previously-published results. As our model is efficient, we also report results of an ensemble, which brings a further 3.6% improvement and is only 3% below the human performance reported by BIBREF0 . Our contributions can be summarized as follows:
Method
In this section we explain our method. We first introduce the dataset we focus on, WikiHop by BIBREF0 , as well as the task abstraction. We then present the building blocks that make up our Entity-GCN model, namely, an entity graph used to relate mentions to entities within and across documents, a document encoder used to obtain representations of mentions in context, and a relational graph convolutional network that propagates information through the entity graph.
Dataset and task abstraction
The WikiHop dataset consists of tuples $\langle q, S_q, C_q, a^\star \rangle $ where: $q$ is a query/question, $S_q$ is a set of supporting documents, $C_q$ is a set of candidate answers (all of which are entities mentioned in $S_q$ ), and $a^\star \in C_q$ is the entity that correctly answers the question. WikiHop is assembled assuming that there exists a corpus and a knowledge base (KB) related to each other. The KB contains triples $\langle s, r, o \rangle $ where $s$ is a subject entity, $o$ an object entity, and $r$ a unidirectional relation between them. BIBREF0 used Wikipedia as corpus and Wikidata BIBREF15 as KB. The KB is only used for constructing WikiHop: BIBREF0 retrieved the supporting documents $S_q$ from the corpus looking at mentions of subject and object entities in the text. Note that the set $S_q$ (not the KB) is provided to the QA system, and not all of the supporting documents are relevant for the query but some of them act as distractors. Queries, on the other hand, are not expressed in natural language, but instead consist of tuples $\langle s, r, ? \rangle $ where the object entity is unknown and it has to be inferred by reading the support documents. Therefore, answering a query corresponds to finding the entity $a^\star $ that is the object of a tuple in the KB with subject $s$ and relation $r$ among the provided set of candidate answers $C_q$ .
The goal is to learn a model that can identify the correct answer $a^\star $ from the set of supporting documents $S_q$ . To that end, we exploit the available supervision to train a neural network that computes scores for candidates in $C_q$ . We estimate the parameters of the architecture by maximizing the likelihood of observations. For prediction, we then output the candidate that achieves the highest probability. In the following, we present our model discussing the design decisions that enable multi-step reasoning and an efficient computation.
Reasoning on an entity graph
In an offline step, we organize the content of each training instance in a graph connecting mentions of candidate answers within and across supporting documents. For a given query $q = \langle s, r, ? \rangle $ , we identify mentions in $S_q$ of the entities in $C_q \cup \lbrace s\rbrace $ and create one node per mention. This process is based on the following heuristic:
we consider mention spans in $S_q$ exactly matching an element of $C_q \cup \lbrace s\rbrace $ . Admittedly, this is a rather simple strategy which may suffer from low recall.
we use predictions from a coreference resolution system to add mentions of elements in $C_q \cup \lbrace s\rbrace $ beyond exact matching (including both noun phrases and anaphoric pronouns). In particular, we use the end-to-end coreference resolution by BIBREF16 .
we discard mentions which are ambiguously resolved to multiple coreference chains; this may sacrifice recall, but avoids propagating ambiguity.
To each node $v_i$ , we associate a continuous annotation $\mathbf {x}_i \in \mathbb {R}^D$ which represents an entity in the context where it was mentioned (details in Section "Node annotations" ). We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges). Note that MATCH edges when connecting mentions in the same document are mostly included in the set of edges predicted by the coreference system. Having the two types of edges lets us distinguish between less reliable edges provided by the coreference system and more reliable (but also more sparse) edges given by the exact-match heuristic. We treat these three types of connections as three different types of relations. See Figure 2 for an illustration. In addition to that, and to prevent having disconnected graphs, we add a fourth type of relation (COMPLEMENT edge) between any two nodes that are not connected with any of the other relations. We can think of these edges as those in the complement set of the entity graph with respect to a fully connected graph.
Our model then approaches multi-step reasoning by transforming node representations (Section "Node annotations" for details) with a differentiable message passing algorithm that propagates information through the entity graph. The algorithm is parameterized by a graph convolutional network (GCN) BIBREF13 , in particular, we employ relational-GCNs BIBREF17 , an extended version that accommodates edges of different types. In Section "Entity relational graph convolutional network" we describe the propagation rule.
Each step of the algorithm (also referred to as a hop) updates all node representations in parallel. In particular, a node is updated as a function of messages from its direct neighbours, and a message is possibly specific to a certain relation. At the end of the first step, every node is aware of every other node it connects directly to. Besides, the neighbourhood of a node may include mentions of the same entity as well as others (e.g., same-document relation), and these mentions may have occurred in different documents. Taking this idea recursively, each further step of the algorithm allows a node to indirectly interact with nodes already known to their neighbours. After $L$ layers of R-GCN, information has been propagated through paths connecting up to $L+1$ nodes.
We start with node representations $\lbrace \mathbf {h}_i^{(0)}\rbrace _{i=1}^N$ , and transform them by applying $L$ layers of R-GCN obtaining $\lbrace \mathbf {h}_i^{(L)}\rbrace _{i=1}^N$ . Together with a representation $\mathbf {q}$ of the query, we define a distribution over candidate answers and we train maximizing the likelihood of observations. The probability of selecting a candidate $c \in C_q$ as an answer is then
$$ P(c|q, C_q, S_q) \propto \exp \left(\max _{i \in \mathcal {M}_c} f_o([\mathbf {q}, \mathbf {h}^{(L)}_i]) \right)\;,$$ (Eq. 16)
where $f_o$ is a parameterized affine transformation, and $\mathcal {M}_c$ is the set of node indices such that $i\in \mathcal {M}_c$ only if node $v_i$ is a mention of $c$ . The $\max $ operator in Equation 16 is necessary to select the node with highest predicted probability since a candidate answer is realized in multiple locations via different nodes.
Node annotations
Keeping in mind we want an efficient model, we encode words in supporting documents and in the query using only a pre-trained model for contextualized word representations rather than training our own encoder. Specifically, we use ELMo BIBREF20 , a pre-trained bi-directional language model that relies on character-based input representation. ELMo representations, differently from other pre-trained word-based models (e.g., word2vec BIBREF21 or GloVe BIBREF22 ), are contextualized since each token representation depends on the entire text excerpt (i.e., the whole sentence).
We choose not to fine-tune nor propagate gradients through the ELMo architecture, as doing so would defeat the goal of not having specialized RNN encoders. In the experiments, we will also ablate the use of ELMo, showing how our model behaves using non-contextualized word representations (we use GloVe).
ELMo encodings are used to produce a set of representations $\lbrace \mathbf {x}_i\rbrace _{i=1}^N$ , where $\mathbf {x}_i \in \mathbb {R}^D$ denotes the $i$ th candidate mention in context. Note that these representations do not depend on the query yet and no trainable model was used to process the documents so far, that is, we use ELMo as a fixed pre-trained encoder. Therefore, we can pre-compute representation of mentions once and store them for later use.
ELMo encodings are used to produce a query representation $\mathbf {q} \in \mathbb {R}^K$ as well. Here, $\mathbf {q}$ is a concatenation of the final outputs from a bidirectional RNN layer trained to re-encode ELMo representations of words in the query. The vector $\mathbf {q}$ is used to compute a query-dependent representation of mentions $\lbrace \mathbf { \hat{x}}_i\rbrace _{i=1}^N$ as well as to compute a probability distribution over candidates (as in Equation 16 ). Query-dependent mention encodings $\mathbf {\hat{x}}_i = f_x(\mathbf {q}, \mathbf {x}_i)$ are generated by a trainable function $f_x$ which is parameterized by a feed-forward neural network.
Entity relational graph convolutional network
Our model uses a gated version of the original R-GCN propagation rule. At the first layer, all hidden node representations are initialized with the query-aware encodings $\mathbf {h}_i^{(0)} = \mathbf {\hat{x}}_i$ . Then, at each layer $0\le \ell \le L$ , the update message $\mathbf {u}_i^{(\ell )}$ to the $i$ th node is a sum of a transformation $f_s$ of the current node representation $\mathbf {h}^{(\ell )}_i$ and transformations of its neighbours:
$$\mathbf {u}^{(\ell )}_i = f_s(\mathbf {h}^{(\ell )}_i) + \frac{1}{|\mathcal {N}_i|} \sum _{j \in \mathcal {N}_i} \sum _{r \in \mathcal {R}_{ij}} f_r(\mathbf {h}_j^{(\ell )})\;,$$ (Eq. 22)
where $\mathcal {N}_i$ is the set of indices of nodes neighbouring the $i$ th node, $\mathcal {R}_{ij}$ is the set of edge annotations between $i$ and $j$ , and $f_r$ is a parametrized function specific to an edge type $r\in \mathcal {R}$ . Recall the available relations from Section "Ablation study" , namely, $\mathcal {R} =\lbrace $ DOC-BASED, MATCH, COREF, COMPLEMENT $\rbrace $ .
A gating mechanism regulates how much of the update message propagates to the next step. This provides the model a way to prevent completely overwriting past information. Indeed, if all necessary information to answer a question is present at a layer which is not the last, then the model should learn to stop using neighbouring information for the next steps. Gate levels are computed as
$$\mathbf {a}^{(\ell )}_i = \sigma \left( f_a\left([\mathbf {u}^{(\ell )}_i, \mathbf {h}^{(\ell )}_i ]\right) \right) \;,$$ (Eq. 23)
where $\sigma (\cdot )$ is the sigmoid function and $f_a$ a parametrized transformation. Ultimately, the updated representation is a gated combination of the previous representation and a non-linear transformation of the update message:
$$\mathbf {h}^{(\ell + 1)}_i = \phi (\mathbf {u}^{(\ell )}_i) \odot \mathbf {a}^{(\ell )}_i + \mathbf {h}^{(\ell )}_i \odot (1 - \mathbf {a}^{(\ell )}_i ) \;,$$ (Eq. 24)
where $\phi (\cdot )$ is any nonlinear function (we used $\tanh $ ) and $\odot $ stands for element-wise multiplication. All transformations $f_*$ are affine and they are not layer-dependent (since we would like to use as few parameters as possible to decrease model complexity promoting efficiency and scalability).
Experiments
In this section, we compare our method against recent work and perform an ablation study using the WikiHop dataset BIBREF0 . See Appendix "Implementation and experiments details" in the supplementary material for a description of the hyper-parameters of our model and training details.
Comparison
In this experiment, we compare our Entity-GCN against recent prior work on the same task. We present test and development results (when available) for both versions of the dataset in Table 2 . From BIBREF0 , we list an oracle based on human performance as well as two standard reading comprehension models, namely BiDAF BIBREF3 and FastQA BIBREF6 . We also compare against Coref-GRU BIBREF12 , MHPGM BIBREF11 , and Weaver BIBREF10 . Additionally, we include results of MHQA-GRN BIBREF23 , from a recent arXiv preprint describing concurrent work. They jointly train graph neural networks and recurrent encoders. We report single runs of our two best single models and an ensemble on the unmasked test set (recall that the test set is not publicly available and the task organizers only report unmasked results) as well as both versions of the validation set.
Entity-GCN (best single model without coreference edges) outperforms all previous work by over 2 percentage points. We additionally re-ran the BiDAF baseline to compare training time: when using a single Titan X GPU, BiDAF and Entity-GCN process 12.5 and 57.8 document sets per second, respectively. Note that BIBREF0 had to use BiDAF with very small state dimensionalities (20), and a smaller batch size due to scalability issues (both memory and computation costs). We apply the same reductions when comparing. Finally, we also report an ensemble of 5 independently trained models. All models are trained on the same dataset splits with different weight initializations. The ensemble prediction is obtained as $\arg \max \limits _c \prod \limits _{i=1}^5 P_i(c|q, C_q, S_q)$ , where $P_i$ is the probability assigned by the $i$ th model.
Ablation study
To help determine the sources of improvements, we perform an ablation study using the publicly available validation set (see Table 3 ). We perform two groups of ablation, one on the embedding layer, to study the effect of ELMo, and one on the edges, to study how different relations affect the overall model performance.
We argue that ELMo is crucial, since we do not rely on any other context encoder. However, it is interesting to explore how our R-GCN performs without it. Therefore, in this experiment, we replace the deep contextualized embeddings of both the query and the nodes with GloVe BIBREF22 vectors (insensitive to context). Since we do not have any component in our model that processes the documents, we expect a drop in performance. In other words, in this ablation our model tries to answer questions without reading the context at all. For example, in Figure 1 , our model would be aware that “Stockholm” and “Sweden” appear in the same document, but any context words, including the ones encoding relations (e.g., “is the capital of”), will be hidden. Besides, in the masked case all mentions become `unknown' tokens with GloVe and therefore the predictions are equivalent to a random guess. Once the strong pre-trained encoder is out of the way, we also ablate the use of our R-GCN component, thus completely depriving the model of the inductive biases that aim at multi-hop reasoning.
The first important observation is that replacing ELMo by GloVe (GloVe with R-GCN in Table 3 ) still yields a competitive system that ranks far above baselines from BIBREF0 and even above the Coref-GRU of BIBREF12 , in terms of accuracy on (unmasked) validation set. The second important observation is that if we then remove R-GCN (GloVe w/o R-GCN in Table 3 ), we lose 8.0 points. That is, the R-GCN component pushes the model to perform above Coref-GRU still without accessing context, but rather by updating mention representations based on their relation to other ones. These results highlight the impact of our R-GCN component.
In this experiment we investigate the effect of the different relations available in the entity graph and processed by the R-GCN module. We start off by testing our stronger encoder (i.e., ELMo) in the absence of edges connecting mentions in the supporting documents (i.e., using only self-loops – No R-GCN in Table 3 ). The results suggest that WikiHop genuinely requires multi-hop inference, as our best model is 6.1% and 8.4% more accurate than this local model, in unmasked and masked settings, respectively. However, it also shows that ELMo representations capture predictive context features, without being explicitly trained for the task. It confirms that our goal of doing away with training expensive document encoders is a realistic one.
We then inspect our model's effectiveness in making use of the structure encoded in the graph. We start naively by fully-connecting all nodes within and across documents without distinguishing edges by type (No relation types in Table 3 ). We observe only marginal improvements with respect to ELMo alone (No R-GCN in Table 3 ) in both the unmasked and masked setting suggesting that a GCN operating over a naive entity graph would not add much to this task and a more informative graph construction and/or a more sophisticated parameterization is indeed needed.
Next, we ablate each type of relations independently, that is, we either remove connections of mentions that co-occur in the same document (DOC-BASED), connections between mentions matching exactly (MATCH), or edges predicted by the coreference system (COREF). The first thing to note is that the model makes better use of DOC-BASED connections than MATCH or COREF connections. This is mostly because i) the majority of the connections are indeed between mentions in the same document, and ii) without connecting mentions within the same document we remove important information since the model is unaware they appear closely in the document. Secondly, we notice that coreference links and complement edges seem to play a more marginal role. Though it may be surprising for coreference edges, recall that the MATCH heuristic already captures the easiest coreference cases, and for the rest the out-of-domain coreference system may not be reliable. Still, modelling all these different relations together gives our Entity-GCN a clear advantage. This is our best system evaluating on the development. Since Entity-GCN seems to gain little advantage using the coreference system, we report test results both with and without using it. Surprisingly, with coreference, we observe performance degradation on the test set. It is likely that the test documents are harder for the coreference system.
We do perform one last ablation, namely, we replace our heuristic for assigning edges and their labels by a model component that predicts them. The last row of Table 3 (Induced edges) shows model performance when edges are not predetermined but predicted. For this experiment, we use a bilinear function $f_e(\mathbf {\hat{x}}_i, \mathbf {\hat{x}}_j) = \sigma \left( \mathbf {\hat{x}}^\top _i \mathbf {W}_e \mathbf {\hat{x}}_j \right)$ that predicts the importance of a single edge connecting two nodes $i,j$ using the query-dependent representation of mentions (see Section "Node annotations" ). The performance drops below `No R-GCN' suggesting that it cannot learn these dependencies on its own.
Most results are stronger for the masked settings even though we do not apply the coreference resolution system in this setting due to masking. It is not surprising as coreferred mentions are labeled with the same identifier in the masked version, even if their original surface forms did not match ( BIBREF0 used Wikipedia links for masking). Indeed, in the masked version, an entity is always referred to via the same unique surface form (e.g., MASK1) within and across documents. In the unmasked setting, on the other hand, mentions to an entity may differ (e.g., “US” vs “United States”) and they might not be retrieved by the coreference system we are employing, making the task harder for all models. Therefore, as we rely mostly on exact matching when constructing our graph for the masked case, we are more effective in recovering coreference links on the masked rather than unmasked version.
In Figure 3 , we show how model performance changes when the input graph is large, in particular, how Entity-GCN performs as the number of candidate answers or the number of nodes increases.
Error analysis
In this section we provide an error analysis for our best single model's predictions. First of all, we look at which types of questions our model answers well or poorly. There are more than 150 query types in the validation set, but we selected the three with the best and the three with the worst accuracy among those that have at least 50 supporting documents and at least 5 candidates. We show results in Table 4 . We observe that questions regarding places (birth and death) are harder for Entity-GCN. We then inspected samples where our model fails while assigning the highest likelihood and noticed two principal sources of failure: i) a mismatch between what is written in Wikipedia and what is annotated in Wikidata, and ii) a different degree of granularity (e.g., born in “London” vs “UK” could both be considered correct by a human but not when measuring accuracy). See Table 6 in the supplementary material for some reported samples.
Secondly, we study how the model performance degrades when the input graph is large. In particular, we observe a negative Pearson's correlation (-0.687) between accuracy and the number of candidate answers. However, the performance does not decrease steeply. The distribution of the number of candidates in the dataset peaks at 5 and has an average of approximately 20. Therefore, the model does not see many samples with a large number of candidate entities during training. In contrast, we notice that as the number of nodes in the graph increases, model performance drops more gently (a negative but closer-to-zero Pearson's correlation). This is important as document sets can be large in practical applications. See Figure 3 in the supplementary material for plots.
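To make this analysis concrete, the statistic can be computed as in the sketch below (synthetic numbers; the paper does not state whether the correlation is taken per instance or per candidate-set size, so we show the binned accuracy-versus-size variant):

```python
import numpy as np

# Synthetic stand-ins for per-instance outcomes on the validation set:
# 1 if the top-scored candidate was correct, plus each instance's number of candidates.
rng = np.random.default_rng(0)
n_candidates = rng.integers(2, 80, size=2000)
correct = (rng.random(2000) < 1.0 / np.log2(n_candidates + 1)).astype(float)

# Accuracy per candidate-set size, then Pearson's r between size and accuracy.
sizes = np.unique(n_candidates)
acc_per_size = np.array([correct[n_candidates == s].mean() for s in sizes])
r = np.corrcoef(sizes, acc_per_size)[0, 1]
print(f"Pearson correlation between #candidates and accuracy: {r:.3f}")
```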
In Table 6 , we report three samples from the WikiHop development set where our Entity-GCN fails. In particular, we show two instances where our model assigns high confidence to its answer, and one where it does not. We comment on these samples, explaining why our model might fail in these cases.
Related work
In previous work, BiDAF BIBREF3 , FastQA BIBREF6 , Coref-GRU BIBREF12 , MHPGM BIBREF11 , and Weaver / Jenga BIBREF10 have been applied to multi-document question answering. The first two mainly focus on single-document QA, and BIBREF0 adapted both of them to work with WikiHop. They process each instance of the dataset by concatenating all $d \in S_q$ in a random order, adding document separator tokens. They trained using the first answer mention in the concatenated document and evaluated with exact match at test time. Coref-GRU, similarly to us, encodes relations between entity mentions in the document. Instead of using graph neural network layers, as we do, they augment RNNs with jump links corresponding to pairs of coreferred mentions. MHPGM uses a multi-attention mechanism in combination with external commonsense relations to perform multiple hops of reasoning. Weaver is a deep co-encoding model that uses several alternating bi-LSTMs to process the concatenated documents and the query.
Graph neural networks have been shown to be successful on a number of NLP tasks BIBREF24 , BIBREF25 , BIBREF26 , including those involving document-level modeling BIBREF27 . They have also been applied in the context of asking questions about knowledge contained in a knowledge base BIBREF28 . In BIBREF17 , GCNs are used to capture reasoning chains in a knowledge base. Our work and unpublished concurrent work by BIBREF23 are the first to study graph neural networks in the context of multi-document QA. Besides differences in the architecture, BIBREF23 propose to train a combination of a graph recurrent network and an RNN encoder. We do not train any RNN document encoders in this work.
Conclusion
We designed a graph neural network that operates over a compact graph representation of a set of documents where nodes are mentions of entities and edges signal relations such as within- and cross-document coreference. The model learns to answer questions by gathering evidence from different documents via a differentiable message passing algorithm that updates node representations based on their neighbourhood. Our model outperforms published results, and ablations show substantial evidence in favour of multi-step reasoning. Moreover, we make the model fast by using pre-trained (contextual) embeddings.
Acknowledgments
We would like to thank Johannes Welbl for helping to test our system on WikiHop. This project is supported by SAP Innovation Center Network, ERC Starting Grant BroadSem (678254) and the Dutch Organization for Scientific Research (NWO) VIDI 639.022.518. Wilker Aziz is supported by the Dutch Organisation for Scientific Research (NWO) VICI Grant nr. 277-89-002.
Architecture
See Table 5 for an outline of the Entity-GCN architecture. The computational steps are as follows:
ELMo embeddings are a concatenation of three 1024-dimensional vectors resulting in 3072-dimensional input vectors $\lbrace \mathbf {x}_i\rbrace _{i=1}^N$ .
For the query representation $\mathbf {q}$ , we apply 2 bi-LSTM layers of 256 and 128 hidden units to its ELMo vectors. The concatenation of the forward and backward states results in a 256-dimensional question representation.
ELMo embeddings of candidates are projected to 256-dimensional vectors, concatenated with $\mathbf {q}$ , and further transformed with a two-layer MLP of 1024 and 512 hidden units into 512-dimensional query-aware entity representations $\lbrace \mathbf {\hat{x}}_i\rbrace _{i=1}^N \in \mathbb {R}^{512}$ .
All transformations $f_*$ in the R-GCN layers are affine and keep the input and output dimensionality of node representations the same (512-dimensional).
Finally, a 2-layer MLP with [256, 128] hidden units takes the concatenation of $\lbrace \mathbf {h}_i^{(L)}\rbrace _{i=1}^N$ and $\mathbf {q}$ to predict the probability that a candidate node $v_i$ is the answer to the query $q$ (see Equation 16 ).
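A rough PyTorch sketch of these dimensionalities is given below; it is not the authors' implementation, the class and method names are ours, and the $L$ R-GCN hops are stubbed out with an identity placeholder:

```python
import torch
import torch.nn as nn

class EntityGCNHead(nn.Module):
    """Input/output dimensionalities of the steps listed above (sketch only)."""

    def __init__(self):
        super().__init__()
        # Query encoder: two bi-LSTM layers with 256 and 128 hidden units.
        self.lstm1 = nn.LSTM(3072, 256, bidirectional=True, batch_first=True)
        self.lstm2 = nn.LSTM(512, 128, bidirectional=True, batch_first=True)
        # Candidate-mention projection and query-aware transformation.
        self.proj = nn.Linear(3072, 256)
        self.mlp_in = nn.Sequential(nn.Linear(256 + 256, 1024), nn.ReLU(),
                                    nn.Linear(1024, 512), nn.ReLU())
        # Output MLP over the concatenation [h_i^(L); q].
        self.mlp_out = nn.Sequential(nn.Linear(512 + 256, 256), nn.ReLU(),
                                     nn.Linear(256, 128), nn.ReLU(),
                                     nn.Linear(128, 1))

    def encode_query(self, q_elmo):              # q_elmo: (1, T, 3072) ELMo vectors
        h, _ = self.lstm1(q_elmo)
        h, _ = self.lstm2(h)
        # Concatenate last forward and first backward states: 256-d query vector.
        return torch.cat([h[:, -1, :128], h[:, 0, 128:]], dim=-1)

    def forward(self, x_elmo, q_elmo):           # x_elmo: (N, 3072) mention vectors
        q = self.encode_query(q_elmo)            # (1, 256)
        q_rep = q.expand(x_elmo.size(0), -1)
        x_hat = self.mlp_in(torch.cat([self.proj(x_elmo), q_rep], dim=-1))   # (N, 512)
        h = x_hat                                # placeholder for L gated R-GCN hops
        return self.mlp_out(torch.cat([h, q_rep], dim=-1)).squeeze(-1)       # (N,)

model = EntityGCNHead()
print(model(torch.randn(7, 3072), torch.randn(1, 5, 3072)).shape)  # torch.Size([7])
```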
During preliminary trials, we experimented with different numbers of R-GCN layers (in the range 1-7). We observed that with WikiHop, for $L \ge 3$ models reach essentially the same performance, but more layers increase the time required to train them. Besides, we observed that the gating mechanism learns to keep more and more information from the past at each layer, making it unnecessary to have more layers than required.
Training details
We train our models with a batch size of 32 for at most 20 epochs using the Adam optimizer BIBREF29 with $\beta _1=0.9$ , $\beta _2=0.999$ and a learning rate of $10^{-4}$ . To help against overfitting, we employ dropout (drop rate $\in \lbrace 0, 0.1, 0.15, 0.2, 0.25\rbrace $ ) BIBREF30 and early stopping on validation accuracy. We report the best results of each experiment based on accuracy on the validation set. | Unanswerable |
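A skeleton of this training setup might look as follows (a sketch, not the authors' code: the data loaders, the `evaluate` function, the model's `loss` method and the patience value are placeholders, and the dropout rate is assumed to be fixed per run from the grid above):

```python
import torch

def train(model, train_loader, val_loader, evaluate, max_epochs=20, patience=3):
    """Adam with beta1=0.9, beta2=0.999, lr=1e-4, batch size 32, early stopping
    on validation accuracy. `patience` is an assumption; the paper only states
    that early stopping on validation accuracy is used."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
    best_acc, bad_epochs = 0.0, 0
    for epoch in range(max_epochs):
        model.train()
        for batch in train_loader:        # batches of 32 instances
            optimizer.zero_grad()
            loss = model.loss(batch)      # negative log-likelihood of the gold candidate
            loss.backward()
            optimizer.step()
        acc = evaluate(model, val_loader)
        if acc > best_acc:
            best_acc, bad_epochs = acc, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break                     # early stopping
    return best_acc
```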
93e8ce62361b9f687d5200d2e26015723721a90f | 93e8ce62361b9f687d5200d2e26015723721a90f_0 | Q: Did they use a relation extraction method to construct the edges in the graph?
Text: Introduction
The long-standing goal of natural language understanding is the development of systems which can acquire knowledge from text collections. Fresh interest in reading comprehension tasks was sparked by the availability of large-scale datasets, such as SQuAD BIBREF1 and CNN/Daily Mail BIBREF2 , enabling end-to-end training of neural models BIBREF3 , BIBREF4 , BIBREF5 . These systems, given a text and a question, need to answer the query relying on the given document. Recently, it has been observed that most questions in these datasets do not require reasoning across the document, but they can be answered relying on information contained in a single sentence BIBREF6 . The last generation of large-scale reading comprehension datasets, such as NarrativeQA BIBREF7 , TriviaQA BIBREF8 , and RACE BIBREF9 , has been created in such a way as to address this shortcoming and to ensure that systems relying only on local information cannot achieve competitive performance.
Even though these new datasets are challenging and require reasoning within documents, many question answering and search applications require aggregation of information across multiple documents. The WikiHop dataset BIBREF0 was explicitly created to facilitate the development of systems dealing with these scenarios. Each example in WikiHop consists of a collection of documents, a query and a set of candidate answers (Figure 1 ). Though there is no guarantee that a question cannot be answered by relying just on a single sentence, the authors ensure that it is answerable using a chain of reasoning crossing document boundaries.
Though an important practical problem, the multi-hop setting has so far received little attention. The methods reported by BIBREF0 approach the task by merely concatenating all documents into a single long text and training a standard RNN-based reading comprehension model, namely, BiDAF BIBREF3 and FastQA BIBREF6 . Document concatenation in this setting is also used in Weaver BIBREF10 and MHPGM BIBREF11 . The only published paper which goes beyond concatenation is due to BIBREF12 , where they augment RNNs with jump-links corresponding to co-reference edges. Though these edges provide a structural bias, the RNN states are still tasked with passing the information across the document and performing multi-hop reasoning.
Instead, we frame question answering as an inference problem on a graph representing the document collection. Nodes in this graph correspond to named entities in a document whereas edges encode relations between them (e.g., cross- and within-document coreference links or simply co-occurrence in a document). We assume that reasoning chains can be captured by propagating local contextual information along edges in this graph using a graph convolutional network (GCN) BIBREF13 .
The multi-document setting imposes scalability challenges. In realistic scenarios, a system needs to learn to answer a query for a given collection (e.g., Wikipedia or a domain-specific set of documents). In such scenarios one cannot afford to run expensive document encoders (e.g., RNN or transformer-like self-attention BIBREF14 ), unless the computation can be preprocessed both at train and test time. Even if (similarly to WikiHop creators) one considers a coarse-to-fine approach, where a set of potentially relevant documents is provided, re-encoding them in a query-specific way remains the bottleneck. In contrast to other proposed methods (e.g., BIBREF12 , BIBREF10 , BIBREF3 ), we avoid training expensive document encoders.
In our approach, only a small query encoder, the GCN layers and a simple feed-forward answer selection component are learned. Instead of training RNN encoders, we use contextualized embeddings (ELMo) to obtain initial (local) representations of nodes. This implies that only a lightweight computation has to be performed online, both at train and test time, whereas the rest is preprocessed. Even in the somewhat contrived WikiHop setting, where fairly small sets of candidates are provided, the model is at least 5 times faster to train than BiDAF. Interestingly, when we substitute ELMo with simple pre-trained word embeddings, Entity-GCN still performs on par with many techniques that use expensive question-aware recurrent document encoders.
Despite not using recurrent document encoders, the full Entity-GCN model achieves over 2% improvement over the best previously published results. As our model is efficient, we also report results of an ensemble, which brings a further 3.6% improvement and is only 3% below the human performance reported by BIBREF0 . Our contributions can be summarized as follows:
Method
In this section we explain our method. We first introduce the dataset we focus on, WikiHop by BIBREF0 , as well as the task abstraction. We then present the building blocks that make up our Entity-GCN model, namely, an entity graph used to relate mentions to entities within and across documents, a document encoder used to obtain representations of mentions in context, and a relational graph convolutional network that propagates information through the entity graph.
Dataset and task abstraction
The WikiHop dataset comprises tuples $\langle q, S_q, C_q, a^\star \rangle $ where: $q$ is a query/question, $S_q$ is a set of supporting documents, $C_q$ is a set of candidate answers (all of which are entities mentioned in $S_q$ ), and $a^\star \in C_q$ is the entity that correctly answers the question. WikiHop is assembled assuming that there exist a corpus and a knowledge base (KB) related to each other. The KB contains triples $\langle s, r, o \rangle $ where $s$ is a subject entity, $o$ an object entity, and $r$ a unidirectional relation between them. BIBREF0 used Wikipedia as corpus and Wikidata BIBREF15 as KB. The KB is only used for constructing WikiHop: BIBREF0 retrieved the supporting documents $S_q$ from the corpus looking at mentions of subject and object entities in the text. Note that the set $S_q$ (not the KB) is provided to the QA system, and not all of the supporting documents are relevant for the query; some of them act as distractors. Queries, on the other hand, are not expressed in natural language, but instead consist of tuples $\langle s, r, ? \rangle $ where the object entity is unknown and has to be inferred by reading the support documents. Therefore, answering a query corresponds to finding the entity $a^\star $ that is the object of a tuple in the KB with subject $s$ and relation $r$ among the provided set of candidate answers $C_q$ .
The goal is to learn a model that can identify the correct answer $a^\star $ from the set of supporting documents $S_q$ . To that end, we exploit the available supervision to train a neural network that computes scores for candidates in $C_q$ . We estimate the parameters of the architecture by maximizing the likelihood of observations. For prediction, we then output the candidate that achieves the highest probability. In the following, we present our model discussing the design decisions that enable multi-step reasoning and an efficient computation.
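A minimal data structure for one such tuple could look as follows (field names and the toy instance are ours, loosely in the spirit of Figure 1, not taken from the dataset):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class WikiHopInstance:
    """One WikiHop tuple <q, S_q, C_q, a*>."""
    query: Tuple[str, str]   # (subject entity s, relation r); the object is unknown
    supports: List[str]      # supporting documents S_q (some act as distractors)
    candidates: List[str]    # candidate answers C_q, all mentioned in S_q
    answer: str              # gold answer a*, an element of C_q

example = WikiHopInstance(
    query=("Stockholm Palace", "country"),
    supports=["Stockholm Palace is the official residence ... located in Stockholm.",
              "Stockholm is the capital of Sweden ..."],
    candidates=["Sweden", "Norway", "Denmark"],
    answer="Sweden",
)
assert example.answer in example.candidates
```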
Reasoning on an entity graph
In an offline step, we organize the content of each training instance in a graph connecting mentions of candidate answers within and across supporting documents. For a given query $q = \langle s, r, ? \rangle $ , we identify mentions in $S_q$ of the entities in $C_q \cup \lbrace s\rbrace $ and create one node per mention. This process is based on the following heuristic:
we consider mentions spans in $S_q$ exactly matching an element of $C_q \cup \lbrace s\rbrace $ . Admittedly, this is a rather simple strategy which may suffer from low recall.
we use predictions from a coreference resolution system to add mentions of elements in $C_q \cup \lbrace s\rbrace $ beyond exact matching (including both noun phrases and anaphoric pronouns). In particular, we use the end-to-end coreference resolution by BIBREF16 .
we discard mentions which are ambiguously resolved to multiple coreference chains; this may sacrifice recall, but avoids propagating ambiguity.
To each node $v_i$ , we associate a continuous annotation $\mathbf {x}_i \in \mathbb {R}^D$ which represents an entity in the context where it was mentioned (details in Section "Node annotations" ). We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges). Note that MATCH edges when connecting mentions in the same document are mostly included in the set of edges predicted by the coreference system. Having the two types of edges lets us distinguish between less reliable edges provided by the coreference system and more reliable (but also more sparse) edges given by the exact-match heuristic. We treat these three types of connections as three different types of relations. See Figure 2 for an illustration. In addition to that, and to prevent having disconnected graphs, we add a fourth type of relation (COMPLEMENT edge) between any two nodes that are not connected with any of the other relations. We can think of these edges as those in the complement set of the entity graph with respect to a fully connected graph.
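The construction just described can be sketched as follows (a simplification using exact string matching only, with no tokenization and with COREF edges omitted since they come from an external coreference system; function and field names are ours):

```python
from itertools import combinations

def build_entity_graph(supports, candidates, subject):
    """Nodes are exact-match mentions of C_q and the query subject; edges carry
    DOC-BASED, MATCH, or (as a fallback) COMPLEMENT relation labels."""
    entities = set(candidates) | {subject}

    nodes = []                                     # one node per mention span
    for doc_id, doc in enumerate(supports):
        for ent in entities:
            start = doc.find(ent)
            while start != -1:
                nodes.append({"doc": doc_id, "entity": ent,
                              "span": (start, start + len(ent))})
                start = doc.find(ent, start + 1)

    edges = []                                     # typed edges between node indices
    for i, j in combinations(range(len(nodes)), 2):
        rels = set()
        if nodes[i]["doc"] == nodes[j]["doc"]:
            rels.add("DOC-BASED")
        if nodes[i]["entity"] == nodes[j]["entity"]:
            rels.add("MATCH")
        if not rels:
            rels.add("COMPLEMENT")                 # keeps the graph connected
        edges.append((i, j, rels))
    return nodes, edges

nodes, edges = build_entity_graph(
    ["Stockholm is the capital of Sweden.", "The museum is located in Stockholm."],
    candidates=["Sweden", "Norway"], subject="Stockholm")
print(len(nodes), len(edges))  # 3 nodes, 3 typed edges
```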
Our model then approaches multi-step reasoning by transforming node representations (Section "Node annotations" for details) with a differentiable message passing algorithm that propagates information through the entity graph. The algorithm is parameterized by a graph convolutional network (GCN) BIBREF13 , in particular, we employ relational-GCNs BIBREF17 , an extended version that accommodates edges of different types. In Section "Entity relational graph convolutional network" we describe the propagation rule.
Each step of the algorithm (also referred to as a hop) updates all node representations in parallel. In particular, a node is updated as a function of messages from its direct neighbours, and a message is possibly specific to a certain relation. At the end of the first step, every node is aware of every other node it connects directly to. Besides, the neighbourhood of a node may include mentions of the same entity as well as others (e.g., same-document relation), and these mentions may have occurred in different documents. Taking this idea recursively, each further step of the algorithm allows a node to indirectly interact with nodes already known to their neighbours. After $L$ layers of R-GCN, information has been propagated through paths connecting up to $L+1$ nodes.
We start with node representations $\lbrace \mathbf {h}_i^{(0)}\rbrace _{i=1}^N$ , and transform them by applying $L$ layers of R-GCN obtaining $\lbrace \mathbf {h}_i^{(L)}\rbrace _{i=1}^N$ . Together with a representation $\mathbf {q}$ of the query, we define a distribution over candidate answers and we train maximizing the likelihood of observations. The probability of selecting a candidate $c \in C_q$ as an answer is then
$$ P(c|q, C_q, S_q) \propto \exp \left(\max _{i \in \mathcal {M}_c} f_o([\mathbf {q}, \mathbf {h}^{(L)}_i]) \right)\;,$$ (Eq. 16)
where $f_o$ is a parameterized affine transformation, and $\mathcal {M}_c$ is the set of node indices such that $i\in \mathcal {M}_c$ only if node $v_i$ is a mention of $c$ . The $\max $ operator in Equation 16 is necessary to select the node with highest predicted probability since a candidate answer is realized in multiple locations via different nodes.
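In code, Equation 16 amounts to scoring every mention node, max-pooling over the mentions of each candidate, and normalizing; a sketch (with a linear layer standing in for $f_o$ , and all names ours):

```python
import torch

def candidate_distribution(h, q, mention_to_candidate, num_candidates, f_o):
    """h: (N, D) final node states; q: (K,) query vector;
    mention_to_candidate: LongTensor (N,) mapping node i to its candidate index."""
    scores = f_o(torch.cat([q.expand(h.size(0), -1), h], dim=-1)).squeeze(-1)  # (N,)
    pooled = torch.full((num_candidates,), float("-inf"))
    for c in range(num_candidates):
        mask = mention_to_candidate == c
        if mask.any():
            pooled[c] = scores[mask].max()        # max over mentions of candidate c
    return torch.softmax(pooled, dim=0)           # P(c | q, C_q, S_q)

# Toy usage: 6 mention nodes of 3 candidates.
N, D, K, C = 6, 512, 256, 3
f_o = torch.nn.Linear(K + D, 1)
probs = candidate_distribution(torch.randn(N, D), torch.randn(K),
                               torch.LongTensor([0, 0, 1, 1, 2, 2]), C, f_o)
print(probs.sum().item())  # 1.0
```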
Node annotations
Keeping in mind that we want an efficient model, we encode words in the supporting documents and in the query using only a pre-trained model for contextualized word representations rather than training our own encoder. Specifically, we use ELMo BIBREF20 , a pre-trained bi-directional language model that relies on character-based input representations. ELMo representations, unlike those of other pre-trained word-based models (e.g., word2vec BIBREF21 or GloVe BIBREF22 ), are contextualized since each token representation depends on the entire text excerpt (i.e., the whole sentence).
We choose not to fine tune nor propagate gradients through the ELMo architecture, as it would have defied the goal of not having specialized RNN encoders. In the experiments, we will also ablate the use of ELMo showing how our model behaves using non-contextualized word representations (we use GloVe).
ELMo encodings are used to produce a set of representations $\lbrace \mathbf {x}_i\rbrace _{i=1}^N$ , where $\mathbf {x}_i \in \mathbb {R}^D$ denotes the $i$ th candidate mention in context. Note that these representations do not depend on the query yet and no trainable model was used to process the documents so far, that is, we use ELMo as a fixed pre-trained encoder. Therefore, we can pre-compute representations of mentions once and store them for later use.
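Since the encoder is frozen, this pre-computation can be as simple as caching the mention vectors on disk; a sketch (the cache location and function names are ours):

```python
import os
import numpy as np

CACHE_DIR = "elmo_cache"  # hypothetical location for pre-computed vectors

def get_mention_vectors(instance_id, compute_fn):
    """Compute the (N, 3072) ELMo mention matrix for an instance once and reuse it:
    the representations depend on neither the query nor any trainable parameters."""
    path = os.path.join(CACHE_DIR, f"{instance_id}.npy")
    if os.path.exists(path):
        return np.load(path)
    vectors = compute_fn(instance_id)   # placeholder for the actual ELMo forward pass
    os.makedirs(CACHE_DIR, exist_ok=True)
    np.save(path, vectors)
    return vectors
```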
ELMo encodings are used to produce a query representation $\mathbf {q} \in \mathbb {R}^K$ as well. Here, $\mathbf {q}$ is a concatenation of the final outputs from a bidirectional RNN layer trained to re-encode ELMo representations of words in the query. The vector $\mathbf {q}$ is used to compute a query-dependent representation of mentions $\lbrace \mathbf { \hat{x}}_i\rbrace _{i=1}^N$ as well as to compute a probability distribution over candidates (as in Equation 16 ). Query-dependent mention encodings $\mathbf {\hat{x}}_i = f_x(\mathbf {q}, \mathbf {x}_i)$ are generated by a trainable function $f_x$ which is parameterized by a feed-forward neural network.
Entity relational graph convolutional network
Our model uses a gated version of the original R-GCN propagation rule. At the first layer, all hidden node representation are initialized with the query-aware encodings $\mathbf {h}_i^{(0)} = \mathbf {\hat{x}}_i$ . Then, at each layer $0\le \ell \le L$ , the update message $\mathbf {u}_i^{(\ell )}$ to the $i$ th node is a sum of a transformation $f_s$ of the current node representation $\mathbf {h}^{(\ell )}_i$ and transformations of its neighbours:
$$\mathbf {u}^{(\ell )}_i = f_s(\mathbf {h}^{(\ell )}_i) + \frac{1}{|\mathcal {N}_i|} \sum _{j \in \mathcal {N}_i} \sum _{r \in \mathcal {R}_{ij}} f_r(\mathbf {h}_j^{(\ell )})\;,$$ (Eq. 22)
where $\mathcal {N}_i$ is the set of indices of nodes neighbouring the $i$ th node, $\mathcal {R}_{ij}$ is the set of edge annotations between $i$ and $j$ , and $f_r$ is a parametrized function specific to an edge type $r\in \mathcal {R}$ . Recall the available relations from Section "Reasoning on an entity graph" , namely, $\mathcal {R} =\lbrace $ DOC-BASED, MATCH, COREF, COMPLEMENT $\rbrace $ .
A gating mechanism regulates how much of the update message propagates to the next step. This provides the model a way to prevent completely overwriting past information. Indeed, if all necessary information to answer a question is present at a layer which is not the last, then the model should learn to stop using neighbouring information for the next steps. Gate levels are computed as
$$\mathbf {a}^{(\ell )}_i = \sigma \left( f_a\left([\mathbf {u}^{(\ell )}_i, \mathbf {h}^{(\ell )}_i ]\right) \right) \;,$$ (Eq. 23)
where $\sigma (\cdot )$ is the sigmoid function and $f_a$ a parametrized transformation. Ultimately, the updated representation is a gated combination of the previous representation and a non-linear transformation of the update message:
$$\mathbf {h}^{(\ell + 1)}_i = \phi (\mathbf {u}^{(\ell )}_i) \odot \mathbf {a}^{(\ell )}_i + \mathbf {h}^{(\ell )}_i \odot (1 - \mathbf {a}^{(\ell )}_i ) \;,$$ (Eq. 24)
where $\phi (\cdot )$ is any nonlinear function (we used $\tanh $ ) and $\odot $ stands for element-wise multiplication. All transformations $f_*$ are affine and they are not layer-dependent (since we would like to use as few parameters as possible to decrease model complexity promoting efficiency and scalability).
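Putting Equations 22 - 24 together, one hop of the gated R-GCN can be sketched as follows (a minimal dense implementation, not the authors' code; the adjacency-matrix interface and class name are ours):

```python
import torch
import torch.nn as nn

RELATIONS = ["DOC-BASED", "MATCH", "COREF", "COMPLEMENT"]

class GatedRGCNLayer(nn.Module):
    """One hop of Equations 22-24. `adj` maps each relation name to a dense
    (N, N) 0/1 adjacency matrix (no self-loops; those are handled by f_s)."""

    def __init__(self, dim=512):
        super().__init__()
        self.f_s = nn.Linear(dim, dim)                         # self-loop transform
        self.f_r = nn.ModuleDict({r: nn.Linear(dim, dim) for r in RELATIONS})
        self.f_a = nn.Linear(2 * dim, dim)                     # gate

    def forward(self, h, adj):
        # |N_i|: number of neighbours of each node under any relation (Eq. 22).
        any_edge = torch.stack([adj[r] for r in RELATIONS]).sum(dim=0).clamp(max=1)
        degree = any_edge.sum(dim=1, keepdim=True).clamp(min=1)

        u = self.f_s(h)
        for r in RELATIONS:                                    # relation-specific messages
            u = u + adj[r] @ self.f_r[r](h) / degree
        a = torch.sigmoid(self.f_a(torch.cat([u, h], dim=-1)))  # Eq. 23
        return torch.tanh(u) * a + h * (1 - a)                  # Eq. 24

# Toy usage: 4 nodes connected only by COMPLEMENT edges.
N, dim = 4, 512
adj = {r: torch.zeros(N, N) for r in RELATIONS}
adj["COMPLEMENT"] = torch.ones(N, N) - torch.eye(N)
layer = GatedRGCNLayer(dim)
print(layer(torch.randn(N, dim), adj).shape)  # torch.Size([4, 512])
```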
Experiments
In this section, we compare our method against recent work and perform an ablation study using the WikiHop dataset BIBREF0 . See Appendix "Implementation and experiments details" in the supplementary material for a description of the hyper-parameters of our model and training details.
Comparison
In this experiment, we compare our Entity-GCN against recent prior work on the same task. We present test and development results (when available) for both versions of the dataset in Table 2 . From BIBREF0 , we list an oracle based on human performance as well as two standard reading comprehension models, namely BiDAF BIBREF3 and FastQA BIBREF6 . We also compare against Coref-GRU BIBREF12 , MHPGM BIBREF11 , and Weaver BIBREF10 . Additionally, we include results of MHQA-GRN BIBREF23 , from a recent arXiv preprint describing concurrent work. They jointly train graph neural networks and recurrent encoders. We report single runs of our two best single models and an ensemble on the unmasked test set (recall that the test set is not publicly available and the task organizers only report unmasked results) as well as on both versions of the validation set.
Entity-GCN (best single model without coreference edges) outperforms all previous work by over 2% points. We additionally re-ran BiDAF baseline to compare training time: when using a single Titan X GPU, BiDAF and Entity-GCN process 12.5 and 57.8 document sets per second, respectively. Note that BIBREF0 had to use BiDAF with very small state dimensionalities (20), and smaller batch size due to the scalability issues (both memory and computation costs). We compare applying the same reductions. Eventually, we also report an ensemble of 5 independently trained models. All models are trained on the same dataset splits with different weight initializations. The ensemble prediction is obtained as $\arg \max \limits _c \prod \limits _{i=1}^5 P_i(c|q, C_q, S_q)$ from each model.
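The ensemble rule corresponds to summing log-probabilities across models and taking the argmax; a sketch with synthetic distributions:

```python
import numpy as np

def ensemble_predict(per_model_probs):
    """per_model_probs: list of arrays of shape (num_candidates,), one per model.
    Returns the index of the candidate maximizing the product of probabilities."""
    log_probs = np.sum([np.log(p + 1e-12) for p in per_model_probs], axis=0)
    return int(np.argmax(log_probs))

rng = np.random.default_rng(1)
models = [rng.dirichlet(np.ones(4)) for _ in range(5)]  # 5 models, 4 candidates
print(ensemble_predict(models))
```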
Ablation study
To help determine the sources of improvements, we perform an ablation study using the publicly available validation set (see Table 3 ). We perform two groups of ablation, one on the embedding layer, to study the effect of ELMo, and one on the edges, to study how different relations affect the overall model performance.
We argue that ELMo is crucial, since we do not rely on any other context encoder. However, it is interesting to explore how our R-GCN performs without it. Therefore, in this experiment, we replace the deep contextualized embeddings of both the query and the nodes with GloVe BIBREF22 vectors (insensitive to context). Since we do not have any component in our model that processes the documents, we expect a drop in performance. In other words, in this ablation our model tries to answer questions without reading the context at all. For example, in Figure 1 , our model would be aware that “Stockholm” and “Sweden” appear in the same document but any context words, including the ones encoding relations (e.g., “is the capital of”) will be hidden. Besides, in the masked case all mentions become `unknown' tokens with GloVe and therefore the predictions are equivalent to a random guess. Once the strong pre-trained encoder is out of the way, we also ablate the use of our R-GCN component, thus completely depriving the model from inductive biases that aim at multi-hop reasoning.
| No |
d05d667822cb49cefd03c24a97721f1fe9dc0f4c | d05d667822cb49cefd03c24a97721f1fe9dc0f4c_0 | Q: How did they get relations between mentions?
Text: Introduction
The long-standing goal of natural language understanding is the development of systems which can acquire knowledge from text collections. Fresh interest in reading comprehension tasks was sparked by the availability of large-scale datasets, such as SQuAD BIBREF1 and CNN/Daily Mail BIBREF2 , enabling end-to-end training of neural models BIBREF3 , BIBREF4 , BIBREF5 . These systems, given a text and a question, need to answer the query relying on the given document. Recently, it has been observed that most questions in these datasets do not require reasoning across the document, but they can be answered relying on information contained in a single sentence BIBREF6 . The last generation of large-scale reading comprehension datasets, such as a NarrativeQA BIBREF7 , TriviaQA BIBREF8 , and RACE BIBREF9 , have been created in such a way as to address this shortcoming and to ensure that systems relying only on local information cannot achieve competitive performance.
Even though these new datasets are challenging and require reasoning within documents, many question answering and search applications require aggregation of information across multiple documents. The WikiHop dataset BIBREF0 was explicitly created to facilitate the development of systems dealing with these scenarios. Each example in WikiHop consists of a collection of documents, a query and a set of candidate answers (Figure 1 ). Though there is no guarantee that a question cannot be answered by relying just on a single sentence, the authors ensure that it is answerable using a chain of reasoning crossing document boundaries.
Though an important practical problem, the multi-hop setting has so far received little attention. The methods reported by BIBREF0 approach the task by merely concatenating all documents into a single long text and training a standard RNN-based reading comprehension model, namely, BiDAF BIBREF3 and FastQA BIBREF6 . Document concatenation in this setting is also used in Weaver BIBREF10 and MHPGM BIBREF11 . The only published paper which goes beyond concatenation is due to BIBREF12 , where they augment RNNs with jump-links corresponding to co-reference edges. Though these edges provide a structural bias, the RNN states are still tasked with passing the information across the document and performing multi-hop reasoning.
Instead, we frame question answering as an inference problem on a graph representing the document collection. Nodes in this graph correspond to named entities in a document whereas edges encode relations between them (e.g., cross- and within-document coreference links or simply co-occurrence in a document). We assume that reasoning chains can be captured by propagating local contextual information along edges in this graph using a graph convolutional network (GCN) BIBREF13 .
The multi-document setting imposes scalability challenges. In realistic scenarios, a system needs to learn to answer a query for a given collection (e.g., Wikipedia or a domain-specific set of documents). In such scenarios one cannot afford to run expensive document encoders (e.g., RNN or transformer-like self-attention BIBREF14 ), unless the computation can be preprocessed both at train and test time. Even if (similarly to WikiHop creators) one considers a coarse-to-fine approach, where a set of potentially relevant documents is provided, re-encoding them in a query-specific way remains the bottleneck. In contrast to other proposed methods (e.g., BIBREF12 , BIBREF10 , BIBREF3 ), we avoid training expensive document encoders.
In our approach, only a small query encoder, the GCN layers and a simple feed-forward answer selection component are learned. Instead of training RNN encoders, we use contextualized embeddings (ELMo) to obtain initial (local) representations of nodes. This implies that only a lightweight computation has to be performed online, both at train and test time, whereas the rest is preprocessed. Even in the somewhat contrived WikiHop setting, where fairly small sets of candidates are provided, the model is at least 5 times faster to train than BiDAF. Interestingly, when we substitute ELMo with simple pre-trained word embeddings, Entity-GCN still performs on par with many techniques that use expensive question-aware recurrent document encoders.
Despite not using recurrent document encoders, the full Entity-GCN model achieves over 2% improvement over the best previously-published results. As our model is efficient, we also reported results of an ensemble which brings further 3.6% of improvement and only 3% below the human performance reported by BIBREF0 . Our contributions can be summarized as follows:
Method
In this section we explain our method. We first introduce the dataset we focus on, WikiHop by BIBREF0 , as well as the task abstraction. We then present the building blocks that make up our Entity-GCN model, namely, an entity graph used to relate mentions to entities within and across documents, a document encoder used to obtain representations of mentions in context, and a relational graph convolutional network that propagates information through the entity graph.
Dataset and task abstraction
The WikiHop dataset comprises of tuples $\langle q, S_q, C_q, a^\star \rangle $ where: $q$ is a query/question, $S_q$ is a set of supporting documents, $C_q$ is a set of candidate answers (all of which are entities mentioned in $S_q$ ), and $a^\star \in C_q$ is the entity that correctly answers the question. WikiHop is assembled assuming that there exists a corpus and a knowledge base (KB) related to each other. The KB contains triples $\langle s, r, o \rangle $ where $s$ is a subject entity, $o$ an object entity, and $r$ a unidirectional relation between them. BIBREF0 used Wikipedia as corpus and Wikidata BIBREF15 as KB. The KB is only used for constructing WikiHop: BIBREF0 retrieved the supporting documents $q$0 from the corpus looking at mentions of subject and object entities in the text. Note that the set $q$1 (not the KB) is provided to the QA system, and not all of the supporting documents are relevant for the query but some of them act as distractors. Queries, on the other hand, are not expressed in natural language, but instead consist of tuples $q$2 where the object entity is unknown and it has to be inferred by reading the support documents. Therefore, answering a query corresponds to finding the entity $q$3 that is the object of a tuple in the KB with subject $q$4 and relation $q$5 among the provided set of candidate answers $q$6 .
The goal is to learn a model that can identify the correct answer $a^\star $ from the set of supporting documents $S_q$ . To that end, we exploit the available supervision to train a neural network that computes scores for candidates in $C_q$ . We estimate the parameters of the architecture by maximizing the likelihood of observations. For prediction, we then output the candidate that achieves the highest probability. In the following, we present our model discussing the design decisions that enable multi-step reasoning and an efficient computation.
Reasoning on an entity graph
In an offline step, we organize the content of each training instance in a graph connecting mentions of candidate answers within and across supporting documents. For a given query $q = \langle s, r, ? \rangle $ , we identify mentions in $S_q$ of the entities in $C_q \cup \lbrace s\rbrace $ and create one node per mention. This process is based on the following heuristic:
we consider mentions spans in $S_q$ exactly matching an element of $C_q \cup \lbrace s\rbrace $ . Admittedly, this is a rather simple strategy which may suffer from low recall.
we use predictions from a coreference resolution system to add mentions of elements in $C_q \cup \lbrace s\rbrace $ beyond exact matching (including both noun phrases and anaphoric pronouns). In particular, we use the end-to-end coreference resolution by BIBREF16 .
we discard mentions which are ambiguously resolved to multiple coreference chains; this may sacrifice recall, but avoids propagating ambiguity.
To each node $v_i$ , we associate a continuous annotation $\mathbf {x}_i \in \mathbb {R}^D$ which represents an entity in the context where it was mentioned (details in Section "Node annotations" ). We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges). Note that MATCH edges when connecting mentions in the same document are mostly included in the set of edges predicted by the coreference system. Having the two types of edges lets us distinguish between less reliable edges provided by the coreference system and more reliable (but also more sparse) edges given by the exact-match heuristic. We treat these three types of connections as three different types of relations. See Figure 2 for an illustration. In addition to that, and to prevent having disconnected graphs, we add a fourth type of relation (COMPLEMENT edge) between any two nodes that are not connected with any of the other relations. We can think of these edges as those in the complement set of the entity graph with respect to a fully connected graph.
Our model then approaches multi-step reasoning by transforming node representations (Section "Node annotations" for details) with a differentiable message passing algorithm that propagates information through the entity graph. The algorithm is parameterized by a graph convolutional network (GCN) BIBREF13 , in particular, we employ relational-GCNs BIBREF17 , an extended version that accommodates edges of different types. In Section "Entity relational graph convolutional network" we describe the propagation rule.
Each step of the algorithm (also referred to as a hop) updates all node representations in parallel. In particular, a node is updated as a function of messages from its direct neighbours, and a message is possibly specific to a certain relation. At the end of the first step, every node is aware of every other node it connects directly to. Besides, the neighbourhood of a node may include mentions of the same entity as well as others (e.g., same-document relation), and these mentions may have occurred in different documents. Taking this idea recursively, each further step of the algorithm allows a node to indirectly interact with nodes already known to their neighbours. After $L$ layers of R-GCN, information has been propagated through paths connecting up to $L+1$ nodes.
We start with node representations $\lbrace \mathbf {h}_i^{(0)}\rbrace _{i=1}^N$ , and transform them by applying $L$ layers of R-GCN obtaining $\lbrace \mathbf {h}_i^{(L)}\rbrace _{i=1}^N$ . Together with a representation $\mathbf {q}$ of the query, we define a distribution over candidate answers and we train maximizing the likelihood of observations. The probability of selecting a candidate $c \in C_q$ as an answer is then
$$ P(c|q, C_q, S_q) \propto \exp \left(\max _{i \in \mathcal {M}_c} f_o([\mathbf {q}, \mathbf {h}^{(L)}_i]) \right)\;,$$ (Eq. 16)
where $f_o$ is a parameterized affine transformation, and $\mathcal {M}_c$ is the set of node indices such that $i\in \mathcal {M}_c$ only if node $v_i$ is a mention of $c$ . The $\max $ operator in Equation 16 is necessary to select the node with highest predicted probability since a candidate answer is realized in multiple locations via different nodes.
Node annotations
Keeping in mind we want an efficient model, we encode words in supporting documents and in the query using only a pre-trained model for contextualized word representations rather than training our own encoder. Specifically, we use ELMo BIBREF20 , a pre-trained bi-directional language model that relies on character-based input representation. ELMo representations, differently from other pre-trained word-based models (e.g., word2vec BIBREF21 or GloVe BIBREF22 ), are contextualized since each token representation depends on the entire text excerpt (i.e., the whole sentence).
We choose not to fine tune nor propagate gradients through the ELMo architecture, as it would have defied the goal of not having specialized RNN encoders. In the experiments, we will also ablate the use of ELMo showing how our model behaves using non-contextualized word representations (we use GloVe).
ELMo encodings are used to produce a set of representations $\lbrace \mathbf {x}_i\rbrace _{i=1}^N$ , where $\mathbf {x}_i \in \mathbb {R}^D$ denotes the $i$ th candidate mention in context. Note that these representations do not depend on the query yet and no trainable model was used to process the documents so far, that is, we use ELMo as a fixed pre-trained encoder. Therefore, we can pre-compute representation of mentions once and store them for later use.
ELMo encodings are used to produce a query representation $\mathbf {q} \in \mathbb {R}^K$ as well. Here, $\mathbf {q}$ is a concatenation of the final outputs from a bidirectional RNN layer trained to re-encode ELMo representations of words in the query. The vector $\mathbf {q}$ is used to compute a query-dependent representation of mentions $\lbrace \mathbf { \hat{x}}_i\rbrace _{i=1}^N$ as well as to compute a probability distribution over candidates (as in Equation 16 ). Query-dependent mention encodings $\mathbf {\hat{x}}_i = f_x(\mathbf {q}, \mathbf {x}_i)$ are generated by a trainable function $f_x$ which is parameterized by a feed-forward neural network.
Entity relational graph convolutional network
Our model uses a gated version of the original R-GCN propagation rule. At the first layer, all hidden node representation are initialized with the query-aware encodings $\mathbf {h}_i^{(0)} = \mathbf {\hat{x}}_i$ . Then, at each layer $0\le \ell \le L$ , the update message $\mathbf {u}_i^{(\ell )}$ to the $i$ th node is a sum of a transformation $f_s$ of the current node representation $\mathbf {h}^{(\ell )}_i$ and transformations of its neighbours:
$$\mathbf {u}^{(\ell )}_i = f_s(\mathbf {h}^{(\ell )}_i) + \frac{1}{|\mathcal {N}_i|} \sum _{j \in \mathcal {N}_i} \sum _{r \in \mathcal {R}_{ij}} f_r(\mathbf {h}_j^{(\ell )})\;,$$ (Eq. 22)
where $\mathcal {N}_i$ is the set of indices of nodes neighbouring the $i$ th node, $\mathcal {R}_{ij}$ is the set of edge annotations between $i$ and $j$ , and $f_r$ is a parametrized function specific to an edge type $r\in \mathcal {R}$ . Recall the available relations from Section "Ablation study" , namely, $\mathcal {R} =\lbrace $ DOC-BASED, MATCH, COREF, COMPLEMENT $\rbrace $ .
A gating mechanism regulates how much of the update message propagates to the next step. This provides the model a way to prevent completely overwriting past information. Indeed, if all necessary information to answer a question is present at a layer which is not the last, then the model should learn to stop using neighbouring information for the next steps. Gate levels are computed as
$$\mathbf {a}^{(\ell )}_i = \sigma \left( f_a\left([\mathbf {u}^{(\ell )}_i, \mathbf {h}^{(\ell )}_i ]\right) \right) \;,$$ (Eq. 23)
where $\sigma (\cdot )$ is the sigmoid function and $f_a$ a parametrized transformation. Ultimately, the updated representation is a gated combination of the previous representation and a non-linear transformation of the update message:
$$\mathbf {h}^{(\ell + 1)}_i = \phi (\mathbf {u}^{(\ell )}_i) \odot \mathbf {a}^{(\ell )}_i + \mathbf {h}^{(\ell )}_i \odot (1 - \mathbf {a}^{(\ell )}_i ) \;,$$ (Eq. 24)
where $\phi (\cdot )$ is any nonlinear function (we used $\tanh $ ) and $\odot $ stands for element-wise multiplication. All transformations $f_*$ are affine and they are not layer-dependent (since we would like to use as few parameters as possible to decrease model complexity promoting efficiency and scalability).
Experiments
In this section, we compare our method against recent work as well as preforming an ablation study using the WikiHop dataset BIBREF0 . See Appendix "Implementation and experiments details" in the supplementary material for a description of the hyper-parameters of our model and training details.
Comparison
In this experiment, we compare our Enitity-GCN against recent prior work on the same task. We present test and development results (when present) for both versions of the dataset in Table 2 . From BIBREF0 , we list an oracle based on human performance as well as two standard reading comprehension models, namely BiDAF BIBREF3 and FastQA BIBREF6 . We also compare against Coref-GRU BIBREF12 , MHPGM BIBREF11 , and Weaver BIBREF10 . Additionally, we include results of MHQA-GRN BIBREF23 , from a recent arXiv preprint describing concurrent work. They jointly train graph neural networks and recurrent encoders. We report single runs of our two best single models and an ensemble one on the unmasked test set (recall that the test set is not publicly available and the task organizers only report unmasked results) as well as both versions of the validation set.
Entity-GCN (best single model without coreference edges) outperforms all previous work by over 2% points. We additionally re-ran BiDAF baseline to compare training time: when using a single Titan X GPU, BiDAF and Entity-GCN process 12.5 and 57.8 document sets per second, respectively. Note that BIBREF0 had to use BiDAF with very small state dimensionalities (20), and smaller batch size due to the scalability issues (both memory and computation costs). We compare applying the same reductions. Eventually, we also report an ensemble of 5 independently trained models. All models are trained on the same dataset splits with different weight initializations. The ensemble prediction is obtained as $\arg \max \limits _c \prod \limits _{i=1}^5 P_i(c|q, C_q, S_q)$ from each model.
Ablation study
To help determine the sources of improvements, we perform an ablation study using the publicly available validation set (see Table 3 ). We perform two groups of ablation, one on the embedding layer, to study the effect of ELMo, and one on the edges, to study how different relations affect the overall model performance.
We argue that ELMo is crucial, since we do not rely on any other context encoder. However, it is interesting to explore how our R-GCN performs without it. Therefore, in this experiment, we replace the deep contextualized embeddings of both the query and the nodes with GloVe BIBREF22 vectors (insensitive to context). Since we do not have any component in our model that processes the documents, we expect a drop in performance. In other words, in this ablation our model tries to answer questions without reading the context at all. For example, in Figure 1 , our model would be aware that “Stockholm” and “Sweden” appear in the same document but any context words, including the ones encoding relations (e.g., “is the capital of”) will be hidden. Besides, in the masked case all mentions become `unknown' tokens with GloVe and therefore the predictions are equivalent to a random guess. Once the strong pre-trained encoder is out of the way, we also ablate the use of our R-GCN component, thus completely depriving the model of inductive biases that aim at multi-hop reasoning.
The first important observation is that replacing ELMo by GloVe (GloVe with R-GCN in Table 3 ) still yields a competitive system that ranks far above baselines from BIBREF0 and even above the Coref-GRU of BIBREF12 , in terms of accuracy on (unmasked) validation set. The second important observation is that if we then remove R-GCN (GloVe w/o R-GCN in Table 3 ), we lose 8.0 points. That is, the R-GCN component pushes the model to perform above Coref-GRU still without accessing context, but rather by updating mention representations based on their relation to other ones. These results highlight the impact of our R-GCN component.
In this experiment we investigate the effect of the different relations available in the entity graph and processed by the R-GCN module. We start off by testing our stronger encoder (i.e., ELMo) in the absence of edges connecting mentions in the supporting documents (i.e., using only self-loops – No R-GCN in Table 3 ). The results suggest that WikiHop genuinely requires multi-hop inference, as our best model is 6.1% and 8.4% more accurate than this local model, in unmasked and masked settings, respectively. However, it also shows that ELMo representations capture predictive context features, without being explicitly trained for the task. It confirms that our goal of getting away without training expensive document encoders is a realistic one.
We then inspect our model's effectiveness in making use of the structure encoded in the graph. We start naively by fully-connecting all nodes within and across documents without distinguishing edges by type (No relation types in Table 3 ). We observe only marginal improvements with respect to ELMo alone (No R-GCN in Table 3 ) in both the unmasked and masked setting suggesting that a GCN operating over a naive entity graph would not add much to this task and a more informative graph construction and/or a more sophisticated parameterization is indeed needed.
Next, we ablate each type of relation independently, that is, we either remove connections of mentions that co-occur in the same document (DOC-BASED), connections between mentions matching exactly (MATCH), or edges predicted by the coreference system (COREF). The first thing to note is that the model makes better use of DOC-BASED connections than MATCH or COREF connections. This is mostly because i) the majority of the connections are indeed between mentions in the same document, and ii) without connecting mentions within the same document we remove important information since the model is unaware they appear closely in the document. Secondly, we notice that coreference links and complement edges seem to play a more marginal role. Though it may be surprising for coreference edges, recall that the MATCH heuristic already captures the easiest coreference cases, and for the rest the out-of-domain coreference system may not be reliable. Still, modelling all these different relations together gives our Entity-GCN a clear advantage. This is our best system when evaluating on the development set. Since Entity-GCN seems to gain little advantage using the coreference system, we report test results both with and without using it. Surprisingly, with coreference, we observe performance degradation on the test set. It is likely that the test documents are harder for the coreference system.
We do perform one last ablation, namely, we replace our heuristic for assigning edges and their labels by a model component that predicts them. The last row of Table 3 (Induced edges) shows model performance when edges are not predetermined but predicted. For this experiment, we use a bilinear function $f_e(\mathbf {\hat{x}}_i, \mathbf {\hat{x}}_j) = \sigma \left( \mathbf {\hat{x}}^\top _i \mathbf {W}_e \mathbf {\hat{x}}_j \right)$ that predicts the importance of a single edge connecting two nodes $i,j$ using the query-dependent representation of mentions (see Section "Node annotations" ). The performance drops below `No R-GCN' suggesting that it cannot learn these dependencies on its own.
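For completeness, the bilinear scorer used in this ablation takes only a few lines; a sketch follows (the class name and wiring are ours):

```python
import torch
import torch.nn as nn

class BilinearEdgeScorer(nn.Module):
    """Computes sigma(x_i^T W_e x_j) for every pair of query-aware mention encodings."""
    def __init__(self, dim):
        super().__init__()
        self.w_e = nn.Linear(dim, dim, bias=False)   # realizes the matrix W_e

    def forward(self, x_hat):
        # x_hat: (num_nodes, dim) query-dependent mention representations
        scores = x_hat @ self.w_e(x_hat).transpose(0, 1)   # (i, j) entry is x_i^T W_e x_j
        return torch.sigmoid(scores)                       # soft edge weights in [0, 1]
```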
Most results are stronger for the masked settings even though we do not apply the coreference resolution system in this setting due to masking. It is not surprising as coreferred mentions are labeled with the same identifier in the masked version, even if their original surface forms did not match ( BIBREF0 used Wikipedia links for masking). Indeed, in the masked version, an entity is always referred to via the same unique surface form (e.g., MASK1) within and across documents. In the unmasked setting, on the other hand, mentions to an entity may differ (e.g., “US” vs “United States”) and they might not be retrieved by the coreference system we are employing, making the task harder for all models. Therefore, as we rely mostly on exact matching when constructing our graph for the masked case, we are more effective in recovering coreference links on the masked rather than unmasked version.
In Figure 3 , we show how the model behaves when the input graph is large, in particular, how Entity-GCN performs as the number of candidate answers or the number of nodes increases.
Error analysis
In this section we provide an error analysis for our best single model predictions. First of all, we look at which types of questions our model answers well or poorly. There are more than 150 query types in the validation set, but we selected the three with the best and the three with the worst accuracy among those with at least 50 supporting documents and at least 5 candidates. We show results in Table 4 . We observe that questions regarding places (birth and death) are harder for Entity-GCN. We then inspect samples where our model fails while assigning highest likelihood and notice two principal sources of failure: i) a mismatch between what is written in Wikipedia and what is annotated in Wikidata, and ii) a different degree of granularity (e.g., born in “London” vs “UK” could be considered both correct by a human but not when measuring accuracy). See Table 6 in the supplementary material for some reported samples.
Secondly, we study how the model performance degrades when the input graph is large. In particular, we observe a negative Pearson's correlation (-0.687) between accuracy and the number of candidate answers. However, the performance does not decrease steeply. The distribution of the number of candidates in the dataset peaks at 5 and has an average of approximately 20. Therefore, the model does not see many samples where there are a large number of candidate entities during training. In contrast, we notice that as the number of nodes in the graph increases, the model performance drops but more gently (negative but closer to zero Pearson's correlation). This is important as document sets can be large in practical applications. See Figure 3 in the supplementary material for plots.
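This analysis is easy to reproduce given per-example predictions; a sketch using scipy, where we assume correctness is recorded as a 0/1 flag per validation example (the exact aggregation used in the paper may differ):

```python
from scipy.stats import pearsonr

def correlation_with_size(correct_flags, sizes):
    """correct_flags: 0/1 per validation example (prediction correct or not).
    sizes: matching list of sizes, e.g. number of candidate answers or of graph nodes.
    Returns Pearson's r and the associated p-value."""
    return pearsonr(correct_flags, sizes)

# The paper reports r = -0.687 between accuracy and the number of candidate answers,
# and a weaker (closer to zero) negative correlation with the number of nodes.
```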
In Table 6 , we report three samples from the WikiHop development set where our Entity-GCN fails. In particular, we show two instances where our model assigns high confidence to its answer, and one where it does not. We comment on these samples, explaining why our model might fail in these cases.
Related work
In previous work, BiDAF BIBREF3 , FastQA BIBREF6 , Coref-GRU BIBREF12 , MHPGM BIBREF11 , and Weaver / Jenga BIBREF10 have been applied to multi-document question answering. The first two mainly focus on single-document QA and BIBREF0 adapted both of them to work with WikiHop. They process each instance of the dataset by concatenating all $d \in S_q$ in a random order adding document separator tokens. They train using the first answer mention in the concatenated document and evaluate exact match at test time. Coref-GRU, similarly to us, encodes relations between entity mentions in the document. Instead of using graph neural network layers, as we do, they augment RNNs with jump links corresponding to pairs of coreferred mentions. MHPGM uses a multi-attention mechanism in combination with external commonsense relations to perform multiple hops of reasoning. Weaver is a deep co-encoding model that uses several alternating bi-LSTMs to process the concatenated documents and the query.
Graph neural networks have been shown to be successful on a number of NLP tasks BIBREF24 , BIBREF25 , BIBREF26 , including those involving document level modeling BIBREF27 . They have also been applied in the context of asking questions about knowledge contained in a knowledge base BIBREF28 . In BIBREF17 , GCNs are used to capture reasoning chains in a knowledge base. Our work and unpublished concurrent work by BIBREF23 are the first to study graph neural networks in the context of multi-document QA. Besides differences in the architecture, BIBREF23 propose to train a combination of a graph recurrent network and an RNN encoder. We do not train any RNN document encoders in this work.
Conclusion
We designed a graph neural network that operates over a compact graph representation of a set of documents where nodes are mentions of entities and edges signal relations such as within- and cross-document coreference. The model learns to answer questions by gathering evidence from different documents via a differentiable message passing algorithm that updates node representations based on their neighbourhood. Our model outperforms published results, and ablations show substantial evidence in favour of multi-step reasoning. Moreover, we make the model fast by using pre-trained (contextual) embeddings.
Acknowledgments
We would like to thank Johannes Welbl for helping to test our system on WikiHop. This project is supported by SAP Innovation Center Network, ERC Starting Grant BroadSem (678254) and the Dutch Organization for Scientific Research (NWO) VIDI 639.022.518. Wilker Aziz is supported by the Dutch Organisation for Scientific Research (NWO) VICI Grant nr. 277-89-002.
Architecture
See Table 5 for an outline of the Entity-GCN architecture. The computational steps are as follows (a schematic sketch of these components follows the list):
ELMo embeddings are a concatenation of three 1024-dimensional vectors resulting in 3072-dimensional input vectors $\lbrace \mathbf {x}_i\rbrace _{i=1}^N$ .
For the query representation $\mathbf {q}$ , we apply 2 bi-LSTM layers of 256 and 128 hidden units to its ELMo vectors. The concatenation of the forward and backward states results in a 256-dimensional question representation.
ELMo embeddings of candidates are projected to 256-dimensional vectors, concatenated to $\mathbf {q}$ , and further transformed with a two-layer MLP with 1024 and 512 hidden units into 512-dimensional query-aware entity representations $\lbrace \mathbf {\hat{x}}_i\rbrace _{i=1}^N \in \mathbb {R}^{512}$ .
All transformations $f_*$ in the R-GCN layers are affine and keep the input and output dimensionality of node representations the same (512-dimensional).
Finally, a two-layer MLP with [256, 128] hidden units takes the concatenation of $\lbrace \mathbf {h}_i^{(L)}\rbrace _{i=1}^N$ and $\mathbf {q}$ to predict the probability that a candidate node $v_i$ may be the answer to the query $q$ (see Equation 16 ).
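A schematic sketch of how these dimensions fit together, using PyTorch modules; layer boundaries and nonlinearities are assumptions where the text leaves them unspecified, and the R-GCN stack sketched earlier sits between `f_x` and `f_o`:

```python
import torch.nn as nn

ELMO_DIM = 3072   # concatenation of three 1024-dimensional ELMo vectors

# Query encoder: two bi-LSTM layers (256 and 128 units) over ELMo vectors;
# concatenating the final forward/backward states gives a 256-d query vector q.
query_lstm_1 = nn.LSTM(ELMO_DIM, 256, bidirectional=True, batch_first=True)
query_lstm_2 = nn.LSTM(512, 128, bidirectional=True, batch_first=True)

# Candidate mentions: project ELMo to 256-d, concatenate with q (256-d), then a
# two-layer MLP (1024, 512) producing 512-d query-aware node representations.
mention_proj = nn.Linear(ELMO_DIM, 256)
f_x = nn.Sequential(nn.Linear(256 + 256, 1024), nn.ReLU(),
                    nn.Linear(1024, 512), nn.ReLU())

# Output: concatenate final node states h_i^(L) (512-d) with q (256-d); a two-layer
# MLP with [256, 128] hidden units produces one score per node (Equation 16).
f_o = nn.Sequential(nn.Linear(512 + 256, 256), nn.ReLU(),
                    nn.Linear(256, 128), nn.ReLU(),
                    nn.Linear(128, 1))
```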
During preliminary trials, we experimented with different numbers of R-GCN layers (in the range 1-7). We observed that with WikiHop, for $L \ge 3$ models reach essentially the same performance, but more layers increase the time required to train them. Besides, we observed that the gating mechanism learns to keep more and more information from the past at each layer, making it unnecessary to have more layers than required.
Training details
We train our models with a batch size of 32 for at most 20 epochs using the Adam optimizer BIBREF29 with $\beta _1=0.9$ , $\beta _2=0.999$ and a learning rate of $10^{-4}$ . To help against overfitting, we employ dropout (drop rate $\in \lbrace 0, 0.1, 0.15, 0.2, 0.25\rbrace $ ) BIBREF30 and early-stopping on validation accuracy. We report the best results of each experiment based on accuracy on the validation set. | Assign a value to the relation based on whether mentions occur in the same document, if mentions are identical, or if mentions are in the same coreference chain. |
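A minimal sketch of the corresponding training loop; `model`, `train_batches` and `evaluate_accuracy` are placeholders rather than names from any released code, and the patience value is our own choice (the text only states that early stopping on validation accuracy is used):

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
best_val_acc, bad_epochs, patience = 0.0, 0, 3
for epoch in range(20):                      # at most 20 epochs
    model.train()
    for batch in train_batches:              # batch size 32; dropout is active inside the model
        optimizer.zero_grad()
        loss = -model.log_likelihood(batch)  # maximize the likelihood of the gold candidate
        loss.backward()
        optimizer.step()
    val_acc = evaluate_accuracy(model)       # accuracy on the validation set
    if val_acc > best_val_acc:
        best_val_acc, bad_epochs = val_acc, 0
        torch.save(model.state_dict(), "best_model.pt")
    else:
        bad_epochs += 1
        if bad_epochs >= patience:           # early stopping on validation accuracy
            break
```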
2a1e6a69e06da2328fc73016ee057378821e0754 | 2a1e6a69e06da2328fc73016ee057378821e0754_0 | Q: How did they detect entity mentions?
Text: Introduction
The long-standing goal of natural language understanding is the development of systems which can acquire knowledge from text collections. Fresh interest in reading comprehension tasks was sparked by the availability of large-scale datasets, such as SQuAD BIBREF1 and CNN/Daily Mail BIBREF2 , enabling end-to-end training of neural models BIBREF3 , BIBREF4 , BIBREF5 . These systems, given a text and a question, need to answer the query relying on the given document. Recently, it has been observed that most questions in these datasets do not require reasoning across the document, but they can be answered relying on information contained in a single sentence BIBREF6 . The last generation of large-scale reading comprehension datasets, such as NarrativeQA BIBREF7 , TriviaQA BIBREF8 , and RACE BIBREF9 , have been created in such a way as to address this shortcoming and to ensure that systems relying only on local information cannot achieve competitive performance.
Even though these new datasets are challenging and require reasoning within documents, many question answering and search applications require aggregation of information across multiple documents. The WikiHop dataset BIBREF0 was explicitly created to facilitate the development of systems dealing with these scenarios. Each example in WikiHop consists of a collection of documents, a query and a set of candidate answers (Figure 1 ). Though there is no guarantee that a question cannot be answered by relying just on a single sentence, the authors ensure that it is answerable using a chain of reasoning crossing document boundaries.
Though an important practical problem, the multi-hop setting has so far received little attention. The methods reported by BIBREF0 approach the task by merely concatenating all documents into a single long text and training a standard RNN-based reading comprehension model, namely, BiDAF BIBREF3 and FastQA BIBREF6 . Document concatenation in this setting is also used in Weaver BIBREF10 and MHPGM BIBREF11 . The only published paper which goes beyond concatenation is due to BIBREF12 , where they augment RNNs with jump-links corresponding to co-reference edges. Though these edges provide a structural bias, the RNN states are still tasked with passing the information across the document and performing multi-hop reasoning.
Instead, we frame question answering as an inference problem on a graph representing the document collection. Nodes in this graph correspond to named entities in a document whereas edges encode relations between them (e.g., cross- and within-document coreference links or simply co-occurrence in a document). We assume that reasoning chains can be captured by propagating local contextual information along edges in this graph using a graph convolutional network (GCN) BIBREF13 .
The multi-document setting imposes scalability challenges. In realistic scenarios, a system needs to learn to answer a query for a given collection (e.g., Wikipedia or a domain-specific set of documents). In such scenarios one cannot afford to run expensive document encoders (e.g., RNN or transformer-like self-attention BIBREF14 ), unless the computation can be preprocessed both at train and test time. Even if (similarly to WikiHop creators) one considers a coarse-to-fine approach, where a set of potentially relevant documents is provided, re-encoding them in a query-specific way remains the bottleneck. In contrast to other proposed methods (e.g., BIBREF12 , BIBREF10 , BIBREF3 ), we avoid training expensive document encoders.
In our approach, only a small query encoder, the GCN layers and a simple feed-forward answer selection component are learned. Instead of training RNN encoders, we use contextualized embeddings (ELMo) to obtain initial (local) representations of nodes. This implies that only a lightweight computation has to be performed online, both at train and test time, whereas the rest is preprocessed. Even in the somewhat contrived WikiHop setting, where fairly small sets of candidates are provided, the model is at least 5 times faster to train than BiDAF. Interestingly, when we substitute ELMo with simple pre-trained word embeddings, Entity-GCN still performs on par with many techniques that use expensive question-aware recurrent document encoders.
Despite not using recurrent document encoders, the full Entity-GCN model achieves over 2% improvement over the best previously-published results. As our model is efficient, we also report results of an ensemble which brings a further 3.6% improvement and is only 3% below the human performance reported by BIBREF0 . Our contributions can be summarized as follows:
Method
In this section we explain our method. We first introduce the dataset we focus on, WikiHop by BIBREF0 , as well as the task abstraction. We then present the building blocks that make up our Entity-GCN model, namely, an entity graph used to relate mentions to entities within and across documents, a document encoder used to obtain representations of mentions in context, and a relational graph convolutional network that propagates information through the entity graph.
Dataset and task abstraction
The WikiHop dataset comprises tuples $\langle q, S_q, C_q, a^\star \rangle $ where: $q$ is a query/question, $S_q$ is a set of supporting documents, $C_q$ is a set of candidate answers (all of which are entities mentioned in $S_q$ ), and $a^\star \in C_q$ is the entity that correctly answers the question. WikiHop is assembled assuming that there exists a corpus and a knowledge base (KB) related to each other. The KB contains triples $\langle s, r, o \rangle $ where $s$ is a subject entity, $o$ an object entity, and $r$ a unidirectional relation between them. BIBREF0 used Wikipedia as corpus and Wikidata BIBREF15 as KB. The KB is only used for constructing WikiHop: BIBREF0 retrieved the supporting documents $S_q$ from the corpus looking at mentions of subject and object entities in the text. Note that the set $S_q$ (not the KB) is provided to the QA system, and not all of the supporting documents are relevant for the query but some of them act as distractors. Queries, on the other hand, are not expressed in natural language, but instead consist of tuples $\langle s, r, ? \rangle $ where the object entity is unknown and it has to be inferred by reading the support documents. Therefore, answering a query corresponds to finding the entity $a^\star $ that is the object of a tuple in the KB with subject $s$ and relation $r$ among the provided set of candidate answers $C_q$ .
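For concreteness, a single WikiHop instance can be pictured as the following record; the field names mirror the released JSON format as we understand it, and the values are invented for illustration:

```python
wikihop_example = {
    "id": "WH_train_42",                                # illustrative identifier
    "query": "country_of_citizenship juan carlos",      # relation r followed by subject s
    "supports": [                                       # the set S_q of supporting documents
        "Juan Carlos I reigned as King of Spain from 1975 ...",
        "Spain is a country in southwestern Europe ...",
    ],
    "candidates": ["spain", "portugal", "france"],      # candidate answers C_q
    "answer": "spain",                                  # gold object entity a*
}
```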
The goal is to learn a model that can identify the correct answer $a^\star $ from the set of supporting documents $S_q$ . To that end, we exploit the available supervision to train a neural network that computes scores for candidates in $C_q$ . We estimate the parameters of the architecture by maximizing the likelihood of observations. For prediction, we then output the candidate that achieves the highest probability. In the following, we present our model discussing the design decisions that enable multi-step reasoning and an efficient computation.
Reasoning on an entity graph
In an offline step, we organize the content of each training instance in a graph connecting mentions of candidate answers within and across supporting documents. For a given query $q = \langle s, r, ? \rangle $ , we identify mentions in $S_q$ of the entities in $C_q \cup \lbrace s\rbrace $ and create one node per mention. This process is based on the following heuristic:
we consider mention spans in $S_q$ exactly matching an element of $C_q \cup \lbrace s\rbrace $ . Admittedly, this is a rather simple strategy which may suffer from low recall.
we use predictions from a coreference resolution system to add mentions of elements in $C_q \cup \lbrace s\rbrace $ beyond exact matching (including both noun phrases and anaphoric pronouns). In particular, we use the end-to-end coreference resolution by BIBREF16 .
we discard mentions which are ambiguously resolved to multiple coreference chains; this may sacrifice recall, but avoids propagating ambiguity.
To each node $v_i$ , we associate a continuous annotation $\mathbf {x}_i \in \mathbb {R}^D$ which represents an entity in the context where it was mentioned (details in Section "Node annotations" ). We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges). Note that MATCH edges when connecting mentions in the same document are mostly included in the set of edges predicted by the coreference system. Having the two types of edges lets us distinguish between less reliable edges provided by the coreference system and more reliable (but also more sparse) edges given by the exact-match heuristic. We treat these three types of connections as three different types of relations. See Figure 2 for an illustration. In addition to that, and to prevent having disconnected graphs, we add a fourth type of relation (COMPLEMENT edge) between any two nodes that are not connected with any of the other relations. We can think of these edges as those in the complement set of the entity graph with respect to a fully connected graph.
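The graph construction itself is purely rule-based; a sketch is given below, where `mentions` is assumed to hold, for each node, the document it came from, its (normalized) surface string, and the identifier of its predicted coreference chain (or None):

```python
from itertools import combinations

DOC_BASED, MATCH, COREF, COMPLEMENT = range(4)

def build_edges(mentions):
    """mentions: list of (doc_id, surface_string, coref_chain_id_or_None), one per node.
    Returns a list of (i, j, relation_type) tuples over unordered node pairs."""
    edges = []
    for i, j in combinations(range(len(mentions)), 2):
        doc_i, ent_i, chain_i = mentions[i]
        doc_j, ent_j, chain_j = mentions[j]
        rels = set()
        if doc_i == doc_j:
            rels.add(DOC_BASED)                  # co-occur in the same document
        if ent_i == ent_j:
            rels.add(MATCH)                      # identical mention strings
        if chain_i is not None and chain_i == chain_j:
            rels.add(COREF)                      # same predicted coreference chain
        if not rels:
            rels.add(COMPLEMENT)                 # fallback edge keeping the graph connected
        edges.extend((i, j, r) for r in rels)
    return edges
```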
Our model then approaches multi-step reasoning by transforming node representations (Section "Node annotations" for details) with a differentiable message passing algorithm that propagates information through the entity graph. The algorithm is parameterized by a graph convolutional network (GCN) BIBREF13 , in particular, we employ relational-GCNs BIBREF17 , an extended version that accommodates edges of different types. In Section "Entity relational graph convolutional network" we describe the propagation rule.
Each step of the algorithm (also referred to as a hop) updates all node representations in parallel. In particular, a node is updated as a function of messages from its direct neighbours, and a message is possibly specific to a certain relation. At the end of the first step, every node is aware of every other node it connects directly to. Besides, the neighbourhood of a node may include mentions of the same entity as well as others (e.g., same-document relation), and these mentions may have occurred in different documents. Taking this idea recursively, each further step of the algorithm allows a node to indirectly interact with nodes already known to their neighbours. After $L$ layers of R-GCN, information has been propagated through paths connecting up to $L+1$ nodes.
We start with node representations $\lbrace \mathbf {h}_i^{(0)}\rbrace _{i=1}^N$ , and transform them by applying $L$ layers of R-GCN obtaining $\lbrace \mathbf {h}_i^{(L)}\rbrace _{i=1}^N$ . Together with a representation $\mathbf {q}$ of the query, we define a distribution over candidate answers and we train maximizing the likelihood of observations. The probability of selecting a candidate $c \in C_q$ as an answer is then
$$ P(c|q, C_q, S_q) \propto \exp \left(\max _{i \in \mathcal {M}_c} f_o([\mathbf {q}, \mathbf {h}^{(L)}_i]) \right)\;,$$ (Eq. 16)
where $f_o$ is a parameterized affine transformation, and $\mathcal {M}_c$ is the set of node indices such that $i\in \mathcal {M}_c$ only if node $v_i$ is a mention of $c$ . The $\max $ operator in Equation 16 is necessary to select the node with highest predicted probability since a candidate answer is realized in multiple locations via different nodes.
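A sketch of this scoring step: the score of a candidate is the maximum over the scores of its mention nodes, and a softmax over candidates yields the distribution that is trained with maximum likelihood (function and variable names are ours):

```python
import torch

def candidate_distribution(node_scores, candidate_to_nodes):
    """node_scores: (num_nodes,) tensor with f_o([q, h_i^(L)]) for every mention node.
    candidate_to_nodes: list with, for each candidate c, the node indices M_c of its mentions.
    Returns P(c | q, C_q, S_q) as a (num_candidates,) tensor (Equation 16)."""
    logits = torch.stack([node_scores[torch.tensor(nodes)].max()
                          for nodes in candidate_to_nodes])
    return torch.softmax(logits, dim=0)

# e.g. candidate_distribution(torch.tensor([1.2, -0.3, 0.8]), [[0, 2], [1]])
```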
Node annotations
Keeping in mind we want an efficient model, we encode words in supporting documents and in the query using only a pre-trained model for contextualized word representations rather than training our own encoder. Specifically, we use ELMo BIBREF20 , a pre-trained bi-directional language model that relies on character-based input representation. ELMo representations, differently from other pre-trained word-based models (e.g., word2vec BIBREF21 or GloVe BIBREF22 ), are contextualized since each token representation depends on the entire text excerpt (i.e., the whole sentence).
We choose not to fine-tune nor propagate gradients through the ELMo architecture, as this would have defeated the goal of not having specialized RNN encoders. In the experiments, we will also ablate the use of ELMo, showing how our model behaves using non-contextualized word representations (we use GloVe).
ELMo encodings are used to produce a set of representations $\lbrace \mathbf {x}_i\rbrace _{i=1}^N$ , where $\mathbf {x}_i \in \mathbb {R}^D$ denotes the $i$ th candidate mention in context. Note that these representations do not depend on the query yet and no trainable model was used to process the documents so far, that is, we use ELMo as a fixed pre-trained encoder. Therefore, we can pre-compute representation of mentions once and store them for later use.
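Since no trainable component touches the documents, mention vectors can be computed once and cached; a trivial caching sketch, where `encode_tokens` stands in for whatever frozen contextual encoder is used and averaging over the mention span is our own simplification:

```python
document_cache = {}   # doc_id -> (num_tokens, 3072) tensor of frozen token vectors

def mention_vector(doc_id, tokens, span, encode_tokens):
    """tokens: the document as a token list; span: (start, end) token indices of a mention.
    encode_tokens: frozen function mapping a token list to a (len(tokens), 3072) tensor."""
    if doc_id not in document_cache:
        document_cache[doc_id] = encode_tokens(tokens)   # run the frozen encoder once per document
    token_vecs = document_cache[doc_id]
    return token_vecs[span[0]:span[1]].mean(dim=0)       # x_i; span-averaging is our simplification
```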
ELMo encodings are used to produce a query representation $\mathbf {q} \in \mathbb {R}^K$ as well. Here, $\mathbf {q}$ is a concatenation of the final outputs from a bidirectional RNN layer trained to re-encode ELMo representations of words in the query. The vector $\mathbf {q}$ is used to compute a query-dependent representation of mentions $\lbrace \mathbf { \hat{x}}_i\rbrace _{i=1}^N$ as well as to compute a probability distribution over candidates (as in Equation 16 ). Query-dependent mention encodings $\mathbf {\hat{x}}_i = f_x(\mathbf {q}, \mathbf {x}_i)$ are generated by a trainable function $f_x$ which is parameterized by a feed-forward neural network.
Entity relational graph convolutional network
Our model uses a gated version of the original R-GCN propagation rule. At the first layer, all hidden node representations are initialized with the query-aware encodings $\mathbf {h}_i^{(0)} = \mathbf {\hat{x}}_i$ . Then, at each layer $0\le \ell \le L$ , the update message $\mathbf {u}_i^{(\ell )}$ to the $i$ th node is a sum of a transformation $f_s$ of the current node representation $\mathbf {h}^{(\ell )}_i$ and transformations of its neighbours:
$$\mathbf {u}^{(\ell )}_i = f_s(\mathbf {h}^{(\ell )}_i) + \frac{1}{|\mathcal {N}_i|} \sum _{j \in \mathcal {N}_i} \sum _{r \in \mathcal {R}_{ij}} f_r(\mathbf {h}_j^{(\ell )})\;,$$ (Eq. 22)
where $\mathcal {N}_i$ is the set of indices of nodes neighbouring the $i$ th node, $\mathcal {R}_{ij}$ is the set of edge annotations between $i$ and $j$ , and $f_r$ is a parametrized function specific to an edge type $r\in \mathcal {R}$ . Recall the available relations from Section "Reasoning on an entity graph" , namely, $\mathcal {R} =\lbrace $ DOC-BASED, MATCH, COREF, COMPLEMENT $\rbrace $ .
A gating mechanism regulates how much of the update message propagates to the next step. This provides the model a way to prevent completely overwriting past information. Indeed, if all necessary information to answer a question is present at a layer which is not the last, then the model should learn to stop using neighbouring information for the next steps. Gate levels are computed as
$$\mathbf {a}^{(\ell )}_i = \sigma \left( f_a\left([\mathbf {u}^{(\ell )}_i, \mathbf {h}^{(\ell )}_i ]\right) \right) \;,$$ (Eq. 23)
where $\sigma (\cdot )$ is the sigmoid function and $f_a$ a parametrized transformation. Ultimately, the updated representation is a gated combination of the previous representation and a non-linear transformation of the update message:
$$\mathbf {h}^{(\ell + 1)}_i = \phi (\mathbf {u}^{(\ell )}_i) \odot \mathbf {a}^{(\ell )}_i + \mathbf {h}^{(\ell )}_i \odot (1 - \mathbf {a}^{(\ell )}_i ) \;,$$ (Eq. 24)
where $\phi (\cdot )$ is any nonlinear function (we used $\tanh $ ) and $\odot $ stands for element-wise multiplication. All transformations $f_*$ are affine and they are not layer-dependent (since we would like to use as few parameters as possible to decrease model complexity promoting efficiency and scalability).
Experiments
In this section, we compare our method against recent work and perform an ablation study using the WikiHop dataset BIBREF0 . See Appendix "Implementation and experiments details" in the supplementary material for a description of the hyper-parameters of our model and training details.
Comparison
In this experiment, we compare our Entity-GCN against recent prior work on the same task. We present test and development results (when available) for both versions of the dataset in Table 2 . From BIBREF0 , we list an oracle based on human performance as well as two standard reading comprehension models, namely BiDAF BIBREF3 and FastQA BIBREF6 . We also compare against Coref-GRU BIBREF12 , MHPGM BIBREF11 , and Weaver BIBREF10 . Additionally, we include results of MHQA-GRN BIBREF23 , from a recent arXiv preprint describing concurrent work. They jointly train graph neural networks and recurrent encoders. We report single runs of our two best single models, as well as an ensemble, on the unmasked test set (recall that the test set is not publicly available and the task organizers only report unmasked results) and on both versions of the validation set.
Entity-GCN (best single model without coreference edges) outperforms all previous work by over 2 percentage points. We additionally re-ran the BiDAF baseline to compare training time: when using a single Titan X GPU, BiDAF and Entity-GCN process 12.5 and 57.8 document sets per second, respectively. Note that BIBREF0 had to use BiDAF with very small state dimensionalities (20) and a smaller batch size due to scalability issues (both memory and computation costs); when re-running BiDAF, we applied the same reductions. Finally, we also report an ensemble of 5 independently trained models. All models are trained on the same dataset splits with different weight initializations. The ensemble prediction is obtained as $\arg \max \limits _c \prod \limits _{i=1}^5 P_i(c|q, C_q, S_q)$ , where $P_i$ is the distribution predicted by the $i$ th model.
Ablation study
To help determine the sources of improvements, we perform an ablation study using the publicly available validation set (see Table 3 ). We perform two groups of ablation, one on the embedding layer, to study the effect of ELMo, and one on the edges, to study how different relations affect the overall model performance.
We argue that ELMo is crucial, since we do not rely on any other context encoder. However, it is interesting to explore how our R-GCN performs without it. Therefore, in this experiment, we replace the deep contextualized embeddings of both the query and the nodes with GloVe BIBREF22 vectors (insensitive to context). Since we do not have any component in our model that processes the documents, we expect a drop in performance. In other words, in this ablation our model tries to answer questions without reading the context at all. For example, in Figure 1 , our model would be aware that “Stockholm” and “Sweden” appear in the same document but any context words, including the ones encoding relations (e.g., “is the capital of”) will be hidden. Besides, in the masked case all mentions become `unknown' tokens with GloVe and therefore the predictions are equivalent to a random guess. Once the strong pre-trained encoder is out of the way, we also ablate the use of our R-GCN component, thus completely depriving the model of inductive biases that aim at multi-hop reasoning.
The first important observation is that replacing ELMo by GloVe (GloVe with R-GCN in Table 3 ) still yields a competitive system that ranks far above baselines from BIBREF0 and even above the Coref-GRU of BIBREF12 , in terms of accuracy on (unmasked) validation set. The second important observation is that if we then remove R-GCN (GloVe w/o R-GCN in Table 3 ), we lose 8.0 points. That is, the R-GCN component pushes the model to perform above Coref-GRU still without accessing context, but rather by updating mention representations based on their relation to other ones. These results highlight the impact of our R-GCN component.
In this experiment we investigate the effect of the different relations available in the entity graph and processed by the R-GCN module. We start off by testing our stronger encoder (i.e., ELMo) in the absence of edges connecting mentions in the supporting documents (i.e., using only self-loops – No R-GCN in Table 3 ). The results suggest that WikiHop genuinely requires multi-hop inference, as our best model is 6.1% and 8.4% more accurate than this local model, in unmasked and masked settings, respectively. However, it also shows that ELMo representations capture predictive context features, without being explicitly trained for the task. It confirms that our goal of getting away without training expensive document encoders is a realistic one.
We then inspect our model's effectiveness in making use of the structure encoded in the graph. We start naively by fully-connecting all nodes within and across documents without distinguishing edges by type (No relation types in Table 3 ). We observe only marginal improvements with respect to ELMo alone (No R-GCN in Table 3 ) in both the unmasked and masked setting suggesting that a GCN operating over a naive entity graph would not add much to this task and a more informative graph construction and/or a more sophisticated parameterization is indeed needed.
Next, we ablate each type of relation independently, that is, we either remove connections of mentions that co-occur in the same document (DOC-BASED), connections between mentions matching exactly (MATCH), or edges predicted by the coreference system (COREF). The first thing to note is that the model makes better use of DOC-BASED connections than MATCH or COREF connections. This is mostly because i) the majority of the connections are indeed between mentions in the same document, and ii) without connecting mentions within the same document we remove important information since the model is unaware they appear closely in the document. Secondly, we notice that coreference links and complement edges seem to play a more marginal role. Though it may be surprising for coreference edges, recall that the MATCH heuristic already captures the easiest coreference cases, and for the rest the out-of-domain coreference system may not be reliable. Still, modelling all these different relations together gives our Entity-GCN a clear advantage. This is our best system when evaluating on the development set. Since Entity-GCN seems to gain little advantage using the coreference system, we report test results both with and without using it. Surprisingly, with coreference, we observe performance degradation on the test set. It is likely that the test documents are harder for the coreference system.
We do perform one last ablation, namely, we replace our heuristic for assigning edges and their labels by a model component that predicts them. The last row of Table 3 (Induced edges) shows model performance when edges are not predetermined but predicted. For this experiment, we use a bilinear function $f_e(\mathbf {\hat{x}}_i, \mathbf {\hat{x}}_j) = \sigma \left( \mathbf {\hat{x}}^\top _i \mathbf {W}_e \mathbf {\hat{x}}_j \right)$ that predicts the importance of a single edge connecting two nodes $i,j$ using the query-dependent representation of mentions (see Section "Node annotations" ). The performance drops below `No R-GCN' suggesting that it cannot learn these dependencies on its own.
Most results are stronger for the masked settings even though we do not apply the coreference resolution system in this setting due to masking. It is not surprising as coreferred mentions are labeled with the same identifier in the masked version, even if their original surface forms did not match ( BIBREF0 used Wikipedia links for masking). Indeed, in the masked version, an entity is always referred to via the same unique surface form (e.g., MASK1) within and across documents. In the unmasked setting, on the other hand, mentions to an entity may differ (e.g., “US” vs “United States”) and they might not be retrieved by the coreference system we are employing, making the task harder for all models. Therefore, as we rely mostly on exact matching when constructing our graph for the masked case, we are more effective in recovering coreference links on the masked rather than unmasked version.
In Figure 3 , we show how the model behaves when the input graph is large, in particular, how Entity-GCN performs as the number of candidate answers or the number of nodes increases.
Error analysis
In this section we provide an error analysis for our best single model predictions. First of all, we look at which types of questions our model answers well or poorly. There are more than 150 query types in the validation set, but we selected the three with the best and the three with the worst accuracy among those with at least 50 supporting documents and at least 5 candidates. We show results in Table 4 . We observe that questions regarding places (birth and death) are harder for Entity-GCN. We then inspect samples where our model fails while assigning highest likelihood and notice two principal sources of failure: i) a mismatch between what is written in Wikipedia and what is annotated in Wikidata, and ii) a different degree of granularity (e.g., born in “London” vs “UK” could be considered both correct by a human but not when measuring accuracy). See Table 6 in the supplementary material for some reported samples.
Secondly, we study how the model performance degrades when the input graph is large. In particular, we observe a negative Pearson's correlation (-0.687) between accuracy and the number of candidate answers. However, the performance does not decrease steeply. The distribution of the number of candidates in the dataset peaks at 5 and has an average of approximately 20. Therefore, the model does not see many samples where there are a large number of candidate entities during training. In contrast, we notice that as the number of nodes in the graph increases, the model performance drops but more gently (negative but closer to zero Pearson's correlation). This is important as document sets can be large in practical applications. See Figure 3 in the supplementary material for plots.
In Table 6 , we report three samples from the WikiHop development set where our Entity-GCN fails. In particular, we show two instances where our model assigns high confidence to its answer, and one where it does not. We comment on these samples, explaining why our model might fail in these cases.
Related work
In previous work, BiDAF BIBREF3 , FastQA BIBREF6 , Coref-GRU BIBREF12 , MHPGM BIBREF11 , and Weaver / Jenga BIBREF10 have been applied to multi-document question answering. The first two mainly focus on single-document QA and BIBREF0 adapted both of them to work with WikiHop. They process each instance of the dataset by concatenating all $d \in S_q$ in a random order adding document separator tokens. They train using the first answer mention in the concatenated document and evaluate exact match at test time. Coref-GRU, similarly to us, encodes relations between entity mentions in the document. Instead of using graph neural network layers, as we do, they augment RNNs with jump links corresponding to pairs of coreferred mentions. MHPGM uses a multi-attention mechanism in combination with external commonsense relations to perform multiple hops of reasoning. Weaver is a deep co-encoding model that uses several alternating bi-LSTMs to process the concatenated documents and the query.
Graph neural networks have been shown to be successful on a number of NLP tasks BIBREF24 , BIBREF25 , BIBREF26 , including those involving document level modeling BIBREF27 . They have also been applied in the context of asking questions about knowledge contained in a knowledge base BIBREF28 . In BIBREF17 , GCNs are used to capture reasoning chains in a knowledge base. Our work and unpublished concurrent work by BIBREF23 are the first to study graph neural networks in the context of multi-document QA. Besides differences in the architecture, BIBREF23 propose to train a combination of a graph recurrent network and an RNN encoder. We do not train any RNN document encoders in this work.
Conclusion
We designed a graph neural network that operates over a compact graph representation of a set of documents where nodes are mentions of entities and edges signal relations such as within- and cross-document coreference. The model learns to answer questions by gathering evidence from different documents via a differentiable message passing algorithm that updates node representations based on their neighbourhood. Our model outperforms published results, and ablations show substantial evidence in favour of multi-step reasoning. Moreover, we make the model fast by using pre-trained (contextual) embeddings.
Acknowledgments
We would like to thank Johannes Welbl for helping to test our system on WikiHop. This project is supported by SAP Innovation Center Network, ERC Starting Grant BroadSem (678254) and the Dutch Organization for Scientific Research (NWO) VIDI 639.022.518. Wilker Aziz is supported by the Dutch Organisation for Scientific Research (NWO) VICI Grant nr. 277-89-002.
Architecture
See Table 5 for an outline of the Entity-GCN architecture. The computational steps are as follows:
ELMo embeddings are a concatenation of three 1024-dimensional vectors resulting in 3072-dimensional input vectors $\lbrace \mathbf {x}_i\rbrace _{i=1}^N$ .
For the query representation $\mathbf {q}$ , we apply 2 bi-LSTM layers of 256 and 128 hidden units to its ELMo vectors. The concatenation of the forward and backward states results in a 256-dimensional question representation.
ELMo embeddings of candidates are projected to 256-dimensional vectors, concatenated to $\mathbf {q}$ , and further transformed with a two-layer MLP with 1024 and 512 hidden units into 512-dimensional query-aware entity representations $\lbrace \mathbf {\hat{x}}_i\rbrace _{i=1}^N \in \mathbb {R}^{512}$ .
All transformations $f_*$ in the R-GCN layers are affine and keep the input and output dimensionality of node representations the same (512-dimensional).
Finally, a two-layer MLP with [256, 128] hidden units takes the concatenation of $\lbrace \mathbf {h}_i^{(L)}\rbrace _{i=1}^N$ and $\mathbf {q}$ to predict the probability that a candidate node $v_i$ may be the answer to the query $q$ (see Equation 16 ).
During preliminary trials, we experimented with different numbers of R-GCN layers (in the range 1-7). We observed that with WikiHop, for $L \ge 3$ models reach essentially the same performance, but more layers increase the time required to train them. Besides, we observed that the gating mechanism learns to keep more and more information from the past at each layer, making it unnecessary to have more layers than required.
Training details
We train our models with a batch size of 32 for at most 20 epochs using the Adam optimizer BIBREF29 with $\beta _1=0.9$ , $\beta _2=0.999$ and a learning rate of $10^{-4}$ . To help against overfitting, we employ dropout (drop rate $\in \lbrace 0, 0.1, 0.15, 0.2, 0.25\rbrace $ ) BIBREF30 and early-stopping on validation accuracy. We report the best results of each experiment based on accuracy on the validation set. | Exact matches to the entity string and predictions from a coreference resolution system |
63403ffc0232ff041f3da8fa6c30827cfd6404b7 | 63403ffc0232ff041f3da8fa6c30827cfd6404b7_0 | Q: What is the metric used with WIKIHOP?
Text: Introduction
The long-standing goal of natural language understanding is the development of systems which can acquire knowledge from text collections. Fresh interest in reading comprehension tasks was sparked by the availability of large-scale datasets, such as SQuAD BIBREF1 and CNN/Daily Mail BIBREF2 , enabling end-to-end training of neural models BIBREF3 , BIBREF4 , BIBREF5 . These systems, given a text and a question, need to answer the query relying on the given document. Recently, it has been observed that most questions in these datasets do not require reasoning across the document, but they can be answered relying on information contained in a single sentence BIBREF6 . The last generation of large-scale reading comprehension datasets, such as NarrativeQA BIBREF7 , TriviaQA BIBREF8 , and RACE BIBREF9 , have been created in such a way as to address this shortcoming and to ensure that systems relying only on local information cannot achieve competitive performance.
Even though these new datasets are challenging and require reasoning within documents, many question answering and search applications require aggregation of information across multiple documents. The WikiHop dataset BIBREF0 was explicitly created to facilitate the development of systems dealing with these scenarios. Each example in WikiHop consists of a collection of documents, a query and a set of candidate answers (Figure 1 ). Though there is no guarantee that a question cannot be answered by relying just on a single sentence, the authors ensure that it is answerable using a chain of reasoning crossing document boundaries.
Though an important practical problem, the multi-hop setting has so far received little attention. The methods reported by BIBREF0 approach the task by merely concatenating all documents into a single long text and training a standard RNN-based reading comprehension model, namely, BiDAF BIBREF3 and FastQA BIBREF6 . Document concatenation in this setting is also used in Weaver BIBREF10 and MHPGM BIBREF11 . The only published paper which goes beyond concatenation is due to BIBREF12 , where they augment RNNs with jump-links corresponding to co-reference edges. Though these edges provide a structural bias, the RNN states are still tasked with passing the information across the document and performing multi-hop reasoning.
Instead, we frame question answering as an inference problem on a graph representing the document collection. Nodes in this graph correspond to named entities in a document whereas edges encode relations between them (e.g., cross- and within-document coreference links or simply co-occurrence in a document). We assume that reasoning chains can be captured by propagating local contextual information along edges in this graph using a graph convolutional network (GCN) BIBREF13 .
The multi-document setting imposes scalability challenges. In realistic scenarios, a system needs to learn to answer a query for a given collection (e.g., Wikipedia or a domain-specific set of documents). In such scenarios one cannot afford to run expensive document encoders (e.g., RNN or transformer-like self-attention BIBREF14 ), unless the computation can be preprocessed both at train and test time. Even if (similarly to WikiHop creators) one considers a coarse-to-fine approach, where a set of potentially relevant documents is provided, re-encoding them in a query-specific way remains the bottleneck. In contrast to other proposed methods (e.g., BIBREF12 , BIBREF10 , BIBREF3 ), we avoid training expensive document encoders.
In our approach, only a small query encoder, the GCN layers and a simple feed-forward answer selection component are learned. Instead of training RNN encoders, we use contextualized embeddings (ELMo) to obtain initial (local) representations of nodes. This implies that only a lightweight computation has to be performed online, both at train and test time, whereas the rest is preprocessed. Even in the somewhat contrived WikiHop setting, where fairly small sets of candidates are provided, the model is at least 5 times faster to train than BiDAF. Interestingly, when we substitute ELMo with simple pre-trained word embeddings, Entity-GCN still performs on par with many techniques that use expensive question-aware recurrent document encoders.
Despite not using recurrent document encoders, the full Entity-GCN model achieves over 2% improvement over the best previously-published results. As our model is efficient, we also report results of an ensemble which brings a further 3.6% improvement and is only 3% below the human performance reported by BIBREF0 . Our contributions can be summarized as follows:
Method
In this section we explain our method. We first introduce the dataset we focus on, WikiHop by BIBREF0 , as well as the task abstraction. We then present the building blocks that make up our Entity-GCN model, namely, an entity graph used to relate mentions to entities within and across documents, a document encoder used to obtain representations of mentions in context, and a relational graph convolutional network that propagates information through the entity graph.
Dataset and task abstraction
The WikiHop dataset comprises tuples $\langle q, S_q, C_q, a^\star \rangle $ where: $q$ is a query/question, $S_q$ is a set of supporting documents, $C_q$ is a set of candidate answers (all of which are entities mentioned in $S_q$ ), and $a^\star \in C_q$ is the entity that correctly answers the question. WikiHop is assembled assuming that there exists a corpus and a knowledge base (KB) related to each other. The KB contains triples $\langle s, r, o \rangle $ where $s$ is a subject entity, $o$ an object entity, and $r$ a unidirectional relation between them. BIBREF0 used Wikipedia as corpus and Wikidata BIBREF15 as KB. The KB is only used for constructing WikiHop: BIBREF0 retrieved the supporting documents $S_q$ from the corpus looking at mentions of subject and object entities in the text. Note that the set $S_q$ (not the KB) is provided to the QA system, and not all of the supporting documents are relevant for the query but some of them act as distractors. Queries, on the other hand, are not expressed in natural language, but instead consist of tuples $\langle s, r, ? \rangle $ where the object entity is unknown and it has to be inferred by reading the support documents. Therefore, answering a query corresponds to finding the entity $a^\star $ that is the object of a tuple in the KB with subject $s$ and relation $r$ among the provided set of candidate answers $C_q$ .
The goal is to learn a model that can identify the correct answer $a^\star $ from the set of supporting documents $S_q$ . To that end, we exploit the available supervision to train a neural network that computes scores for candidates in $C_q$ . We estimate the parameters of the architecture by maximizing the likelihood of observations. For prediction, we then output the candidate that achieves the highest probability. In the following, we present our model discussing the design decisions that enable multi-step reasoning and an efficient computation.
Reasoning on an entity graph
In an offline step, we organize the content of each training instance in a graph connecting mentions of candidate answers within and across supporting documents. For a given query $q = \langle s, r, ? \rangle $ , we identify mentions in $S_q$ of the entities in $C_q \cup \lbrace s\rbrace $ and create one node per mention. This process is based on the following heuristic:
we consider mention spans in $S_q$ exactly matching an element of $C_q \cup \lbrace s\rbrace $ . Admittedly, this is a rather simple strategy which may suffer from low recall.
we use predictions from a coreference resolution system to add mentions of elements in $C_q \cup \lbrace s\rbrace $ beyond exact matching (including both noun phrases and anaphoric pronouns). In particular, we use the end-to-end coreference resolution by BIBREF16 .
we discard mentions which are ambiguously resolved to multiple coreference chains; this may sacrifice recall, but avoids propagating ambiguity.
To each node $v_i$ , we associate a continuous annotation $\mathbf {x}_i \in \mathbb {R}^D$ which represents an entity in the context where it was mentioned (details in Section "Node annotations" ). We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges). Note that MATCH edges when connecting mentions in the same document are mostly included in the set of edges predicted by the coreference system. Having the two types of edges lets us distinguish between less reliable edges provided by the coreference system and more reliable (but also more sparse) edges given by the exact-match heuristic. We treat these three types of connections as three different types of relations. See Figure 2 for an illustration. In addition to that, and to prevent having disconnected graphs, we add a fourth type of relation (COMPLEMENT edge) between any two nodes that are not connected with any of the other relations. We can think of these edges as those in the complement set of the entity graph with respect to a fully connected graph.
Our model then approaches multi-step reasoning by transforming node representations (Section "Node annotations" for details) with a differentiable message passing algorithm that propagates information through the entity graph. The algorithm is parameterized by a graph convolutional network (GCN) BIBREF13 , in particular, we employ relational-GCNs BIBREF17 , an extended version that accommodates edges of different types. In Section "Entity relational graph convolutional network" we describe the propagation rule.
Each step of the algorithm (also referred to as a hop) updates all node representations in parallel. In particular, a node is updated as a function of messages from its direct neighbours, and a message is possibly specific to a certain relation. At the end of the first step, every node is aware of every other node it connects directly to. Besides, the neighbourhood of a node may include mentions of the same entity as well as others (e.g., same-document relation), and these mentions may have occurred in different documents. Taking this idea recursively, each further step of the algorithm allows a node to indirectly interact with nodes already known to their neighbours. After $L$ layers of R-GCN, information has been propagated through paths connecting up to $L+1$ nodes.
We start with node representations $\lbrace \mathbf {h}_i^{(0)}\rbrace _{i=1}^N$ , and transform them by applying $L$ layers of R-GCN obtaining $\lbrace \mathbf {h}_i^{(L)}\rbrace _{i=1}^N$ . Together with a representation $\mathbf {q}$ of the query, we define a distribution over candidate answers and we train maximizing the likelihood of observations. The probability of selecting a candidate $c \in C_q$ as an answer is then
$$ P(c|q, C_q, S_q) \propto \exp \left(\max _{i \in \mathcal {M}_c} f_o([\mathbf {q}, \mathbf {h}^{(L)}_i]) \right)\;,$$ (Eq. 16)
where $f_o$ is a parameterized affine transformation, and $\mathcal {M}_c$ is the set of node indices such that $i\in \mathcal {M}_c$ only if node $v_i$ is a mention of $c$ . The $\max $ operator in Equation 16 is necessary to select the node with highest predicted probability since a candidate answer is realized in multiple locations via different nodes.
Node annotations
Keeping in mind we want an efficient model, we encode words in supporting documents and in the query using only a pre-trained model for contextualized word representations rather than training our own encoder. Specifically, we use ELMo BIBREF20 , a pre-trained bi-directional language model that relies on character-based input representation. ELMo representations, differently from other pre-trained word-based models (e.g., word2vec BIBREF21 or GloVe BIBREF22 ), are contextualized since each token representation depends on the entire text excerpt (i.e., the whole sentence).
We choose not to fine-tune or propagate gradients through the ELMo architecture, as doing so would have defeated the goal of not having specialized RNN encoders. In the experiments, we will also ablate the use of ELMo, showing how our model behaves when using non-contextualized word representations (we use GloVe).
ELMo encodings are used to produce a set of representations $\lbrace \mathbf {x}_i\rbrace _{i=1}^N$ , where $\mathbf {x}_i \in \mathbb {R}^D$ denotes the $i$ th candidate mention in context. Note that these representations do not depend on the query yet and no trainable model was used to process the documents so far, that is, we use ELMo as a fixed pre-trained encoder. Therefore, we can pre-compute representation of mentions once and store them for later use.
ELMo encodings are used to produce a query representation $\mathbf {q} \in \mathbb {R}^K$ as well. Here, $\mathbf {q}$ is a concatenation of the final outputs from a bidirectional RNN layer trained to re-encode ELMo representations of words in the query. The vector $\mathbf {q}$ is used to compute a query-dependent representation of mentions $\lbrace \mathbf { \hat{x}}_i\rbrace _{i=1}^N$ as well as to compute a probability distribution over candidates (as in Equation 16 ). Query-dependent mention encodings $\mathbf {\hat{x}}_i = f_x(\mathbf {q}, \mathbf {x}_i)$ are generated by a trainable function $f_x$ which is parameterized by a feed-forward neural network.
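A rough sketch of the query encoder and of $f_x$ is shown below, assuming PyTorch. The layer sizes, the single bi-LSTM layer and all names are our simplifications rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class QueryAwareMentionEncoder(nn.Module):
    """Sketch: re-encode the query's ELMo states with a bi-LSTM, then apply f_x."""

    def __init__(self, elmo_dim=3072, q_dim=256, out_dim=512):
        super().__init__()
        self.q_rnn = nn.LSTM(elmo_dim, q_dim // 2, bidirectional=True, batch_first=True)
        # Feed-forward f_x producing query-dependent mention encodings x_hat.
        self.f_x = nn.Sequential(nn.Linear(q_dim + elmo_dim, out_dim), nn.Tanh(),
                                 nn.Linear(out_dim, out_dim))

    def forward(self, query_elmo, mention_elmo):
        # query_elmo: (1, T, elmo_dim); mention_elmo: (N, elmo_dim), pre-computed and fixed.
        _, (h_n, _) = self.q_rnn(query_elmo)
        q = torch.cat([h_n[0, 0], h_n[1, 0]], dim=-1)   # (q_dim,) final fwd/bwd states
        q_rep = q.expand(mention_elmo.size(0), -1)      # broadcast q to every mention
        x_hat = self.f_x(torch.cat([q_rep, mention_elmo], dim=-1))
        return q, x_hat                                 # q and query-aware mention encodings
```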
Entity relational graph convolutional network
Our model uses a gated version of the original R-GCN propagation rule. At the first layer, all hidden node representations are initialized with the query-aware encodings $\mathbf {h}_i^{(0)} = \mathbf {\hat{x}}_i$ . Then, at each layer $0\le \ell < L$ , the update message $\mathbf {u}_i^{(\ell )}$ to the $i$ th node is a sum of a transformation $f_s$ of the current node representation $\mathbf {h}^{(\ell )}_i$ and transformations of its neighbours:
$$\mathbf {u}^{(\ell )}_i = f_s(\mathbf {h}^{(\ell )}_i) + \frac{1}{|\mathcal {N}_i|} \sum _{j \in \mathcal {N}_i} \sum _{r \in \mathcal {R}_{ij}} f_r(\mathbf {h}_j^{(\ell )})\;,$$ (Eq. 22)
where $\mathcal {N}_i$ is the set of indices of nodes neighbouring the $i$ th node, $\mathcal {R}_{ij}$ is the set of edge annotations between $i$ and $j$ , and $f_r$ is a parametrized function specific to an edge type $r\in \mathcal {R}$ . Recall the available relations from Section "Ablation study" , namely, $\mathcal {R} =\lbrace $ DOC-BASED, MATCH, COREF, COMPLEMENT $\rbrace $ .
A gating mechanism regulates how much of the update message propagates to the next step. This provides the model a way to prevent completely overwriting past information. Indeed, if all necessary information to answer a question is present at a layer which is not the last, then the model should learn to stop using neighbouring information for the next steps. Gate levels are computed as
$$\mathbf {a}^{(\ell )}_i = \sigma \left( f_a\left([\mathbf {u}^{(\ell )}_i, \mathbf {h}^{(\ell )}_i ]\right) \right) \;,$$ (Eq. 23)
where $\sigma (\cdot )$ is the sigmoid function and $f_a$ a parametrized transformation. Ultimately, the updated representation is a gated combination of the previous representation and a non-linear transformation of the update message:
$$\mathbf {h}^{(\ell + 1)}_i = \phi (\mathbf {u}^{(\ell )}_i) \odot \mathbf {a}^{(\ell )}_i + \mathbf {h}^{(\ell )}_i \odot (1 - \mathbf {a}^{(\ell )}_i ) \;,$$ (Eq. 24)
where $\phi (\cdot )$ is any nonlinear function (we used $\tanh $ ) and $\odot $ stands for element-wise multiplication. All transformations $f_*$ are affine and they are not layer-dependent (since we would like to use as few parameters as possible to decrease model complexity promoting efficiency and scalability).
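For concreteness, one gated R-GCN layer (Equations 22-24) could be sketched as below. This is an assumed implementation using dense per-relation adjacency matrices (reasonable for WikiHop-sized graphs), not the authors' code; a sparse message-passing formulation would scale better.

```python
import torch
import torch.nn as nn

class GatedRGCNLayer(nn.Module):
    """One gated R-GCN step over |R| relation types (sketch of Eqs. 22-24)."""

    def __init__(self, dim, num_relations):
        super().__init__()
        self.f_s = nn.Linear(dim, dim)                                    # self transformation
        self.f_r = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_relations))
        self.f_a = nn.Linear(2 * dim, dim)                                # gate

    def forward(self, h, adj):
        # h: (N, dim) node states; adj: (R, N, N) float, adj[r, i, j] = 1.0 if
        # nodes i and j are connected by relation r (symmetric).
        degree = adj.max(dim=0).values.sum(dim=1).clamp(min=1).unsqueeze(-1)  # |N_i|, (N, 1)
        msgs = sum(adj[r] @ self.f_r[r](h) for r in range(adj.size(0)))   # relation-specific messages
        u = self.f_s(h) + msgs / degree                                   # Eq. 22
        a = torch.sigmoid(self.f_a(torch.cat([u, h], dim=-1)))            # Eq. 23
        return torch.tanh(u) * a + h * (1 - a)                            # Eq. 24
```

Stacking $L$ such layers lets information flow along paths of up to $L+1$ nodes, matching the multi-hop behaviour described above.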
Experiments
In this section, we compare our method against recent work and perform an ablation study using the WikiHop dataset BIBREF0 . See Appendix "Implementation and experiments details" in the supplementary material for a description of the hyper-parameters of our model and training details.
Comparison
In this experiment, we compare our Entity-GCN against recent prior work on the same task. We present test and development results (when available) for both versions of the dataset in Table 2 . From BIBREF0 , we list an oracle based on human performance as well as two standard reading comprehension models, namely BiDAF BIBREF3 and FastQA BIBREF6 . We also compare against Coref-GRU BIBREF12 , MHPGM BIBREF11 , and Weaver BIBREF10 . Additionally, we include results of MHQA-GRN BIBREF23 , from a recent arXiv preprint describing concurrent work, which jointly trains graph neural networks and recurrent encoders. We report single runs of our two best single models and an ensemble one on the unmasked test set (recall that the test set is not publicly available and the task organizers only report unmasked results) as well as on both versions of the validation set.
Entity-GCN (best single model without coreference edges) outperforms all previous work by over 2 percentage points. We additionally re-ran the BiDAF baseline to compare training time: when using a single Titan X GPU, BiDAF and Entity-GCN process 12.5 and 57.8 document sets per second, respectively. Note that BIBREF0 had to use BiDAF with very small state dimensionalities (20) and a smaller batch size due to scalability issues (both memory and computation costs); for comparability, we applied the same reductions. Finally, we also report an ensemble of 5 independently trained models. All models are trained on the same dataset splits with different weight initializations. The ensemble prediction is obtained as $\arg \max \limits _c \prod \limits _{i=1}^5 P_i(c|q, C_q, S_q)$ , combining the predictions of the individual models.
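The ensemble rule amounts to multiplying the per-model candidate distributions, or equivalently summing their log-probabilities. A minimal sketch with a hypothetical probability matrix (3 models instead of 5, purely for illustration):

```python
import numpy as np

def ensemble_predict(prob_matrix):
    """argmax_c prod_i P_i(c | q, C_q, S_q), computed in log space to avoid underflow.

    prob_matrix: (num_models, num_candidates), each row a candidate distribution.
    """
    log_probs = np.log(np.clip(prob_matrix, 1e-12, None))
    return int(np.argmax(log_probs.sum(axis=0)))

probs = np.array([[0.1, 0.6, 0.2, 0.1],
                  [0.2, 0.5, 0.2, 0.1],
                  [0.3, 0.4, 0.2, 0.1]])
print(ensemble_predict(probs))  # candidate 1 has the largest product of probabilities
```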
Ablation study
To help determine the sources of improvements, we perform an ablation study using the publicly available validation set (see Table 3 ). We perform two groups of ablation, one on the embedding layer, to study the effect of ELMo, and one on the edges, to study how different relations affect the overall model performance.
We argue that ELMo is crucial, since we do not rely on any other context encoder. However, it is interesting to explore how our R-GCN performs without it. Therefore, in this experiment, we replace the deep contextualized embeddings of both the query and the nodes with GloVe BIBREF22 vectors (insensitive to context). Since we do not have any component in our model that processes the documents, we expect a drop in performance. In other words, in this ablation our model tries to answer questions without reading the context at all. For example, in Figure 1 , our model would be aware that “Stockholm” and “Sweden” appear in the same document but any context words, including the ones encoding relations (e.g., “is the capital of”) will be hidden. Besides, in the masked case all mentions become `unknown' tokens with GloVe and therefore the predictions are equivalent to a random guess. Once the strong pre-trained encoder is out of the way, we also ablate the use of our R-GCN component, thus completely depriving the model from inductive biases that aim at multi-hop reasoning.
The first important observation is that replacing ELMo by GloVe (GloVe with R-GCN in Table 3 ) still yields a competitive system that ranks far above baselines from BIBREF0 and even above the Coref-GRU of BIBREF12 , in terms of accuracy on (unmasked) validation set. The second important observation is that if we then remove R-GCN (GloVe w/o R-GCN in Table 3 ), we lose 8.0 points. That is, the R-GCN component pushes the model to perform above Coref-GRU still without accessing context, but rather by updating mention representations based on their relation to other ones. These results highlight the impact of our R-GCN component.
In this experiment we investigate the effect of the different relations available in the entity graph and processed by the R-GCN module. We start off by testing our stronger encoder (i.e., ELMo) in the absence of edges connecting mentions in the supporting documents (i.e., using only self-loops – No R-GCN in Table 3 ). The results suggest that WikiHop genuinely requires multi-hop inference, as our best model is 6.1% and 8.4% more accurate than this local model, in the unmasked and masked settings, respectively. However, it also shows that ELMo representations capture predictive context features without being explicitly trained for the task. This confirms that our goal of doing without expensive trained document encoders is a realistic one.
We then inspect our model's effectiveness in making use of the structure encoded in the graph. We start naively by fully-connecting all nodes within and across documents without distinguishing edges by type (No relation types in Table 3 ). We observe only marginal improvements with respect to ELMo alone (No R-GCN in Table 3 ) in both the unmasked and masked setting suggesting that a GCN operating over a naive entity graph would not add much to this task and a more informative graph construction and/or a more sophisticated parameterization is indeed needed.
Next, we ablate each type of relation independently, that is, we either remove connections of mentions that co-occur in the same document (DOC-BASED), connections between mentions matching exactly (MATCH), or edges predicted by the coreference system (COREF). The first thing to note is that the model makes better use of DOC-BASED connections than MATCH or COREF connections. This is mostly because i) the majority of the connections are indeed between mentions in the same document, and ii) without connecting mentions within the same document we remove important information, since the model is unaware that they appear closely in the document. Secondly, we notice that coreference links and complement edges seem to play a more marginal role. Though it may be surprising for coreference edges, recall that the MATCH heuristic already captures the easiest coreference cases, and for the rest the out-of-domain coreference system may not be reliable. Still, modelling all these different relations together gives our Entity-GCN a clear advantage. This is our best system when evaluated on the development set. Since Entity-GCN seems to gain little advantage from using the coreference system, we report test results both with and without using it. Surprisingly, with coreference, we observe performance degradation on the test set. It is likely that the test documents are harder for the coreference system.
We perform one last ablation, namely, we replace our heuristic for assigning edges and their labels by a model component that predicts them. The last row of Table 3 (Induced edges) shows model performance when edges are not predetermined but predicted. For this experiment, we use a bilinear function $f_e(\mathbf {\hat{x}}_i, \mathbf {\hat{x}}_j) = \sigma \left( \mathbf {\hat{x}}^\top _i \mathbf {W}_e \mathbf {\hat{x}}_j \right)$ that predicts the importance of a single edge connecting two nodes $i,j$ using the query-dependent representation of mentions (see Section "Node annotations" ). The performance drops below `No R-GCN', suggesting that the model cannot learn these dependencies on its own.
Most results are stronger for the masked settings even though we do not apply the coreference resolution system in this setting due to masking. It is not surprising as coreferred mentions are labeled with the same identifier in the masked version, even if their original surface forms did not match ( BIBREF0 used Wikipedia links for masking). Indeed, in the masked version, an entity is always referred to via the same unique surface form (e.g., MASK1) within and across documents. In the unmasked setting, on the other hand, mentions to an entity may differ (e.g., “US” vs “United States”) and they might not be retrieved by the coreference system we are employing, making the task harder for all models. Therefore, as we rely mostly on exact matching when constructing our graph for the masked case, we are more effective in recovering coreference links on the masked rather than unmasked version.
In Figure 3 , we show how the model performance changes as the input graph grows; in particular, how Entity-GCN performs as the number of candidate answers or the number of nodes increases.
Error analysis
In this section we provide an error analysis for the predictions of our best single model. First of all, we look at which types of questions our model handles well or poorly. There are more than 150 query types in the validation set, but we selected the three with the best and the three with the worst accuracy among those that have at least 50 supporting documents and at least 5 candidates. We show results in Table 4 . We observe that questions regarding places (birth and death) are harder for Entity-GCN. We then inspect samples where our model fails while assigning the highest likelihood and notice two principal sources of failure: i) a mismatch between what is written in Wikipedia and what is annotated in Wikidata, and ii) a different degree of granularity (e.g., born in “London” vs “UK” could both be considered correct by a human but not when measuring accuracy). See Table 6 in the supplementary material for some reported samples.
Secondly, we study how the model performance degrades when the input graph is large. In particular, we observe a negative Pearson's correlation (-0.687) between accuracy and the number of candidate answers. However, the performance does not decrease steeply. The distribution of the number of candidates in the dataset peaks at 5 and has an average of approximately 20. Therefore, the model does not see many samples where there are a large number of candidate entities during training. Differently, we notice that as the number of nodes in the graph increases, the model performance drops but more gently (negative but closer to zero Pearson's correlation). This is important as document sets can be large in practical applications. See Figure 3 in the supplemental material for plots.
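The correlation analysis above can be reproduced with a few lines of SciPy; the per-example records below are invented and only illustrate the procedure of correlating correctness with the number of candidates.

```python
from scipy.stats import pearsonr

# Hypothetical (num_candidates, correct) pairs, one per validation example.
records = [(5, 1), (8, 1), (12, 1), (20, 0), (35, 1), (47, 0), (60, 0), (79, 0)]
num_candidates = [n for n, _ in records]
correct = [c for _, c in records]

r, p = pearsonr(num_candidates, correct)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")
```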
In Table 6 , we report three samples from the WikiHop development set where our Entity-GCN fails. In particular, we show two instances where our model is highly confident in its answer, and one where it is not. We comment on these samples, explaining why our model might fail in these cases.
Related work
In previous work, BiDAF BIBREF3 , FastQA BIBREF6 , Coref-GRU BIBREF12 , MHPGM BIBREF11 , and Weaver / Jenga BIBREF10 have been applied to multi-document question answering. The first two mainly focus on single-document QA, and BIBREF0 adapted both of them to work with WikiHop. They process each instance of the dataset by concatenating all $d \in S_q$ in a random order, adding document separator tokens. They trained using the first answer mention in the concatenated document and evaluate exact match at test time. Coref-GRU, similarly to us, encodes relations between entity mentions in the document. Instead of using graph neural network layers, as we do, they augment RNNs with jump links corresponding to pairs of coreferent mentions. MHPGM uses a multi-attention mechanism in combination with external commonsense relations to perform multiple hops of reasoning. Weaver is a deep co-encoding model that uses several alternating bi-LSTMs to process the concatenated documents and the query.
Graph neural networks have been shown successful on a number of NLP tasks BIBREF24 , BIBREF25 , BIBREF26 , including those involving document level modeling BIBREF27 . They have also been applied in the context of asking questions about knowledge contained in a knowledge base BIBREF28 . In schlichtkrull2017modeling, GCNs are used to capture reasoning chains in a knowledge base. Our work and unpublished concurrent work by BIBREF23 are the first to study graph neural networks in the context of multi-document QA. Besides differences in the architecture, BIBREF23 propose to train a combination of a graph recurrent network and an RNN encoder. We do not train any RNN document encoders in this work.
Conclusion
We designed a graph neural network that operates over a compact graph representation of a set of documents where nodes are mentions of entities and edges signal relations such as within- and cross-document coreference. The model learns to answer questions by gathering evidence from different documents via a differentiable message passing algorithm that updates node representations based on their neighbourhood. Our model outperforms previously published results, and our ablations provide substantial evidence in favour of multi-step reasoning. Moreover, we make the model fast by using pre-trained (contextual) embeddings.
Acknowledgments
We would like to thank Johannes Welbl for helping to test our system on WikiHop. This project is supported by SAP Innovation Center Network, ERC Starting Grant BroadSem (678254) and the Dutch Organization for Scientific Research (NWO) VIDI 639.022.518. Wilker Aziz is supported by the Dutch Organisation for Scientific Research (NWO) VICI Grant nr. 277-89-002.
Architecture
See Table 5 for an outline of the Entity-GCN architecture. The computational steps are as follows:
ELMo embeddings are a concatenation of three 1024-dimensional vectors resulting in 3072-dimensional input vectors $\lbrace \mathbf {x}_i\rbrace _{i=1}^N$ .
For the query representation $\mathbf {q}$ , we apply 2 bi-LSTM layers of 256 and 128 hidden units to its ELMo vectors. The concatenation of the forward and backward states results in a 256-dimensional question representation.
ELMo embeddings of candidates are projected to 256-dimensional vectors, concatenated with $\mathbf {q}$ , and further transformed with a two-layer MLP with 1024 and 512 hidden units into 512-dimensional query-aware entity representations $\lbrace \mathbf {\hat{x}}_i\rbrace _{i=1}^N \in \mathbb {R}^{512}$ .
All transformations $f_*$ in the R-GCN layers are affine and keep the input and output dimensionality of node representations the same (512-dimensional).
Finally, a 2-layer MLP with [256, 128] hidden units takes the concatenation of $\lbrace \mathbf {h}_i^{(L)}\rbrace _{i=1}^N$ and $\mathbf {q}$ to predict the probability that a candidate node $v_i$ is the answer to the query $q$ (see Equation 16 ).
During preliminary trials, we experimented with different numbers of R-GCN layers (in the range 1-7). We observed that with WikiHop, for $L \ge 3$ , models reach essentially the same performance, but more layers increase the time required to train them. Besides, we observed that the gating mechanism learns to keep more and more information from the past at each layer, making it unnecessary to have more layers than required.
Training details
We train our models with a batch size of 32 for at most 20 epochs using the Adam optimizer BIBREF29 with $\beta _1=0.9$ , $\beta _2=0.999$ and a learning rate of $10^{-4}$ . To help against overfitting, we employ dropout (drop rate $\in \lbrace 0, 0.1, 0.15, 0.2, 0.25\rbrace $ ) BIBREF30 and early stopping on validation accuracy. We report the best results of each experiment based on accuracy on the validation set. | Accuracy
a25c1883f0a99d2b6471fed48c5121baccbbae82 | a25c1883f0a99d2b6471fed48c5121baccbbae82_0 | Q: What performance does the Entity-GCN get on WIKIHOP?
Text: Introduction
The long-standing goal of natural language understanding is the development of systems which can acquire knowledge from text collections. Fresh interest in reading comprehension tasks was sparked by the availability of large-scale datasets, such as SQuAD BIBREF1 and CNN/Daily Mail BIBREF2 , enabling end-to-end training of neural models BIBREF3 , BIBREF4 , BIBREF5 . These systems, given a text and a question, need to answer the query relying on the given document. Recently, it has been observed that most questions in these datasets do not require reasoning across the document, but they can be answered relying on information contained in a single sentence BIBREF6 . The last generation of large-scale reading comprehension datasets, such as a NarrativeQA BIBREF7 , TriviaQA BIBREF8 , and RACE BIBREF9 , have been created in such a way as to address this shortcoming and to ensure that systems relying only on local information cannot achieve competitive performance.
Even though these new datasets are challenging and require reasoning within documents, many question answering and search applications require aggregation of information across multiple documents. The WikiHop dataset BIBREF0 was explicitly created to facilitate the development of systems dealing with these scenarios. Each example in WikiHop consists of a collection of documents, a query and a set of candidate answers (Figure 1 ). Though there is no guarantee that a question cannot be answered by relying just on a single sentence, the authors ensure that it is answerable using a chain of reasoning crossing document boundaries.
Though an important practical problem, the multi-hop setting has so far received little attention. The methods reported by BIBREF0 approach the task by merely concatenating all documents into a single long text and training a standard RNN-based reading comprehension model, namely, BiDAF BIBREF3 and FastQA BIBREF6 . Document concatenation in this setting is also used in Weaver BIBREF10 and MHPGM BIBREF11 . The only published paper which goes beyond concatenation is due to BIBREF12 , where they augment RNNs with jump-links corresponding to co-reference edges. Though these edges provide a structural bias, the RNN states are still tasked with passing the information across the document and performing multi-hop reasoning.
Instead, we frame question answering as an inference problem on a graph representing the document collection. Nodes in this graph correspond to named entities in a document whereas edges encode relations between them (e.g., cross- and within-document coreference links or simply co-occurrence in a document). We assume that reasoning chains can be captured by propagating local contextual information along edges in this graph using a graph convolutional network (GCN) BIBREF13 .
The multi-document setting imposes scalability challenges. In realistic scenarios, a system needs to learn to answer a query for a given collection (e.g., Wikipedia or a domain-specific set of documents). In such scenarios one cannot afford to run expensive document encoders (e.g., RNN or transformer-like self-attention BIBREF14 ), unless the computation can be preprocessed both at train and test time. Even if (similarly to WikiHop creators) one considers a coarse-to-fine approach, where a set of potentially relevant documents is provided, re-encoding them in a query-specific way remains the bottleneck. In contrast to other proposed methods (e.g., BIBREF12 , BIBREF10 , BIBREF3 ), we avoid training expensive document encoders.
In our approach, only a small query encoder, the GCN layers and a simple feed-forward answer selection component are learned. Instead of training RNN encoders, we use contextualized embeddings (ELMo) to obtain initial (local) representations of nodes. This implies that only a lightweight computation has to be performed online, both at train and test time, whereas the rest is preprocessed. Even in the somewhat contrived WikiHop setting, where fairly small sets of candidates are provided, the model is at least 5 times faster to train than BiDAF. Interestingly, when we substitute ELMo with simple pre-trained word embeddings, Entity-GCN still performs on par with many techniques that use expensive question-aware recurrent document encoders.
Despite not using recurrent document encoders, the full Entity-GCN model achieves over 2% improvement over the best previously-published results. As our model is efficient, we also reported results of an ensemble which brings further 3.6% of improvement and only 3% below the human performance reported by BIBREF0 . Our contributions can be summarized as follows:
Method
In this section we explain our method. We first introduce the dataset we focus on, WikiHop by BIBREF0 , as well as the task abstraction. We then present the building blocks that make up our Entity-GCN model, namely, an entity graph used to relate mentions to entities within and across documents, a document encoder used to obtain representations of mentions in context, and a relational graph convolutional network that propagates information through the entity graph.
Dataset and task abstraction
The WikiHop dataset comprises of tuples $\langle q, S_q, C_q, a^\star \rangle $ where: $q$ is a query/question, $S_q$ is a set of supporting documents, $C_q$ is a set of candidate answers (all of which are entities mentioned in $S_q$ ), and $a^\star \in C_q$ is the entity that correctly answers the question. WikiHop is assembled assuming that there exists a corpus and a knowledge base (KB) related to each other. The KB contains triples $\langle s, r, o \rangle $ where $s$ is a subject entity, $o$ an object entity, and $r$ a unidirectional relation between them. BIBREF0 used Wikipedia as corpus and Wikidata BIBREF15 as KB. The KB is only used for constructing WikiHop: BIBREF0 retrieved the supporting documents $q$0 from the corpus looking at mentions of subject and object entities in the text. Note that the set $q$1 (not the KB) is provided to the QA system, and not all of the supporting documents are relevant for the query but some of them act as distractors. Queries, on the other hand, are not expressed in natural language, but instead consist of tuples $q$2 where the object entity is unknown and it has to be inferred by reading the support documents. Therefore, answering a query corresponds to finding the entity $q$3 that is the object of a tuple in the KB with subject $q$4 and relation $q$5 among the provided set of candidate answers $q$6 .
The goal is to learn a model that can identify the correct answer $a^\star $ from the set of supporting documents $S_q$ . To that end, we exploit the available supervision to train a neural network that computes scores for candidates in $C_q$ . We estimate the parameters of the architecture by maximizing the likelihood of observations. For prediction, we then output the candidate that achieves the highest probability. In the following, we present our model discussing the design decisions that enable multi-step reasoning and an efficient computation.
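For illustration only, a single WikiHop tuple $\langle q, S_q, C_q, a^\star \rangle $ can be held in a structure like the one below; the field names and the example content are ours and are not the official dataset schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class WikiHopInstance:
    query: str              # the tuple <s, r, ?>, e.g. "country stockholm_palace ?"
    supports: List[str]     # supporting documents S_q (some may be distractors)
    candidates: List[str]   # candidate answers C_q, all mentioned in S_q
    answer: str             # gold entity a* (only available at training time)

example = WikiHopInstance(
    query="country stockholm_palace ?",
    supports=["Stockholm Palace is the official residence of the Swedish monarch ...",
              "Stockholm is the capital of Sweden ..."],
    candidates=["Sweden", "Norway", "Denmark"],
    answer="Sweden",
)
```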
Reasoning on an entity graph
In an offline step, we organize the content of each training instance in a graph connecting mentions of candidate answers within and across supporting documents. For a given query $q = \langle s, r, ? \rangle $ , we identify mentions in $S_q$ of the entities in $C_q \cup \lbrace s\rbrace $ and create one node per mention. This process is based on the following heuristic:
we consider mentions spans in $S_q$ exactly matching an element of $C_q \cup \lbrace s\rbrace $ . Admittedly, this is a rather simple strategy which may suffer from low recall.
we use predictions from a coreference resolution system to add mentions of elements in $C_q \cup \lbrace s\rbrace $ beyond exact matching (including both noun phrases and anaphoric pronouns). In particular, we use the end-to-end coreference resolution by BIBREF16 .
we discard mentions which are ambiguously resolved to multiple coreference chains; this may sacrifice recall, but avoids propagating ambiguity.
To each node $v_i$ , we associate a continuous annotation $\mathbf {x}_i \in \mathbb {R}^D$ which represents an entity in the context where it was mentioned (details in Section "Node annotations" ). We then proceed to connect these mentions i) if they co-occur within the same document (we will refer to this as DOC-BASED edges), ii) if the pair of named entity mentions is identical (MATCH edges—these may connect nodes across and within documents), or iii) if they are in the same coreference chain, as predicted by the external coreference system (COREF edges). Note that MATCH edges when connecting mentions in the same document are mostly included in the set of edges predicted by the coreference system. Having the two types of edges lets us distinguish between less reliable edges provided by the coreference system and more reliable (but also more sparse) edges given by the exact-match heuristic. We treat these three types of connections as three different types of relations. See Figure 2 for an illustration. In addition to that, and to prevent having disconnected graphs, we add a fourth type of relation (COMPLEMENT edge) between any two nodes that are not connected with any of the other relations. We can think of these edges as those in the complement set of the entity graph with respect to a fully connected graph.
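A possible sketch of this graph-construction step is given below. It assumes mentions have already been extracted (ambiguous ones discarded) and that coreference chains come from the external system; the data structures are illustrative, not the authors' actual pipeline.

```python
from itertools import combinations

def build_entity_graph(mentions, coref_chains):
    """mentions: list of dicts {"doc": doc_id, "surface": str}, index = node id.
    coref_chains: iterable of sets of node ids from the external coreference system.
    Returns relation name -> set of undirected edges (i, j) with i < j."""
    edges = {"DOC-BASED": set(), "MATCH": set(), "COREF": set()}
    chain_of = {i: k for k, chain in enumerate(coref_chains) for i in chain}

    for i, j in combinations(range(len(mentions)), 2):
        if mentions[i]["doc"] == mentions[j]["doc"]:
            edges["DOC-BASED"].add((i, j))
        if mentions[i]["surface"] == mentions[j]["surface"]:
            edges["MATCH"].add((i, j))
        if i in chain_of and j in chain_of and chain_of[i] == chain_of[j]:
            edges["COREF"].add((i, j))

    # COMPLEMENT edges connect every pair not covered above, keeping the graph connected.
    connected = edges["DOC-BASED"] | edges["MATCH"] | edges["COREF"]
    edges["COMPLEMENT"] = {p for p in combinations(range(len(mentions)), 2) if p not in connected}
    return edges
```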
Our model then approaches multi-step reasoning by transforming node representations (Section "Node annotations" for details) with a differentiable message passing algorithm that propagates information through the entity graph. The algorithm is parameterized by a graph convolutional network (GCN) BIBREF13 , in particular, we employ relational-GCNs BIBREF17 , an extended version that accommodates edges of different types. In Section "Entity relational graph convolutional network" we describe the propagation rule.
Each step of the algorithm (also referred to as a hop) updates all node representations in parallel. In particular, a node is updated as a function of messages from its direct neighbours, and a message is possibly specific to a certain relation. At the end of the first step, every node is aware of every other node it connects directly to. Besides, the neighbourhood of a node may include mentions of the same entity as well as others (e.g., same-document relation), and these mentions may have occurred in different documents. Taking this idea recursively, each further step of the algorithm allows a node to indirectly interact with nodes already known to their neighbours. After $L$ layers of R-GCN, information has been propagated through paths connecting up to $L+1$ nodes.
We start with node representations $\lbrace \mathbf {h}_i^{(0)}\rbrace _{i=1}^N$ , and transform them by applying $L$ layers of R-GCN obtaining $\lbrace \mathbf {h}_i^{(L)}\rbrace _{i=1}^N$ . Together with a representation $\mathbf {q}$ of the query, we define a distribution over candidate answers and we train maximizing the likelihood of observations. The probability of selecting a candidate $c \in C_q$ as an answer is then
$$ P(c|q, C_q, S_q) \propto \exp \left(\max _{i \in \mathcal {M}_c} f_o([\mathbf {q}, \mathbf {h}^{(L)}_i]) \right)\;,$$ (Eq. 16)
where $f_o$ is a parameterized affine transformation, and $\mathcal {M}_c$ is the set of node indices such that $i\in \mathcal {M}_c$ only if node $v_i$ is a mention of $c$ . The $\max $ operator in Equation 16 is necessary to select the node with highest predicted probability since a candidate answer is realized in multiple locations via different nodes.
Node annotations
Keeping in mind we want an efficient model, we encode words in supporting documents and in the query using only a pre-trained model for contextualized word representations rather than training our own encoder. Specifically, we use ELMo BIBREF20 , a pre-trained bi-directional language model that relies on character-based input representation. ELMo representations, differently from other pre-trained word-based models (e.g., word2vec BIBREF21 or GloVe BIBREF22 ), are contextualized since each token representation depends on the entire text excerpt (i.e., the whole sentence).
We choose not to fine-tune or propagate gradients through the ELMo architecture, as doing so would have defeated the goal of not having specialized RNN encoders. In the experiments, we will also ablate the use of ELMo, showing how our model behaves when using non-contextualized word representations (we use GloVe).
ELMo encodings are used to produce a set of representations $\lbrace \mathbf {x}_i\rbrace _{i=1}^N$ , where $\mathbf {x}_i \in \mathbb {R}^D$ denotes the $i$ th candidate mention in context. Note that these representations do not depend on the query yet and no trainable model was used to process the documents so far, that is, we use ELMo as a fixed pre-trained encoder. Therefore, we can pre-compute representation of mentions once and store them for later use.
ELMo encodings are used to produce a query representation $\mathbf {q} \in \mathbb {R}^K$ as well. Here, $\mathbf {q}$ is a concatenation of the final outputs from a bidirectional RNN layer trained to re-encode ELMo representations of words in the query. The vector $\mathbf {q}$ is used to compute a query-dependent representation of mentions $\lbrace \mathbf { \hat{x}}_i\rbrace _{i=1}^N$ as well as to compute a probability distribution over candidates (as in Equation 16 ). Query-dependent mention encodings $\mathbf {\hat{x}}_i = f_x(\mathbf {q}, \mathbf {x}_i)$ are generated by a trainable function $f_x$ which is parameterized by a feed-forward neural network.
Entity relational graph convolutional network
Our model uses a gated version of the original R-GCN propagation rule. At the first layer, all hidden node representations are initialized with the query-aware encodings $\mathbf {h}_i^{(0)} = \mathbf {\hat{x}}_i$ . Then, at each layer $0\le \ell < L$ , the update message $\mathbf {u}_i^{(\ell )}$ to the $i$ th node is a sum of a transformation $f_s$ of the current node representation $\mathbf {h}^{(\ell )}_i$ and transformations of its neighbours:
$$\mathbf {u}^{(\ell )}_i = f_s(\mathbf {h}^{(\ell )}_i) + \frac{1}{|\mathcal {N}_i|} \sum _{j \in \mathcal {N}_i} \sum _{r \in \mathcal {R}_{ij}} f_r(\mathbf {h}_j^{(\ell )})\;,$$ (Eq. 22)
where $\mathcal {N}_i$ is the set of indices of nodes neighbouring the $i$ th node, $\mathcal {R}_{ij}$ is the set of edge annotations between $i$ and $j$ , and $f_r$ is a parametrized function specific to an edge type $r\in \mathcal {R}$ . Recall the available relations from Section "Ablation study" , namely, $\mathcal {R} =\lbrace $ DOC-BASED, MATCH, COREF, COMPLEMENT $\rbrace $ .
A gating mechanism regulates how much of the update message propagates to the next step. This provides the model a way to prevent completely overwriting past information. Indeed, if all necessary information to answer a question is present at a layer which is not the last, then the model should learn to stop using neighbouring information for the next steps. Gate levels are computed as
$$\mathbf {a}^{(\ell )}_i = \sigma \left( f_a\left([\mathbf {u}^{(\ell )}_i, \mathbf {h}^{(\ell )}_i ]\right) \right) \;,$$ (Eq. 23)
where $\sigma (\cdot )$ is the sigmoid function and $f_a$ a parametrized transformation. Ultimately, the updated representation is a gated combination of the previous representation and a non-linear transformation of the update message:
$$\mathbf {h}^{(\ell + 1)}_i = \phi (\mathbf {u}^{(\ell )}_i) \odot \mathbf {a}^{(\ell )}_i + \mathbf {h}^{(\ell )}_i \odot (1 - \mathbf {a}^{(\ell )}_i ) \;,$$ (Eq. 24)
where $\phi (\cdot )$ is any nonlinear function (we used $\tanh $ ) and $\odot $ stands for element-wise multiplication. All transformations $f_*$ are affine and they are not layer-dependent (since we would like to use as few parameters as possible to decrease model complexity promoting efficiency and scalability).
Experiments
In this section, we compare our method against recent work and perform an ablation study using the WikiHop dataset BIBREF0 . See Appendix "Implementation and experiments details" in the supplementary material for a description of the hyper-parameters of our model and training details.
Comparison
In this experiment, we compare our Entity-GCN against recent prior work on the same task. We present test and development results (when available) for both versions of the dataset in Table 2 . From BIBREF0 , we list an oracle based on human performance as well as two standard reading comprehension models, namely BiDAF BIBREF3 and FastQA BIBREF6 . We also compare against Coref-GRU BIBREF12 , MHPGM BIBREF11 , and Weaver BIBREF10 . Additionally, we include results of MHQA-GRN BIBREF23 , from a recent arXiv preprint describing concurrent work, which jointly trains graph neural networks and recurrent encoders. We report single runs of our two best single models and an ensemble one on the unmasked test set (recall that the test set is not publicly available and the task organizers only report unmasked results) as well as on both versions of the validation set.
Entity-GCN (best single model without coreference edges) outperforms all previous work by over 2 percentage points. We additionally re-ran the BiDAF baseline to compare training time: when using a single Titan X GPU, BiDAF and Entity-GCN process 12.5 and 57.8 document sets per second, respectively. Note that BIBREF0 had to use BiDAF with very small state dimensionalities (20) and a smaller batch size due to scalability issues (both memory and computation costs); for comparability, we applied the same reductions. Finally, we also report an ensemble of 5 independently trained models. All models are trained on the same dataset splits with different weight initializations. The ensemble prediction is obtained as $\arg \max \limits _c \prod \limits _{i=1}^5 P_i(c|q, C_q, S_q)$ , combining the predictions of the individual models.
Ablation study
To help determine the sources of improvements, we perform an ablation study using the publicly available validation set (see Table 3 ). We perform two groups of ablation, one on the embedding layer, to study the effect of ELMo, and one on the edges, to study how different relations affect the overall model performance.
We argue that ELMo is crucial, since we do not rely on any other context encoder. However, it is interesting to explore how our R-GCN performs without it. Therefore, in this experiment, we replace the deep contextualized embeddings of both the query and the nodes with GloVe BIBREF22 vectors (insensitive to context). Since we do not have any component in our model that processes the documents, we expect a drop in performance. In other words, in this ablation our model tries to answer questions without reading the context at all. For example, in Figure 1 , our model would be aware that “Stockholm” and “Sweden” appear in the same document but any context words, including the ones encoding relations (e.g., “is the capital of”) will be hidden. Besides, in the masked case all mentions become `unknown' tokens with GloVe and therefore the predictions are equivalent to a random guess. Once the strong pre-trained encoder is out of the way, we also ablate the use of our R-GCN component, thus completely depriving the model from inductive biases that aim at multi-hop reasoning.
The first important observation is that replacing ELMo by GloVe (GloVe with R-GCN in Table 3 ) still yields a competitive system that ranks far above baselines from BIBREF0 and even above the Coref-GRU of BIBREF12 , in terms of accuracy on (unmasked) validation set. The second important observation is that if we then remove R-GCN (GloVe w/o R-GCN in Table 3 ), we lose 8.0 points. That is, the R-GCN component pushes the model to perform above Coref-GRU still without accessing context, but rather by updating mention representations based on their relation to other ones. These results highlight the impact of our R-GCN component.
In this experiment we investigate the effect of the different relations available in the entity graph and processed by the R-GCN module. We start off by testing our stronger encoder (i.e., ELMo) in the absence of edges connecting mentions in the supporting documents (i.e., using only self-loops – No R-GCN in Table 3 ). The results suggest that WikiHop genuinely requires multi-hop inference, as our best model is 6.1% and 8.4% more accurate than this local model, in the unmasked and masked settings, respectively. However, it also shows that ELMo representations capture predictive context features without being explicitly trained for the task. This confirms that our goal of doing without expensive trained document encoders is a realistic one.
We then inspect our model's effectiveness in making use of the structure encoded in the graph. We start naively by fully-connecting all nodes within and across documents without distinguishing edges by type (No relation types in Table 3 ). We observe only marginal improvements with respect to ELMo alone (No R-GCN in Table 3 ) in both the unmasked and masked setting suggesting that a GCN operating over a naive entity graph would not add much to this task and a more informative graph construction and/or a more sophisticated parameterization is indeed needed.
Next, we ablate each type of relation independently, that is, we either remove connections of mentions that co-occur in the same document (DOC-BASED), connections between mentions matching exactly (MATCH), or edges predicted by the coreference system (COREF). The first thing to note is that the model makes better use of DOC-BASED connections than MATCH or COREF connections. This is mostly because i) the majority of the connections are indeed between mentions in the same document, and ii) without connecting mentions within the same document we remove important information, since the model is unaware that they appear closely in the document. Secondly, we notice that coreference links and complement edges seem to play a more marginal role. Though it may be surprising for coreference edges, recall that the MATCH heuristic already captures the easiest coreference cases, and for the rest the out-of-domain coreference system may not be reliable. Still, modelling all these different relations together gives our Entity-GCN a clear advantage. This is our best system when evaluated on the development set. Since Entity-GCN seems to gain little advantage from using the coreference system, we report test results both with and without using it. Surprisingly, with coreference, we observe performance degradation on the test set. It is likely that the test documents are harder for the coreference system.
We perform one last ablation, namely, we replace our heuristic for assigning edges and their labels by a model component that predicts them. The last row of Table 3 (Induced edges) shows model performance when edges are not predetermined but predicted. For this experiment, we use a bilinear function $f_e(\mathbf {\hat{x}}_i, \mathbf {\hat{x}}_j) = \sigma \left( \mathbf {\hat{x}}^\top _i \mathbf {W}_e \mathbf {\hat{x}}_j \right)$ that predicts the importance of a single edge connecting two nodes $i,j$ using the query-dependent representation of mentions (see Section "Node annotations" ). The performance drops below `No R-GCN', suggesting that the model cannot learn these dependencies on its own.
Most results are stronger for the masked settings even though we do not apply the coreference resolution system in this setting due to masking. It is not surprising as coreferred mentions are labeled with the same identifier in the masked version, even if their original surface forms did not match ( BIBREF0 used Wikipedia links for masking). Indeed, in the masked version, an entity is always referred to via the same unique surface form (e.g., MASK1) within and across documents. In the unmasked setting, on the other hand, mentions to an entity may differ (e.g., “US” vs “United States”) and they might not be retrieved by the coreference system we are employing, making the task harder for all models. Therefore, as we rely mostly on exact matching when constructing our graph for the masked case, we are more effective in recovering coreference links on the masked rather than unmasked version.
In Figure 3 , we show how the model performance changes as the input graph grows; in particular, how Entity-GCN performs as the number of candidate answers or the number of nodes increases.
Error analysis
In this section we provide an error analysis for the predictions of our best single model. First of all, we look at which types of questions our model handles well or poorly. There are more than 150 query types in the validation set, but we selected the three with the best and the three with the worst accuracy among those that have at least 50 supporting documents and at least 5 candidates. We show results in Table 4 . We observe that questions regarding places (birth and death) are harder for Entity-GCN. We then inspect samples where our model fails while assigning the highest likelihood and notice two principal sources of failure: i) a mismatch between what is written in Wikipedia and what is annotated in Wikidata, and ii) a different degree of granularity (e.g., born in “London” vs “UK” could both be considered correct by a human but not when measuring accuracy). See Table 6 in the supplementary material for some reported samples.
Secondly, we study how the model performance degrades when the input graph is large. In particular, we observe a negative Pearson's correlation (-0.687) between accuracy and the number of candidate answers. However, the performance does not decrease steeply. The distribution of the number of candidates in the dataset peaks at 5 and has an average of approximately 20. Therefore, the model does not see many samples where there are a large number of candidate entities during training. Differently, we notice that as the number of nodes in the graph increases, the model performance drops but more gently (negative but closer to zero Pearson's correlation). This is important as document sets can be large in practical applications. See Figure 3 in the supplemental material for plots.
In Table 6 , we report three samples from the WikiHop development set where our Entity-GCN fails. In particular, we show two instances where our model is highly confident in its answer, and one where it is not. We comment on these samples, explaining why our model might fail in these cases.
Related work
In previous work, BiDAF BIBREF3 , FastQA BIBREF6 , Coref-GRU BIBREF12 , MHPGM BIBREF11 , and Weaver / Jenga BIBREF10 have been applied to multi-document question answering. The first two mainly focus on single-document QA, and BIBREF0 adapted both of them to work with WikiHop. They process each instance of the dataset by concatenating all $d \in S_q$ in a random order, adding document separator tokens. They trained using the first answer mention in the concatenated document and evaluate exact match at test time. Coref-GRU, similarly to us, encodes relations between entity mentions in the document. Instead of using graph neural network layers, as we do, they augment RNNs with jump links corresponding to pairs of coreferent mentions. MHPGM uses a multi-attention mechanism in combination with external commonsense relations to perform multiple hops of reasoning. Weaver is a deep co-encoding model that uses several alternating bi-LSTMs to process the concatenated documents and the query.
Graph neural networks have been shown successful on a number of NLP tasks BIBREF24 , BIBREF25 , BIBREF26 , including those involving document level modeling BIBREF27 . They have also been applied in the context of asking questions about knowledge contained in a knowledge base BIBREF28 . In schlichtkrull2017modeling, GCNs are used to capture reasoning chains in a knowledge base. Our work and unpublished concurrent work by BIBREF23 are the first to study graph neural networks in the context of multi-document QA. Besides differences in the architecture, BIBREF23 propose to train a combination of a graph recurrent network and an RNN encoder. We do not train any RNN document encoders in this work.
Conclusion
We designed a graph neural network that operates over a compact graph representation of a set of documents where nodes are mentions of entities and edges signal relations such as within- and cross-document coreference. The model learns to answer questions by gathering evidence from different documents via a differentiable message passing algorithm that updates node representations based on their neighbourhood. Our model outperforms previously published results, and our ablations provide substantial evidence in favour of multi-step reasoning. Moreover, we make the model fast by using pre-trained (contextual) embeddings.
Acknowledgments
We would like to thank Johannes Welbl for helping to test our system on WikiHop. This project is supported by SAP Innovation Center Network, ERC Starting Grant BroadSem (678254) and the Dutch Organization for Scientific Research (NWO) VIDI 639.022.518. Wilker Aziz is supported by the Dutch Organisation for Scientific Research (NWO) VICI Grant nr. 277-89-002.
Architecture
See Table 5 for an outline of the Entity-GCN architecture. The computational steps are as follows:
ELMo embeddings are a concatenation of three 1024-dimensional vectors resulting in 3072-dimensional input vectors $\lbrace \mathbf {x}_i\rbrace _{i=1}^N$ .
For the query representation $\mathbf {q}$ , we apply 2 bi-LSTM layers of 256 and 128 hidden units to its ELMo vectors. The concatenation of the forward and backward states results in a 256-dimensional question representation.
ELMo embeddings of candidates are projected to 256-dimensional vectors, concatenated with $\mathbf {q}$ , and further transformed with a two-layer MLP with 1024 and 512 hidden units into 512-dimensional query-aware entity representations $\lbrace \mathbf {\hat{x}}_i\rbrace _{i=1}^N \in \mathbb {R}^{512}$ .
All transformations $f_*$ in the R-GCN layers are affine and keep the input and output dimensionality of node representations the same (512-dimensional).
Finally, a 2-layer MLP with [256, 128] hidden units takes the concatenation of $\lbrace \mathbf {h}_i^{(L)}\rbrace _{i=1}^N$ and $\mathbf {q}$ to predict the probability that a candidate node $v_i$ is the answer to the query $q$ (see Equation 16 ).
During preliminary trials, we experimented with different numbers of R-GCN layers (in the range 1-7). We observed that with WikiHop, for $L \ge 3$ , models reach essentially the same performance, but more layers increase the time required to train them. Besides, we observed that the gating mechanism learns to keep more and more information from the past at each layer, making it unnecessary to have more layers than required.
Training details
We train our models with a batch size of 32 for at most 20 epochs using the Adam optimizer BIBREF29 with $\beta _1=0.9$ , $\beta _2=0.999$ and a learning rate of $10^{-4}$ . To help against overfitting, we employ dropout (drop rate $\in \lbrace 0, 0.1, 0.15, 0.2, 0.25\rbrace $ ) BIBREF30 and early stopping on validation accuracy. We report the best results of each experiment based on accuracy on the validation set. | During testing: 67.6 for single model without coreference, 66.4 for single model with coreference, 71.2 for ensemble of 5 models
a88f8cae1f59cdc4f1f645e496d6d2ac4d9fba1b | a88f8cae1f59cdc4f1f645e496d6d2ac4d9fba1b_0 | Q: Do they evaluate only on English datasets?
Text: Introduction
Affective computing has raised a great deal of interest in the last years. Picard picard1995affective introduced it as a computing paradigm that relates to, arises from, or influences emotions, letting computers be both more effective in assisting humans and successful in making decisions.
Language, as a conceptual process, plays a key role in the perception of verbal irony and sarcasm, two well-known forms of figurative language (FL) BIBREF0 Traditionally, irony as a figure of speech can be intended as “saying something while meaning something else” BIBREF1 . A comprehensive overview of different theories of irony has been illustrated in Attardo attardo07. Understanding if irony and sarcasm are the same linguistic phenomenon or not is still an unresolved question in literature BIBREF2 . Some authors consider irony a more general form of sarcasm, while others tend to consider it a separate linguistic issue BIBREF3 , BIBREF4 . According to the theory of sarcastic irony, sarcasm and irony are very similar, but sarcasm has a specific victim who is the object of the sarcastic statement, while irony does not have such a target BIBREF5 . More commonly, the noun “sarcasm” is understood as “saying the opposite of what one is thinking”, usually with a negative intention. Henceforth, due to the different nuances of irony and sarcasm, and the multiple interpretations of these two concepts, we do not differentiate between them, and, like many researchers, e.g., BIBREF6 , we will use the term “sarcasm” to refer to both verbal irony and sarcasm.
A sarcastic sentence may include features that characterize a positive sentiment, but that insinuate a negative sentiment BIBREF7 , BIBREF8 . It is clear that sarcastic sentences are more difficult for an algorithm to process than non-sarcastic assertions; as a matter of fact, both the situation and the mental state of the speaker are factors that can determine a sarcastic content in a sentence.
A system capable of detecting sarcasm correctly would greatly improve the performance of sentiment analysis systems BIBREF9 , BIBREF10 , BIBREF6 , BIBREF11 , especially considering the big data available nowadays due to the exponential growth of social platforms. Unfortunately, sarcasm detection in written texts is a difficult task even for humans BIBREF12 .
Moreover, some people usually do not understand sarcasm, and there are sentences meant as being sarcastic by the author that are not recognized as such by the readers.
We focus our attention on the possibility of detecting sarcastic sentences automatically from written text only, and from the reader's point of view. Managing this task without any knowledge of relevant contextual features, like prosody, is very hard.
The problem of sarcasm detection has been tackled with machine learning approaches, made possible by the availability of several annotated corpora. In the literature we can find two main categories of such corpora: automatically annotated and manually annotated.
The automatically annotated corpora are usually collected from the microblogging platform Twitter BIBREF13 , BIBREF14 by exploiting the final hashtag of tweets. For instance, a tweet is labeled as sarcastic only if it ends with a hashtag such as #sarcasm or #irony. The same cue is used in Davidov, Tsur and Rappoport davidov2010semi to produce a silver standard for evaluating their model.
Manually annotated corpora are collected from a more diversified range of social media, such as Amazon reviews BIBREF15 , Reddit (Wallace et al. 2014) or online forums BIBREF16 , BIBREF17 , and then labeled by hiring people in the Amazon Mechanical Turk portal. When using crowdsourcing, the annotation procedures are complex and involve, among others, a stage for ensuring that the workers understood the task and they are performing correctly, and a quality assurance stage for removing texts for which a high discrepancy between the annotators arises.
In this work we have tackled the problem of sarcasm detection by trying to use an entirely data-driven approach, exploiting a distributional semantics representation by inducing a semantic space and then applying a set of classifiers to classify the texts as being sarcastic or not sarcastic. With “fully data-driven” we mean approaches that are capable of finding connections between input text and class labels without using any a priori knowledge about the features that characterize a sarcastic statement.
In particular, we do not define “irony” or “sarcasm”, neither use any definition. We simply rely on sets of sentences binary labeled for sarcasm detection taking for granted that the labels correctly identify a sarcastic sentence.
It is worthwhile to point out that in this work we do not create any dataset: we simply exploit the labels of datasets that have already been produced by others, trying to give a baseline for the sarcasm detection task.
The contribution of this work can be summed up in three key points:
To reach these goals, we exploit a Distributional Semantics approach, whose aim is to give a representation of words in a continuous vector space BIBREF18 , BIBREF19 , where word similarity is coded in an unsupervised manner. This representation is useful for building models with little, or no, a-priori knowledge about the task BIBREF20 .
Distributional semantics is a research field that concerns methodologies aimed at determining semantic similarities between linguistic items. The key idea is based on the hypothesis that words co-occurring in similar contexts tend to have similar meaning BIBREF21 , BIBREF22 . Distributional semantics deals with the automatic construction of semantic models induced from large unstructured textual corpora, and it exploits vector space models to represent the meaning of a word BIBREF23 . Many methods can be applied to construct distributional models, ranging from statistical models to machine learning ones BIBREF24 , BIBREF19 , BIBREF25 , BIBREF26 . Among these techniques, Latent Semantic Analysis (LSA) is a methodology for building distributional semantic spaces that extracts statistical relations between words which co-occur in a given context through the use of the truncated singular value decomposition (T-SVD). In this work we explored and studied the possibility of building a data-driven model in the field of sarcasm detection exploiting the well-known Latent Semantic Analysis (LSA) paradigm, both in its traditional formulation given by Landauer, Foltz and Laham landauer1998introduction and by using the Truncated Singular Value Decomposition (T-SVD) as a statistical estimator, as illustrated in Pilato and Vassallo pilato2015tsvd.
Both approaches have been used to create data-driven semantic spaces where documents and, generally, text chunks can be mapped.
The theory behind LSA states that the “psychological similarity between any two words is reflected in the way they co-occur in small sub-samples of language” (Landauer et al. 1998).
We have chosen to exploit the LSA paradigm since it is a well-known distributional semantics paradigm capable of modeling many human cognitive abilities; furthermore, it has many potential practical applications BIBREF27 , BIBREF18 , BIBREF28 , BIBREF29 . Moreover, it has been demonstrated in Pilato and Vassallo pilato2015tsvd that Truncated Singular Value Decomposition (T-SVD), as used in LSA, can be interpreted as a statistical estimator, giving a robust theoretical interpretation to the Latent Semantic Analysis paradigm. Many researchers have successfully applied this technique for typical Semantic Computing applications, such as natural language understanding, cognitive modeling, speech recognition, smart indexing, anti-spam filters, dialogue systems, and other Statistical Natural Language processing problems BIBREF30 , BIBREF31 , BIBREF32 . Moreover, Latent Semantic Analysis has been successfully used for inducing data-driven “conceptual” spaces BIBREF33 . For the aforementioned reasons, we have chosen this approach as a baseline for the detection of sarcasm in texts.
Furthermore, our study makes use of four machine learning methods that have been used on four manually annotated, publicly available corpora.
The experimental results show that our data-driven approach consisting of LSA followed by a classifier can establish models that outperform the published results on two of the corpora; additionally, it produces competitive results for the other corpora that we used for our evaluation.
The next section describes the state of the art in the field, Section SECREF3 describes the Semantic Representation and the Machine Learning methods used in the study. Section SECREF4 introduces the datasets used for the experiments. Section SECREF5 summarizes the experimental results, Section SECREF6 is for the final conclusions and remarks.
The code and the datasets used for the experiments are available on github.
Related works
The problem of sarcasm detection has been tackled using a wide range of supervised or semi-supervised techniques applied to corpora from different social media sources.
In the present work, we do not collect a new corpus for sarcasm detection, but sarcastic corpus annotation has received much attention in the literature. Most of the works have used unsupervised or semi-supervised approaches in order to reduce the cost of the annotation, while partially sacrificing the data quality. One of the first approaches was introduced by Tsur, Davidov and Rappoport tsur2010icwsm for a corpus extracted from Twitter and further developed in Davidov et al. davidov2010semi with a corpus consisting of Amazon reviews. This semi-supervised approach uses the “YAHOO! BOSS” API web search for collecting INLINEFORM0 utterances similar to the ones in a small initial labeled seed set. It was the first work to show that automatically-crawled data are useful for the task of sarcasm detection. Most of the works have been pursued using data extracted from Twitter, as it is relatively easy to extract ironic or sarcastic tweets using the search by hashtag. In fact, in Twitter, the restricted number of characters allowed encourages users to mark the ironic intent with a hashtag like #irony or #sarcasm to prevent ambiguities. The hashtag is usually removed from the tweets and used as a label for the silver standard. Moreover, the first studies on Twitter data showed that the task is quite difficult also for human beings. González-Ibánez et al. gonzalez2011identifying collected a corpus of INLINEFORM1 tweets balanced between sarcastic, positive sentiment and negative sentiment. They presented a part of the corpus to human judges, who achieved low agreement and low accuracy. Reyes et al. reyes2013multidimensional collected a corpus using 4 hashtags that identify four different categories, irony, education, humor, and politics, with INLINEFORM2 tweets each. The same corpus was used in a later work BIBREF34 . Their results suggest that detecting sarcasm in full documents is easier than in single sentences because of the presence of a context, but in both cases it remains a difficult task also for humans, who often show low agreement. The specific case of positive sentiment and a negative situation, which is the most typical sarcastic situation, has also been analyzed BIBREF35 . In particular, the authors found that less than half of the tweets ending with the hashtag #sarcastic are recognized as sarcastic by humans after removing the hashtag. Bharti, Babu, and Jena bharti2015parsing proposed two algorithms with the goal of finding, respectively, tweets with contrast in sentiment and situation, and tweets starting with interjections. They also found that the label distribution does not correlate perfectly with the hashtag distribution, e.g., only INLINEFORM3 out of INLINEFORM4 tweets ending with #sarcastic are actually sarcastic. Farias, Patti and Rosso farias16 proposed a method that uses affective content to classify sarcastic tweets, and showed that it outperforms preceding methods in several Twitter benchmarks. Since classifying tweets by using only the text is a difficult task also for humans, other works proposed new methods capable of exploiting other kinds of data, like the identity of the author or the thread of the tweet. Bamman and Smith bamman2015contextualized augmented the feature vectors with features describing the author of the tweet and the user to which the tweet is addressed, obtaining significant improvements in accuracy. They also found that the hashtags #sarcasm and #sarcastic are mainly used when the audience is not known.
Wang, Wu, Wang and Ren wang2015twitter use a sequential classifier for classifying tweets taking into account the previous responses, thus improving the performance with respect to a simple multi-class classifier.
Amir, Wallace, Lyu, Carvalho and Silva amir2016modelling used the dataset collected in Bamman et al. bamman2015contextualized (which was not completely available) for training a deep learning model that represents users with user embeddings; this method seems to outperform the method from Bamman and colleagues. Sarcasm classification on Twitter involves different modelling techniques that perform better when taking into account the user and the thread history of a tweet. Our work focuses on the task of classifying a single document written by a single author. Thus, we focus mainly on different kinds of datasets. Buschmeier, Cimiano and Klinger buschmeier2014impact studied the corpus introduced in Filatova filatova2012irony by extracting a large number of features about typographic cues that can represent sarcasm, and used different classification methods, obtaining results that vary significantly according to the classifier. They found that the single most important feature is the star rating of the review, which happens because sarcastic reviews are more probable when a user did not like the product.
Wallace et al. wallace2014humans created a corpus from Reddit posts, for which they also stored context information, such as the post that is answered. The authors proposed a method that uses the bag of words and other features from previous studies for building an SVM classifier, which obtains very low results. Moreover, a correlation is found between posts for which the humans require the context and sarcastic posts. This can be explained by considering that the chosen sub-reddits are about religion or politics, and they are thus very prone to controversial discussions. Consequently, to understand the ironic intent of a post it is quite important to know the author's position on the topic and also the posts they are replying to.
Joshi, Sharma and Bhattacharyya joshi-sharma-bhattacharyya:2015:ACL-IJCNLP used features for capturing intrinsic and extrinsic incongruity in texts and outperform two previous methods both on tweets and on forum posts. These works represent valuable means of comparison for the present work. We show that an approach based only on distributional semantics is competitive with other approaches using more elaborate feature engineering, even when the amount of data is quite small. Distributional semantics became popular in NLP thanks to the availability of good quality word embeddings BIBREF19 , and is introduced by design in deep learning models. In sarcasm detection, distributional semantics has been used to serve different roles. Ghosh, Guo, and Muresan ghosh2015sarcastic have adopted word embeddings to disambiguate a literal use of single words from a sarcastic use. Joshi, Tripathi, Patel, Bhattacharyya and Carman joshi2016word use word embeddings to compute incongruities among words and use them as additional features for methods selected from the literature. Our work differs from these as we use LSA instead of word embeddings, and distributional semantics is the only kind of features we use. Ghosh and Veale ghosh2016 use LSA to extend the list of hashtags to find more sarcastic tweets on Twitter and use a deep neural network to perform the actual classification. Our work differs from theirs as we use LSA to compute the vectorial representation of documents and we do not perform tweet crawling. Poria, Cambria, Hazarika and Vij cambria2016 train a convolutional neural network to classify sarcasm in tweets. They extend the neural network with features extracted from other datasets for sentiment, emotion and personality classification, as these features are considered to be useful for the task of sarcasm detection.
Data-Driven Induction of Semantic Spaces and Traditional Classifiers
We focused our research on the role that fully data-driven models can play in detecting sarcasm. To reach this goal, we exploited the Latent Semantic Analysis paradigm both in its traditional formulation (Landauer et al. 1998) and by using the Truncated Singular Value Decomposition (T-SVD) as a statistical estimator as shown in Pilato et al. pilato2015tsvd. We have chosen to use the LSA paradigm to exploit a well-known and well-founded approach for inducing semantic spaces that have been effectively used in natural language understanding, cognitive modeling, speech recognition, smart indexing, and other statistical natural language processing problems. The sub-symbolic codings of documents obtained by the aforementioned LSA-based approaches are then used as inputs by a set of classifiers to evaluate the differences in performance obtained by using different machine learning approaches and testing them on different sarcasm-detection datasets.
The full work-flow is composed of the following steps: (i) preprocessing of the text, (ii) data-driven induction of semantic spaces by means of LSA-oriented paradigms, (iii) mapping of the documents into the induced semantic space, and (iv) supervised learning with traditional classifiers. It does not require any expert or domain knowledge.
Preprocessing of text
The first step of preprocessing for texts is the tokenization using spaces, punctuation and special characters (e.g., $, @) as separators. Thus one token is a sequence of alphanumeric characters or of punctuation symbols. The set of all the extracted tokens constitutes a “vocabulary” named INLINEFORM0 .
The sequences of tokens, each representing a single document in the training set, are used to generate a word-document co-occurrence raw matrix INLINEFORM0 , where each INLINEFORM1 cell contains the number of times the token INLINEFORM2 appears in the document INLINEFORM3 . Let INLINEFORM4 be the number of tokens, i.e., INLINEFORM5 , and let INLINEFORM6 be the number of documents of the corpus used for computing the matrix INLINEFORM7 ; the dimensionality of INLINEFORM8 is INLINEFORM9 .
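For illustration, the following sketch shows one possible implementation of the tokenization and of the raw word-document matrix described above; the exact separator handling and data structures of the original implementation may differ.

```python
import re
from collections import Counter

import numpy as np

def tokenize(text):
    # Alphanumeric runs are tokens; every punctuation/special character is a token of its own.
    return re.findall(r"\w+|[^\w\s]", text)

def build_count_matrix(documents):
    """Return the vocabulary and the (|vocabulary| x n_documents) raw co-occurrence matrix."""
    tokenized = [tokenize(doc) for doc in documents]
    vocabulary = sorted({tok for doc in tokenized for tok in doc})
    index = {tok: i for i, tok in enumerate(vocabulary)}
    A = np.zeros((len(vocabulary), len(documents)))
    for j, doc in enumerate(tokenized):
        for tok, count in Counter(doc).items():
            A[index[tok], j] = count
    return vocabulary, A

# Example usage on two toy documents.
vocab, A = build_count_matrix(["Oh great, another Monday!", "The camera works fine."])
```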
Data driven induction of semantic spaces by means of LSA-oriented paradigms
The matrix INLINEFORM0 is used and further processed to induce proper Semantic Spaces where terms and documents can be mapped. To generate these semantic spaces, we have used both the traditional LSA algorithm (Deerwester et al. 1990, Landauer et al. 1998) and the approach which uses T-SVD as a statistical estimator as proposed in Pilato et al. pilato2015tsvd. For the sake of brevity, we call this last approach Statistical LSA to differentiate it from the Traditional LSA. It is worthwhile to point out that, in the Latent Semantic Analysis paradigm (i.e., both “general” and “statistical”), the corpus used for building the semantic space plays a key role in the performance. As a matter of fact, large and heterogeneous corpora may introduce more noise or too much specific information from a single domain, decreasing the accuracy of the induced models BIBREF36 .
The traditional LSA is a procedure that has been used mainly for information retrieval (Deerwester et al. 1990). The previously described matrix INLINEFORM0 is used for computing a Tf-Idf (Term-Frequency Inverse-document frequency) matrix INLINEFORM1 BIBREF37 . Let INLINEFORM2 be the rank of INLINEFORM3 . The following factorization, called Singular Value Decomposition (SVD) holds for the matrix INLINEFORM4 : DISPLAYFORM0
where INLINEFORM0 is a INLINEFORM1 orthogonal matrix, INLINEFORM2 is a INLINEFORM3 orthogonal matrix and INLINEFORM4 is a INLINEFORM5 diagonal matrix, whose diagonal elements INLINEFORM6 are called singular values of INLINEFORM7 . It can be shown that the singular value decomposition of INLINEFORM8 is unique up to the order of the singular values and of the corresponding columns of INLINEFORM9 and INLINEFORM10 , so there is no loss of generality if we suppose that INLINEFORM11 are ranked in decreasing order.
Let INLINEFORM0 be an integer such that INLINEFORM1 , let INLINEFORM2 be the matrix obtained from INLINEFORM3 by removing its last INLINEFORM4 columns, INLINEFORM5 the matrix obtained from INLINEFORM6 in the same manner and INLINEFORM7 the diagonal matrix obtained from INLINEFORM8 by suppressing both its last INLINEFORM9 rows and INLINEFORM10 columns. INLINEFORM11 is the matrix containing the INLINEFORM12 -dimensional vector representation of the words and INLINEFORM13 is the matrix containing the INLINEFORM14 -dimensional vector representation of the documents. It can be shown (Deerwester et al. 1990) that the matrix: DISPLAYFORM0
is the best rank INLINEFORM0 approximation to INLINEFORM1 according to the Frobenius distance. INLINEFORM6 is called the reconstructed matrix. The process by which INLINEFORM7 is obtained from INLINEFORM8 is called Truncated Singular Value Decomposition (T-SVD). The book by Golub and Van Loan golub1996matrix provides further details about the Singular Value Decomposition technique.
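As a minimal sketch of the Traditional LSA step, the snippet below applies a standard Tf-Idf weighting to the count matrix and truncates its SVD; the specific libraries and weighting options of the original implementation are not stated in the text, so they are assumptions here.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfTransformer

def traditional_lsa(A, r):
    """A is the (terms x documents) count matrix; returns U_r, s_r, V_r and the rank-r reconstruction."""
    tfidf = TfidfTransformer()
    # sklearn expects (documents x terms), hence the transposes.
    W = tfidf.fit_transform(A.T).toarray().T
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r, s_r, Vt_r = U[:, :r], s[:r], Vt[:r, :]
    W_r = U_r @ np.diag(s_r) @ Vt_r        # best rank-r approximation in Frobenius norm
    # In practice the fitted TfidfTransformer is kept to encode new documents consistently.
    return U_r, s_r, Vt_r.T, W_r
```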
The traditional Latent Semantic Analysis based on T-SVD is one of the possible methods to infer data-driven models. Furthermore, one of its major drawbacks, which is the lack of a sound statistical interpretation, has been recently overcome in Pilato et al. pilato2015tsvd, where the authors presented a statistical explanation of this paradigm.
According to this interpretation, the T-SVD algorithm, as used in the Latent Semantic Analysis paradigm, acts as an estimator, which conveys statistically significant information from the sample to the model.
To briefly sum up the procedure, we recall here the concepts of probability amplitude and probability distribution associated with a matrix as they have been defined in Pilato et al. pilato2015tsvd.
Let INLINEFORM0 , INLINEFORM1 be two positive integers and let INLINEFORM2 be the set of real numbers. Given a INLINEFORM3 matrix INLINEFORM4 with INLINEFORM5 , INLINEFORM6 , INLINEFORM7 where at least one of its components INLINEFORM8 is positive, we define a set INLINEFORM9 , composed of all the pairs INLINEFORM10 that identify the positive components of INLINEFORM11 , i.e.: DISPLAYFORM0
Subsequently, we define the probability amplitude associated with INLINEFORM0 , the INLINEFORM1 matrix INLINEFORM2 resulting from the mapping INLINEFORM3 : DISPLAYFORM0
whose elements INLINEFORM0 are computed as: DISPLAYFORM0
so that INLINEFORM0 it is INLINEFORM1 and INLINEFORM2 .
We also define the probability distribution associated with a matrix INLINEFORM0 as the INLINEFORM1 matrix resulting from the mapping INLINEFORM2 : DISPLAYFORM0
whose elements are the squares of the elements of INLINEFORM0 , i.e. INLINEFORM1 . The method starts with a raw data matrix INLINEFORM2 consisting of positive values. In our study the raw data matrix INLINEFORM3 is the term-document co-occurrence matrix. From INLINEFORM4 a real-valued normalized matrix INLINEFORM5 is computed by dividing every element by the sum of all elements of INLINEFORM6 . DISPLAYFORM0
If we call INLINEFORM0 the matrix: DISPLAYFORM0
The matrix INLINEFORM0 can be decomposed with the SVD technique: DISPLAYFORM0
and its best rank-r decomposition INLINEFORM0 is obtained by applying the T-SVD technique, which minimizes the Frobenius distance INLINEFORM1 , given INLINEFORM2 : DISPLAYFORM0
Even if INLINEFORM0 is not a probability distribution, the computation of INLINEFORM1 makes it possible to identify, without any further addition of external information, the probability distribution we are looking for. As shown in Pilato et al. pilato2015tsvd, it theoretically suffices to compute the probability amplitude associated with INLINEFORM2 , i.e. INLINEFORM3 , and consequently to calculate the probability distribution INLINEFORM4 associated with INLINEFORM5 . The aforementioned Frobenius distance INLINEFORM6 constitutes an upper bound to the Hellinger distance between the sample probability INLINEFORM11 and the probability distribution estimated by the procedure.
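The sketch below reflects our reading of the Statistical LSA procedure: the raw counts are normalized so that they sum to one, the element-wise square root is decomposed with T-SVD, and the truncated matrix is mapped back to a probability amplitude by keeping its positive entries and rescaling them to unit Frobenius norm before squaring. The last two steps are our interpretation of the definitions above, since the exact formulas are not reproduced in this text.

```python
import numpy as np

def statistical_lsa(A, r):
    """A is the (terms x documents) raw count matrix with non-negative entries."""
    B = A / A.sum()                       # normalized matrix, entries sum to 1
    Psi = np.sqrt(B)                      # element-wise square roots, so the squares sum to 1
    U, s, Vt = np.linalg.svd(Psi, full_matrices=False)
    U_r, s_r, Vt_r = U[:, :r], s[:r], Vt[:r, :]
    Psi_r = U_r @ np.diag(s_r) @ Vt_r     # best rank-r approximation of Psi (T-SVD)
    # Probability amplitude associated with Psi_r: keep the positive entries and
    # rescale them to unit Frobenius norm (an assumption based on the definition above).
    amplitude = np.where(Psi_r > 0, Psi_r, 0.0)
    amplitude /= np.linalg.norm(amplitude)
    distribution = amplitude ** 2         # estimated probability distribution
    return U_r, s_r, Vt_r.T, distribution
```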
Mapping new documents to the semantic space
Both LSA approaches illustrated in the previous subsections provide us with three matrices, obviously different for each approach: INLINEFORM0 , INLINEFORM1 and INLINEFORM2 .
The INLINEFORM0 and the INLINEFORM1 matrices can be used for computing the vector representation of the new documents in the induced semantic space. The INLINEFORM2 matrix contains in its diagonal the singular values; INLINEFORM3 is composed of rows that represent the r-dimensional sub-symbolic, i.e., numerical, mapping in the semantic space of the tokens constituting the vocabulary INLINEFORM4 . Then, given a text chunk INLINEFORM5 , INLINEFORM6 is sub-symbolically represented by a INLINEFORM7 -dimensional word occurrence vector INLINEFORM8 , from which a vector INLINEFORM9 is computed with two different procedures, depending on which LSA paradigm has been chosen.
In the case of Traditional LSA, it is the Tf-Idf representation BIBREF38 of INLINEFORM0 , computed by using the same parameters learned during training.
In the case of the Statistical LSA, the INLINEFORM0 vector is transformed into INLINEFORM1 in the same way as the matrix INLINEFORM2 is transformed into the matrix INLINEFORM3 : DISPLAYFORM0
Once the appropriate coding of INLINEFORM0 has been computed, an r-dimensional vector INLINEFORM1 representing the sub-symbolic coding of INLINEFORM2 is then obtained from the vector INLINEFORM3 by means of the following mapping formula: DISPLAYFORM0
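In our own notation, the mapping referenced above can be written as the standard LSA folding-in expression, which we assume is the form used here; q is the Tf-Idf vector (Traditional LSA) or the square-root-normalized vector (Statistical LSA) of the text chunk.

```python
import numpy as np

def fold_in(q, U_r, s_r):
    """Map a |V|-dimensional term vector q into the r-dimensional semantic space.

    Implements d_r = Sigma_r^{-1} U_r^T q, the usual LSA folding-in formula.
    """
    return (U_r.T @ q) / s_r
```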
Supervised learning
The training and test documents are mapped into the semantic spaces induced at the previous step. These vectors, i.e., the sub-symbolic codings of the documents, are then used as inputs to different classifiers to train or test on them. Such classifiers will finally solve a binary classification problem, assigning the label 1 (sarcastic) or 0 (non-sarcastic) to a generic document. For this study we have used Support Vector Machines, Logistic Regression, Random Forests, and Gradient Boosting, as they represent the state of the art for most binary classification problems with small datasets. In the following, we briefly describe each of them.
The logistic regressor (LR) is a generalized linear model suitable for binary responses BIBREF39 . In LR the following log-linear model is adopted: DISPLAYFORM0
where INLINEFORM0 represents the probability of the success outcome. A suitable way of minimizing the so-called empirical risk is the numerical estimation of the INLINEFORM1 coefficients by a maximum likelihood procedure: DISPLAYFORM0
where INLINEFORM0 is the training set, INLINEFORM1 is the norm of the weight vector used for regularization, which can be either the INLINEFORM2 or the INLINEFORM3 norm, and INLINEFORM4 is the weight given to the regularization factor. The function in formula EQREF33 is convex, so it can be minimized even with the simple gradient descent algorithm, but more complex algorithms can be used in order to reduce the convergence time. In this work we use the trust region Newton method proposed by Lin, Weng and Keerthi lin2008trust, as provided by the LIBLINEAR library BIBREF40 .
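In a generic notation of our own (not necessarily the one used in the original formulas), the log-linear model and the regularized maximum-likelihood problem described above can be written as:

```latex
\log\frac{p(y=1\mid\mathbf{x})}{1-p(y=1\mid\mathbf{x})}=\mathbf{w}^{\top}\mathbf{x}+b,
\qquad
\min_{\mathbf{w},b}\;\lambda\,\lVert\mathbf{w}\rVert
+\sum_{(\mathbf{x}_i,y_i)\in T}\log\!\bigl(1+e^{-y_i(\mathbf{w}^{\top}\mathbf{x}_i+b)}\bigr),
```

where $y_i \in \{-1,+1\}$ are the labels, $T$ is the training set, $\lVert\cdot\rVert$ is either the L1 or the L2 norm, and $\lambda$ weighs the regularization factor.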
A kernel INLINEFORM0 is any mapping satisfying DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 are elements in the input space, INLINEFORM2 is a mapping from the input space to a new representation space INLINEFORM3 where an inner product is defined. The function INLINEFORM4 is chosen to be nonlinear, and the dimension of the feature space is taken intentionally greater than the dimension of the input space. These choices may make the classification problem linearly separable in INLINEFORM5 . Support vector machines (SVMs), also called kernel machines BIBREF41 , are binary linear classifiers that make use of kernels. They search for the optimal hyperplane INLINEFORM6 in the feature space that maximizes the geometric margin, which is the distance of the hyperplane to the nearest training data point of any class. The main advantage of SVM is that it provides a solution to the global optimization problem, thereby reducing the generalization error of the classifier. The formulation of SVM can be easily extended to build a nonlinear classifier by incorporating a kernel of the class H: DISPLAYFORM0
No systematic tools have been developed to automatically identify the optimal kernel for a particular application.
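Again in our own notation, the kernel condition and the resulting nonlinear decision function take the familiar form:

```latex
K(\mathbf{x},\mathbf{x}')=\langle\varphi(\mathbf{x}),\varphi(\mathbf{x}')\rangle_{H},
\qquad
f(\mathbf{x})=\operatorname{sign}\!\Bigl(\sum_{i}\alpha_i\,y_i\,K(\mathbf{x}_i,\mathbf{x})+b\Bigr),
```

where the coefficients $\alpha_i$ are obtained by solving the margin-maximization problem on the training set; a Gaussian (RBF) kernel is the choice used later in the experiments.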
Decision trees BIBREF42 are rooted trees that can be used successfully as classifiers BIBREF43 . Each node of the tree represents a binary rule that splits the feature space according to the value of a predictive feature, and a path from the root to a leaf node represents a series of rules that are used to recursively divide the feature space into smaller subspaces, where a class label is assigned. The structure of the tree in terms of split nodes can be learned from data by using several approaches. Random forests BIBREF44 are an ensemble of decision trees, found using the bootstrap sampling technique on the training set. In particular, a fixed number of random samples are extracted with replacement from the training set, and each of them is used as a training set to fit a decision tree. The forest is composed of these decision trees, and the final predictions are made by averaging the predictions from all the individual decision trees.
Boosting is another ensemble strategy with the special purpose of improving the combination of a set of weak classifiers. These are chosen to be of very low model complexity, such as decision trees with a single split. The general framework of boosting sequentially adds a tree to the ensemble, each new one with the goal of correcting its predecessor. Gradient boosting BIBREF45 uses a gradient-descent-like procedure to sequentially improve a tree classifier. This is done by adding to the current classifier a new decision tree learned from the residual errors made by the predecessor. The final predictions are made by the tree classifier resulting after a fixed number of iterations of the procedure.
Datasets
We have chosen 4 corpora for our experiments, all of which are publicly available and treat the problem as a binary classification: “SarcasmCorpus” (Filatova 2012), “IAC-Sarcastic” BIBREF46 , which is a subset of Internet Argument Corpus 1.0 prepared for sarcasm detection, “irony-context” (Wallace et al. 2014), and “IAC-Sarcastic-v2” (Oraby et al. 2016), which is extracted from the second version of the Internet Argument Corpus BIBREF47 . In order to provide a more complete evaluation, we also use the corpus of the shared task “Semeval2018 Task 3A” BIBREF48 .
SarcasmCorpus
Filatova filatova2012irony collected 1254 reviews from Amazon for different kinds of products, of which 437 are sarcastic and 817 are not sarcastic. The dataset is unbalanced toward the “regular” texts, and this is due both to the policy of Amazon, which explicitly requires sincere reviews, and to the peculiarity of sarcasm itself, which is used only in some cases, especially because of the difficulty for humans to recognize it over the internet.
Each review in the corpus consists of the title, author, product name, review text and number of stars, and the review is a stand-alone document referring to a single product. This corpus, like all the others considered in this work, has been entirely hand-labeled by Amazon Mechanical Turk workers, who were asked whether each review contains sarcasm in it. Each text has been presented to 5 Turkers and has been classified as sarcastic when at least three among the five workers agreed. The corpus contains INLINEFORM0 distinct tokens, with INLINEFORM1 occurring only in sarcastic reviews, INLINEFORM2 occurring only in regular reviews and INLINEFORM3 occurring in both categories. Buschmeier et al. buschmeier2014impact made an interesting analysis of the corpus by collecting some statistics and publishing the only classification results that are available for it up to now. They extracted 29 task-specific features and combined them with the bag-of-words representation and multiple classifiers. The bag of words turned out to be important for the classification. For example, they obtain a poor 50.9% F-score with a logistic regressor without bag-of-words features, which increases to 74% when they are used. This result is surely related to the difference in terms used by the two classes, but it also shows that information about the words used in the document is needed for the task.
IAC-Sarcastic
The second dataset we used is the IAC-Sarcastic sub-corpus, which consists of 1995 posts coming from 4forums.com, a classical forum where several topics are discussed. This corpus is actually extracted from the larger Internet Argument Corpus (IAC), containing INLINEFORM0 discussions, INLINEFORM1 posts and INLINEFORM2 words. In IAC there are INLINEFORM3 Quote-Response (Q-R) pairs and INLINEFORM4 three-posts chains that have been manually labeled for several HITs (Human-Intelligence Tasks) by Amazon Mechanical Turk. For each Q-R item, the Turkers were asked to evaluate the response section by considering the quote as a context. One of the HITs regarded the identification of a sarcastic response. As a result, the IAC-Sarcastic Corpus consists of 1995 responses, without any quote, with a binary label that indicates the presence of sarcasm. 998 texts are labeled as sarcastic, and 997 are not, so this is one of the rare balanced datasets for this task. To the best of our knowledge, only the work by Justo, Corcoran, Lukin, Walker, and Torres justo2014 published results on the sarcastic task of the IAC dataset, but the authors made a different sampling of the documents from the one used for IAC-Sarcastic. Thus, our results for this corpus are not comparable with the ones reported in that work.
Irony-context
A third dataset is the one collected in Wallace et al. wallace2014humans. The main goal of that study was to highlight the role of the context of a text to make irony understandable by humans. The dataset is extracted from Reddit by collecting comments from the following six sub-reddits: politics, progressive, conservative, atheism, Christianity, technology, with their respective size of 873, 573, 543, 442, 312 and 277 samples. Each comment has been labeled by three university undergraduates using a browser interface which let them see the context of the comment in the form of previous comments or related pages under request. The label of a comment was selected with a simple majority of 2 out of 3 labelers. For each comment and each labeler, they stored whether the context has been requested and if the labeler changed his mind after having seen it. This allowed the authors to study the correlation between the sarcastic label and the requests for context.
The results allowed the authors to infer that machines would also need the context for detecting sarcasm, as their model did not correctly predict the texts for which the humans required the context. This is an important cue that should be considered while developing sarcasm detection methods, even though we do not explicitly consider the context in our method. As a result, we cannot expect to obtain high absolute results for this dataset by letting the model observe only the single text.
IAC-Sarcastic-v2
In 2016 a new version of IAC was made available (IACv2) (Abbot et al. 2016), and a few months later the sarcastic sub-corpus was also released (Oraby et al. 2016), which is bigger than the first version. It consists of three sub-corpora, among which the biggest one is called “generic”, and it is made of INLINEFORM0 posts per class collected from IACv2. For the creation of this sub-corpus, the authors produced a high-precision classifier for the non-sarcastic class, which helped to filter out many non-sarcastic posts from the original corpus and lower the labeling costs. Then, to have high-quality labeling, they required a majority of 6 out of 9 sarcastic annotations to label a post as sarcastic.
To produce a more diverse corpus, they built two more corpora focused on particular rhetorical figures often associated with sarcasm: rhetorical questions and hyperboles. For both of the sub-corpora, the authors used patterns to recognize posts containing the chosen rhetorical figure from IACv2. Each of the collected posts has been subsequently shown to five AMT workers for the sarcastic/not-sarcastic annotation. The label is assigned by simple majority.
The purpose of these two focused sub-corpora is to force classifiers to find some semantic cues which can distinguish sarcastic posts even in the presence of rhetorical figures usually associated with sarcasm. In fact, the presence of hyperboles has been used before as a feature for detecting sarcasm BIBREF49 .
Semeval-2018 Task3 Corpus of Tweets
The International Workshop on Semantic Evaluation Semeval-2018 featured a shared task on verbal irony detection in tweets (Van Hee et al. 2018). The corpus contains a class-balanced training set consisting of INLINEFORM0 tweets, and a test set with 784 tweets. In the test set, only 40% of the instances are ironic. The corpus has been collected from Twitter by searching for tweets with the hashtags #irony, #sarcasm and #not. The corpus has been annotated by three students in linguistics who showed a high inter-annotator agreement. After the annotation, INLINEFORM1 tweets out of INLINEFORM2 were ironic and only 604 were not. Thus, an additional set of INLINEFORM3 non-ironic tweets was added to the corpus. Finally, the corpus was split randomly into class-balanced training and test sets, but an additional cleaning step for removing ambiguous sentences modified the proportion to 40% ironic.
Experimental setup
We ran four groups of experiments, to assess both the effectiveness of our approach when compared with the approaches we found in the literature and its capability of extracting features that are relevant for sarcasm in a cross-domain scenario. In all cases, we denote with the word model one of the possible combinations of classic/statistical LSA and a classifier. The classifiers used are Support Vector Machine (SVM), Logistic regression (Log.Reg), Random Forest (RF) and gradient boosting (XGB).
For the first group of experiments, we evaluated the performance of each of our models on every corpus. We use 10-fold cross-validation and report the mean values of INLINEFORM0 -score, precision, and recall over all the folds. The proportion of the two classes in each fold is equal to the proportion in the whole corpus. Where applicable, we compare our results with existing results in the literature. In addition, we compare with the method presented in Poria et al. cambria2016.
The second group of experiments has been performed on the Semeval 2018 Task 3 dataset (Van Hee et al. 2018). We first find the best LSA dimensionality by 10-fold cross-validation on the training set. Then, we trained the models again on the whole training set and evaluated them on the test set for comparison with the participants in the shared task.
The third group of experiments is inter-corpora. For each experiment, we have chosen one corpus as a training set and another one as a test set. This process is performed for all the models and all the corpora pairs. We aim to find whether sarcasm detection is domain-dependent.
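Reusing the helpers sketched in the previous section, an inter-corpora run can be outlined as below. The handling of test-corpus words that are unseen during training (they are simply ignored) and the omission of the Tf-Idf re-weighting of the test vectors are simplifications of ours, not details stated in the text.

```python
import numpy as np
from sklearn.metrics import f1_score

def cross_corpus_eval(train_docs, y_train, test_docs, y_test, clf, r=40):
    """Train on one corpus, fold the other corpus into its semantic space, and test."""
    vocab, A = build_count_matrix(train_docs)            # sketched earlier
    index = {tok: i for i, tok in enumerate(vocab)}
    U_r, s_r, V_r, _ = traditional_lsa(A, r)             # document vectors are the rows of V_r
    clf.fit(V_r, y_train)

    X_test = []
    for doc in test_docs:
        q = np.zeros(len(vocab))
        for tok in tokenize(doc):
            if tok in index:                              # tokens unseen in training are ignored
                q[index[tok]] += 1
        X_test.append(fold_in(q, U_r, s_r))               # sketched earlier
    return f1_score(y_test, clf.predict(np.array(X_test)))
```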
Finally, in the fourth group of experiments (union experiments) we perform another 10-fold cross-validation in which all the corpora are concatenated. Each fold contains samples from every corpus proportionally to the size of that corpus. The goal of this experiment is to understand whether simply adding more data, but from different domains, improves the classification performance.
The hyperparameters of the classifiers have been chosen by grid search on SarcasmCorpus with LSA dimensionality 40, and then used for all the reported experiments. We use an SVM with Gaussian kernel, a C value of 100 and INLINEFORM0 , a logistic regression with L1 penalty and C=10, and decision trees with entropy loss. SVM and logistic regression both have balanced class weights to cope with unbalanced datasets.
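With the hyper-parameters reported above, a scikit-learn configuration of the four classifiers and of the stratified 10-fold evaluation could look as follows. The Gaussian-kernel gamma, the number of trees, the fold shuffling seed and the use of the xgboost package are placeholders or assumptions of ours, since they are not restated in the text.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, precision_score, recall_score
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from xgboost import XGBClassifier

classifiers = {
    "Log.Reg": LogisticRegression(penalty="l1", C=10, class_weight="balanced",
                                  solver="liblinear"),
    "SVM": SVC(kernel="rbf", C=100, gamma="scale",      # gamma not restated in the text
               class_weight="balanced"),
    "RF": RandomForestClassifier(criterion="entropy"),
    "XGB": XGBClassifier(),
}

def evaluate(X, y, clf, n_splits=10):
    """Stratified 10-fold evaluation reporting the mean F1, precision and recall."""
    scores = []
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in folds.split(X, y):
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        scores.append((f1_score(y[test_idx], pred),
                       precision_score(y[test_idx], pred),
                       recall_score(y[test_idx], pred)))
    return np.mean(scores, axis=0)
```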
In-corpus Experiments
In SarcasmCorpus each sample consists of a review title, a review text, a product name and the number of stars given to the product, ranging from 1 to 5. Buschmeier et al. buschmeier2014impact showed that the star rating is the most discriminative feature. Thus we performed the experiment both including and not including it. In Table TABREF48 , we refer to “SarcasmCorpus” when the star rating is not used, and “SarcasmCorpus*” when it is used. We use the star rating by simply concatenating it to the document vector produced by LSA. The document vector is computed only from the review texts because in our preliminary experiments we found that the other parts are not useful for the task. Accuracy and F-score values of all classifiers for SarcasmCorpus and SarcasmCorpus* are plotted in Figures FIGREF72 and FIGREF73 , and the best F-scores, with the relative precision and recall, are reported in the two columns SarcasmCorpus and SarcasmCorpus* of Table TABREF48 . The best result from the logistic regression in SarcasmCorpus is INLINEFORM0 , which represents a INLINEFORM1 % relative improvement with respect to the INLINEFORM2 reported in the above-mentioned work by Buschmeier et al. buschmeier2014impact. The results from Poria et al. cambria2016 are even higher in terms of F-score, with a relative improvement of INLINEFORM3 , which is due mostly to a much higher recall.
Note that the method by Poria et al. cambria2016 also uses features extracted from other datasets for sentiment, emotion and personality classification, as these features are considered to be useful for the task of sarcasm detection. Moreover, as our goal is to propose a baseline, the training time in the order of minutes is an advantage of our model. We report such results as an upper bound, considering that our model does not use additional information from external data.
The best results are obtained using the star labels. In this setting, our best-performing classifiers are better than the INLINEFORM0 F-score value reported by Buschmeier, and our best INLINEFORM1 -score of INLINEFORM2 represents a INLINEFORM3 relative improvement. In this single case of SarcasmCorpus*, the results with the Traditional LSA are all higher than their counterparts with Statistical LSA.
For IAC-Sarcastic we do not have any previously published result to compare with. The only related result is reported in Joshi et al. joshi-sharma-bhattacharyya:2015:ACL-IJCNLP, which uses a corpus randomly extracted from IAC containing 752 sarcastic and 752 not sarcastic texts. They report an F-score of INLINEFORM0 (averaged over a 5-fold cross-validation), but the text sampling procedure is not specified in the paper. Thus, we prefer to use the sarcastic selection given by the Internet Argument Corpus website, which is also a bit larger (998 sarcastic and 997 non-sarcastic texts).
Accuracies and F-scores of all the classifiers at varying T-SVD size are plotted in Figure FIGREF74 ; the best values of F-score, precision and recall are reported in column IAC-Sarcastic of Table TABREF49 . The best result (F= INLINEFORM0 ) is lower than in SarcasmCorpus, despite IAC-Sarcastic being balanced and larger than SarcasmCorpus. With Traditional LSA the INLINEFORM1 -scores are generally slightly lower, but the precision values are higher.
The results from Poria et al. cambria2016 are significantly higher, suggesting that in this dataset the sarcasm can be detected in most cases with the linguistic features used by their network, independently of the context.
For the irony-context corpus, we used the same 1949 documents selected for the experiments reported in Wallace et al. wallace2014humans. To allow fair comparisons, we used only the texts of the comments, without any contextual information.
The authors report a mean F-score over the five folds of 0.383 by using a bag-of-words representation with 50,000 tokens, plus some other binary features that have proven useful in other works, and an SVM classifier with a linear kernel. Our results are plotted in Figure FIGREF78 and reported in column irony-context of Table TABREF49 , where it is shown how our classifiers clearly outperform the baseline. Our maximum F-score of INLINEFORM0 represents a relative improvement of 20%. Moreover, it is important to highlight the remarkably low values obtained in this corpus when compared with the results from the previous corpora. This is certainly due in part to the high skewness between the classes; in fact, the positive samples are just 537 out of 1949 (27.5%). However, if we consider that in SarcasmCorpus the sarcastic texts are only 33% of the total, we suppose there are other causes as well. Another reason that can explain the poor results can be found in the diversity of topics, as the texts are extracted from six different forums, and the words used for sarcasm can be highly specific to a given context, both cultural and topical. In Wallace et al. wallace2014humans it is explicitly said that the annotators' requests for context are frequent for the sarcastic texts. As a consequence, classifying correctly the texts without a context is difficult even for humans. Moreover, the forums from which the posts were extracted are highly controversial, as they regard politics or religion. As a consequence, it is difficult to grasp the sarcasm of a text without knowing the author's opinions.
The results with Traditional LSA are very similar to those with Statistical LSA, and the real surprise is the remarkably low scores obtained by the random forest and gradient boosting methods.
In this case, we wanted to compare our results against those from Oraby et al. oraby2016creating, which deal with the three sub-corpora separately. However, they are not directly comparable because, at the moment in which we report these results, only half of the corpus has been released, consisting of 3260 posts in the generic sub-corpus, 582 in the hyperbole one and 850 in the rhetorical-questions one. The three sub-corpora are all balanced.
Results computed on the three sub-corpora are plotted in Figures FIGREF75 , FIGREF76 , FIGREF77 and reported in the last three columns of Table TABREF50 . Despite the difference in data availability, the results are quite encouraging. In fact, we can see that our method reaches an INLINEFORM0 -score of INLINEFORM1 in the generic sub-corpus, slightly better than the previous study. Moreover, it also improves over Oraby et al. (2016) in the other two sub-corpora, but using Traditional LSA.
Nonetheless, these results show that it is possible to achieve very good performance when high-quality labeled corpora are available, even with a limited number of examples.
For the CNN, we have results only in the generic sub-corpus, and this is the only case in which at least one of our models can outperform it in terms of F-score.
SemEval 2018 Task 3A
The last experiment on a single dataset was performed on the settings of SemEval 2018 Task 3A (Van Hee et al. 2018), which is a shared task on a binary classification of irony, which we introduced in Section SECREF47 .
We start by performing 10-fold cross-validation with our classifiers over varying LSA dimensionality to choose the best setting. We used the same set of hyper-parameters used for the previous experiments.
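A sketch of the dimensionality selection step is given below, reusing the evaluate helper above. The grid of candidate sizes is illustrative, and for brevity the semantic space is induced once on the whole training data, whereas a stricter protocol would re-induce it inside each fold.

```python
def select_lsa_size(A_train, y_train, clf, candidate_sizes=(10, 20, 40, 100)):
    """Pick the LSA dimensionality with the best mean cross-validated F1 score."""
    best_size, best_f1 = None, -1.0
    for r in candidate_sizes:
        _, _, V_r, _ = traditional_lsa(A_train, r)   # document vectors are the rows of V_r
        f1, _, _ = evaluate(V_r, y_train, clf)
        if f1 > best_f1:
            best_size, best_f1 = r, f1
    return best_size, best_f1
```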
Once we have found the best setting, we train the model again with all the data and predict the classes of the test tweets. We found that we obtain the best results in cross-validation with LSA vectors of size 20, and the results are presented in Table TABREF59 . We list results for four different classifiers, namely logistic regression, support vector machine, gradient boosting and random forest. In this case, we get the best results using random forests, followed by gradient boosting. In particular, the random forest obtains an F INLINEFORM0 -score of INLINEFORM1 , which is higher than the 6th-ranked submission. It is worth noting that the submissions that we listed in the Table, except for the baseline, all use approaches based on deep learning. Compared to the unigram SVM baseline used for the shared task (row 11 in Table 4), our model with the random forest is clearly better according to all the metrics, while our model with SVM is better in terms of F INLINEFORM2 -score but not accuracy.
Surely the model we provide is not the best one in terms of accuracy, and showing its superiority over all the others is not the goal of this work; however, the best performers, i.e., deep learning networks, involve a high number of parameters and a high computational training cost. Moreover, there are additional interesting notes. First, the submission by BIBREF50 also makes use of deep neural networks but does not get a higher score than our best. Second, the submission by BIBREF51 uses SVMs over syntactic, semantic, and affective features, but is still not better than our best score. The models that showed a clear superiority use deep networks pre-trained on external data to extract more meaningful features. Thus, while the advantage is real, the number of parameters and the amount of data used is much higher.
Inter-corpora Experiments
The third group of experiments is aimed at finding whether sarcasm is domain-dependent, or whether the knowledge acquired over one dataset can be transferred to another. We evaluate the similarity among the datasets by training a model over all the data of a corpus and using a second corpus as a test set. Our best results for every corpus pair are listed in Tables TABREF62 and TABREF63 , where the rows indicate the training set and the columns the test set. Quite interestingly, unlike the in-corpus experiments, where the logistic regression works better in some cases, all the top scores that we report for these experiments are obtained by using the SVM classifier.
In Table TABREF62 we find the results for SarcasmCorpus and IAC-Sarcastic used as test sets. For the case of SarcasmCorpus, the F-scores are quite low compared to the in-corpus experiments. In fact, here we obtain the best result of only INLINEFORM0 when IAC-Sarcastic is the training set, which is much lower than the scores of about 70 that we get in the in-corpus experiments (column SarcasmCorpus in Table TABREF48 ). The low results suggest that the sarcasm conveyed by the texts in SarcasmCorpus is somehow different from what we can observe in the other corpora.
When we use IAC-Sarcastic as a test set, we can observe higher scores (column IAC-Sarcastic in Table TABREF62 ), and the F-score of INLINEFORM0 that we obtain by training on IAC-Sarcastic-v2 is comparable to the INLINEFORM1 , which is the best result in the in-corpus experiments. Also, the lower result, which we obtain when training on irony-context, is quite close to the result obtained for the in-corpus experiment, and this is unexpected given the poor results obtained in the in-corpus experiments for irony-context (column Irony-Context in Table TABREF49 ). When irony-context is the test set (first three columns of Table TABREF63 ), we can observe again that the F-score obtained by training on IAC-Sarcastic-v2 is higher than the score obtained in the in-corpus experiment. Nonetheless, all the scores for this test set are lower than INLINEFORM2 , with high recall and low precision.
When using IAC-Sarcastic-v2 as the test set (see the last three columns of Table TABREF63 ) we can observe F-scores between INLINEFORM0 and INLINEFORM1 , characterized by a high recall and a lower precision. The top F1 score is obtained when using IAC-Sarcastic as a training set, which also corresponds to the highest precision. This is further proof in favor of the similarity of the two corpora. The top recall score of INLINEFORM2 is obtained by training on SarcasmCorpus, but the precision is much lower than in the other two cases.
Overall, it is worth noting that, for all the experiments, the top results are obtained by training on either IAC-Sarcastic or IAC-Sarcastic-v2, while training on SarcasmCorpus always works better than training on irony-context. Considering that the quality of the features depends on the quality of the data and of the annotation, we suppose that the quality of the first two datasets is higher than the quality of irony-context, while the data contained in SarcasmCorpus are too different from the other corpora. A deeper analysis of the corpora can be found in the discussion (Section SECREF71 ).
Union Experiments
The last group of experiments we ran has the goal of understanding whether the combination of data coming from different sources can positively influence the final score. For this purpose, as anticipated in Section SECREF51 , we computed 10 folds for each of the four corpora used for the first group of experiments, and used the concatenation of 9 folds of every corpus as a training set, and the remaining fold of each corpus as a validation set.
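A sketch of how such union folds can be assembled, so that each fold preserves the per-corpus proportions, is shown below; the variable names are illustrative, and the per-corpus validation split reflects our reading of the setup above.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def union_folds(corpora, n_splits=10, seed=0):
    """corpora: dict name -> (X, y). Yields (X_train, y_train, per_corpus_validation)."""
    splits = {name: list(StratifiedKFold(n_splits=n_splits, shuffle=True,
                                         random_state=seed).split(X, y))
              for name, (X, y) in corpora.items()}
    for k in range(n_splits):
        train_X, train_y, validation = [], [], {}
        for name, (X, y) in corpora.items():
            tr_idx, va_idx = splits[name][k]
            train_X.append(X[tr_idx])
            train_y.append(y[tr_idx])
            validation[name] = (X[va_idx], y[va_idx])
        yield np.vstack(train_X), np.concatenate(train_y), validation
```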
From Tables TABREF64 and TABREF65 we can observe that these results are not higher overall with respect to the inter-corpora results. The only exceptions are SarcasmCorpus, where the results are almost 20 F-score points higher than those obtained in the inter-corpora experiments, and IAC-v2, where the gradient boosting (XGB) obtains 2 F-score points more than the top score in the inter-corpora results.
The results on SarcasmCorpus are still lower than the in-corpus results, and the scores of random forest and gradient boosting are much lower than the other two methods. This is further evidence that adding diverse data is not helpful, or is actually harmful, for classifying SarcasmCorpus.
The general trend of this block of experiments is that our classifiers are not able to leverage data from different domains in order to improve global results. In-domain data represent the best choice even if the amount of data is lower.
Discussion
In this section, we discuss our results from a more general point of view. We start by briefly discussing the content of the different corpora. Then we try to relate the results of the different types of experiments. Finally, we discuss the limits of our experiments with respect to the type of documents we worked with.
The corpora we used for our experiments are characterized by high internal variability in style, as each corpus consists of texts from thousands of different authors. Despite the number of authors, there are some factors that depend on the type of text and the medium. For instance, the irony-context, IAC Sarcastic, and IAC Sarcastic v2 corpora are made of posts collected from online forums, which are mostly about politics. Most of the texts are extracted from longer arguments, and thus the style is informal and the tone is in general aggressive.
In Tables TABREF67 , TABREF68 and TABREF69 we show some randomly selected samples from these corpora. As is apparent from the samples, the posts have a target to attack, which can be another user or the subject of the discussion. Table TABREF67 shows some examples from IAC-Sarcastic. In all the examples the author attacks another user or his opinions. For instance, the first and the third sarcastic examples make sarcasm about the Bible to attack another user's religious ideas, while in the second example the author uses sarcasm to expose a fallacious position of another user without appearing rude on his side. By contrast, the non-sarcastic examples are much more direct about their meaning. A similar pattern can be found in the examples from IAC Sarcastic v2 (Table TABREF69 ). Sarcasm is again used to attack a person (first example) or his/her opinions (second example), possibly religious ones. The third example shows that also in this corpus some sentences are hard to classify. In this case, the information that we get is that the target has ultraconservative ideas, but it is not easy to grasp the sarcasm. The examples from irony-context (in Table TABREF68 ) are much more difficult to grasp without knowing contextual information. For instance, the first sarcastic example can be either sarcastic or regular according to the political opinion of the author: it is sarcastic if the author is a Republican, and it is not sarcastic (but would appear strange to write) if the author is a Democrat. The second and the third examples are hard to classify without knowing the subject of the conversation. The same issue of missing a broader context also appears in the non-sarcastic examples, and the third example can easily be interpreted as sarcastic by humans. In SarcasmCorpus the situation is different, as there is no ongoing argument and the sarcasm is directed against products that the author did not like. In this case, there are many references to the external world and the writing is more passionate in its negative stance. Some samples are shown in Table TABREF66 . The sarcastic examples in Table TABREF66 all express a negative sentiment and also use negative words. Sarcasm is used within these negative reviews to attack the product in a more creative way and make the text more fun than a usual negative review. The non-sarcastic reviews, on the other hand, give a description of the product and their experience with it, with regular forms of expressing the sentiment (“are also a great feature”, “It is a great little camera”). We suppose that this difference in style is the main obstacle to the correct classification of SarcasmCorpus instances in the cross-corpora experiments.
We now discuss the relations among the results of the different experiments to gain some further insights into the sarcastic content of our corpora. From the in-corpus experiments, we obtain good results on SarcasmCorpus, which is the only corpus containing Amazon reviews. Unfortunately, when we train our models in a cross-corpora or all-corpora setting, our results drop dramatically, especially in the cross-corpora case. These results mean that the sarcasm in SarcasmCorpus is conveyed through features that are not present in the other corpora. This is especially true when considering that in the inter-corpora experiments, using SarcasmCorpus as a training set in all cases yields results that are only better than the ones obtained when using irony-context as a training set.
The results on irony-context show that this corpus is much more difficult to classify than the others, as was also pointed out in the paper that presented it (Wallace et al. 2014), which highlights how the human annotators needed to read the contexts to be sure about the sarcastic posts. In the inter-corpora experiments, the results when training on irony-context are the worst for all the test sets, but only by a few points of F-score, whereas at first we could have expected dramatically lower results. For us, these are strong suggestions that the types of texts present in irony-context are similar to the ones present in IAC-Sarcastic-v2, but the quality is lower. As a consequence, this is further proof that the dataset annotators do not consider sarcasm and irony two different linguistic phenomena.
The two versions of IAC-Sarcastic have proved to be the easiest to classify when using other corpora for training. The best result in IAC-Sarcastic is obtained in the Union experiment (see Tables TABREF64 , TABREF65 ), and thus it benefits from the higher amount of data, especially from the data from IAC-Sarcastic-v2, as can be observed from the cross-corpora results (Table TABREF62 ).
By contrast, the best results on IAC-Sarcastic-v2 are obtained with the in-corpus experiments, while all the results obtained in the inter-corpora experiments are clearly worse. Among the inter-corpora experiments, training the model with IAC-Sarcastic results in an F-score of INLINEFORM0 , which means a relative decrease of INLINEFORM1 with respect to the top score of the in-corpus experiments on IAC-Sarcastic-v2. It is interesting to note that one cause of the decrease can also be the size of the corpora: in fact, IAC-Sarcastic contains only 1995 texts, while IAC-Sarcastic-v2 contains 3260.
One final remark is about the absolute scores obtained in the in-corpus experiments. In fact, we can notice that in SarcasmCorpus the F-score can go beyond INLINEFORM0 , and up to INLINEFORM1 by adding the star rating as a feature. The high result can be explained by the peculiarity of this corpus, where sarcasm is present mostly in negative reviews, and the star label is the single best indicator of sarcasm BIBREF49 . The other corpora consist of texts that belong to a thread of forum posts. Sometimes it is reasonable to classify such posts as sarcastic or not out of context, but in many cases it is impossible also for humans (see the examples in Table TABREF68 ). In fact, the low F-score in irony-context is due to low precision, which is an indicator of high similarity between the positive and negative classes. Moreover, low precision and higher recall is a pattern that is present in most of the experiments, even if with higher absolute numbers. The combination of high recall and lower precision suggests that the dubious texts are classified as sarcastic more often than as non-sarcastic.
Conclusions
In this work, we have tackled the problem of automatic sarcasm detection from a data-driven point of view. More in detail, we have used a set of labeled datasets and applied distributional semantics followed by machine learning approaches in order to provide a baseline for the literature in managing such a problem. We do not differentiate between sarcasm and irony because they are not so easily distinguishable even for human experts. Experiments have been carried out on four different corpora containing texts from online reviews or forums, and on the corpus used for the shared task on irony detection on Twitter proposed in SemEval 2018. We have shown experimentally that some basic methods can outperform, in all the datasets, other methods based on bag of words and linguistic features, thus representing a solid baseline. With our experiments that train the models on one corpus and test them by using the other corpora, we have experimentally confirmed that the annotators also tend not to distinguish between irony and sarcasm. By contrast, major differences can be found according to the text domains, i.e., review vs. political forum. The domain difference can also prevent the method from taking benefit from more data when they are too diverse from the test data. As future work, we will try to improve distributional semantics approaches with linguistic features in order to perform fairer comparisons with more recent and advanced methods. Furthermore, we will exploit more classical AI methodologies (e.g., by using ontologies, reasoners, common-sense reasoning techniques, etc.) to deduce the context and understand the concepts expressed in a sentence, also exploiting features like hashtags and emojis to improve the overall performance of the approach.
bea60603d78baeeb6df1afb53ed08d8296b42f1e | bea60603d78baeeb6df1afb53ed08d8296b42f1e_0 | Q: What baseline models are used?
Text: Introduction
Affective computing has raised a great deal of interest in the last years. Picard picard1995affective introduced it as a computing paradigm that relates to, arises from, or influences emotions, letting computers be both more effective in assisting humans and successful in making decisions.
Language, as a conceptual process, plays a key role in the perception of verbal irony and sarcasm, two well-known forms of figurative language (FL) BIBREF0 . Traditionally, irony as a figure of speech can be understood as “saying something while meaning something else” BIBREF1 . A comprehensive overview of different theories of irony has been illustrated in Attardo attardo07. Understanding if irony and sarcasm are the same linguistic phenomenon or not is still an unresolved question in the literature BIBREF2 . Some authors consider irony a more general form of sarcasm, while others tend to consider it a separate linguistic issue BIBREF3 , BIBREF4 . According to the theory of sarcastic irony, sarcasm and irony are very similar, but sarcasm has a specific victim who is the object of the sarcastic statement, while irony does not have such a target BIBREF5 . More commonly, the noun “sarcasm” is understood as “saying the opposite of what one is thinking”, usually with a negative intention. Henceforth, due to the different nuances of irony and sarcasm, and the multiple interpretations of these two concepts, we do not differentiate between them, and, like many researchers, e.g., BIBREF6 , we will use the term “sarcasm” to refer to both verbal irony and sarcasm.
A sarcastic sentence may include features that characterize a positive sentiment, but that insinuates a negative sentiment BIBREF7 , BIBREF8 . It is clear that sarcastic sentences are more difficult to process by an algorithm than non-sarcastic assertions; as a matter of fact, both the situation and the mental state of the speaker are factors that can determine a sarcastic content in a sentence.
A system capable of detecting sarcasm correctly would greatly improve the performance of sentiment analysis systems BIBREF9 , BIBREF10 , BIBREF6 , BIBREF11 , especially considering the big data available nowadays due to the exponential growth of social platforms. Unfortunately, sarcasm detection in written texts is a difficult task even for humans BIBREF12 .
Moreover, some people do not understand sarcasm at all, and there are sentences meant as sarcastic by the author that are not recognized as such by the readers.
We focus our attention on the possibility of detecting sarcastic sentences automatically from written text only, and from the reader's point of view. Managing this task without any knowledge of relevant contextual features, like prosody, is very hard.
The problem of sarcasm detection has been tackled with machine learning approaches, made possible by the availability of several annotated corpora. In the literature we can find two main categories of such corpora: automatically annotated and manually annotated.
The automatically annotated corpora are usually collected from the microblogging platform Twitter BIBREF13 , BIBREF14 by exploiting the final hashtag of tweets. For instance, a tweet is labeled as sarcastic only if it ends with a hashtag such as #sarcasm or #irony. The same cue is used in Davidov, Tsur and Rappoport davidov2010semi to produce a silver standard for evaluating their model.
Manually annotated corpora are collected from a more diversified range of social media, such as Amazon reviews BIBREF15 , Reddit (Wallace et al. 2014) or online forums BIBREF16 , BIBREF17 , and then labeled by hiring people in the Amazon Mechanical Turk portal. When using crowdsourcing, the annotation procedures are complex and involve, among others, a stage for ensuring that the workers understood the task and they are performing correctly, and a quality assurance stage for removing texts for which a high discrepancy between the annotators arises.
In this work we have tackled the problem of sarcasm detection by trying to use an entirely data-driven approach, exploiting a distributional semantics representation by inducing a semantic space and then applying a set of classifiers to classify the texts as being sarcastic or not sarcastic. With “fully data-driven” we mean approaches that are capable of finding connections between input text and class labels without using any a priori knowledge about the features that characterize a sarcastic statement.
In particular, we do not define “irony” or “sarcasm”, neither use any definition. We simply rely on sets of sentences binary labeled for sarcasm detection taking for granted that the labels correctly identify a sarcastic sentence.
It is worthwhile to point out that in this work we do not create any dataset: we simply exploit the labels of datasets that have already been produced by others, trying to give a baseline for the sarcasm detection task.
The contribution of this work can be summed up in three key points: (i) we propose a fully data-driven baseline for sarcasm detection that relies only on distributional semantics, without hand-crafted features; (ii) we evaluate it on several manually annotated, publicly available corpora and compare it with previously published results; (iii) we carry out inter-corpora and union experiments to study how well the learned models transfer across domains.
To reach these goals, we exploit a Distributional Semantics approach, whose aim is to give a representation of words in a continuous vector space BIBREF18 , BIBREF19 , where word similarity is coded in an unsupervised manner. This representation is useful for building models with little, or no, a-priori knowledge about the task BIBREF20 .
Distributional semantics is a research field that concerns methodologies aimed at determining semantic similarities between linguistic items. The key idea is based on the hypothesis that words co-occurring in similar contexts tend to have similar meaning BIBREF21 , BIBREF22 . Distributional semantics deals with the automatic construction of semantic models induced from large unstructured textual corpora, and it exploits vector space models to represent the meaning of a word BIBREF23 . Many methods can be applied to construct distributional models. They range from the statistical models to machine learning ones BIBREF24 , BIBREF19 , BIBREF25 , BIBREF26 . Among these techniques, Latent Semantic Analysis (LSA) is a methodology for building distributional semantic spaces that extract statistical relations between words which co-occurr in a given context though the use of the Truncated Singular value decomposition (T-SVD). In this work we explored and studied the possibility of building a data-driven model in the field of sarcasm detection exploiting the well-known Latent Semantic Analysis (LSA) paradigm both in its traditional formulation given by Landauer, Foltz and Laham landauer1998introduction and by using the Truncated Singular Value Decomposition (T-SVD) as a statistical estimator as illustrated in Pilato and Vassallo pilato2015tsvd.
Both approaches have been used to create data-driven semantic spaces where documents and, generally, text chunks can be mapped.
The theory behind LSA states that the “psychological similarity between any two words is reflected in the way they co-occur in small sub-samples of language” (Landauer et al. 1998).
We have chosen to exploit the LSA paradigm since it is a well-known distributional semantics paradigm capable of modeling many human cognitive abilities; furthermore, it has many potential practical applications BIBREF27 , BIBREF18 , BIBREF28 , BIBREF29 . Moreover, it has been demonstrated in Pilato and Vassallo pilato2015tsvd that Truncated Singular Value Decomposition (T-SVD), as used in LSA, can be interpreted as a statistical estimator, giving a robust theoretical interpretation to the Latent Semantic Analysis paradigm. Many researchers have successfully applied this technique for typical Semantic Computing applications, such as natural language understanding, cognitive modeling, speech recognition, smart indexing, anti-spam filters, dialogue systems, and other Statistical Natural Language processing problems BIBREF30 , BIBREF31 , BIBREF32 . Moreover, Latent Semantic Analysis has been successfully used for inducing data-driven “conceptual” spaces BIBREF33 . For the aforementioned reasons, we have chosen this approach as a baseline for the detection of sarcasm in texts.
Furthermore, our study makes use of four machine learning methods that have been used on four manually annotated, publicly available corpora.
The experimental results show that our data-driven approach consisting of LSA followed by a classifier can establish models that outperform the published results on two of the corpora; additionally, it produces competitive results for the other corpora that we used for our evaluation.
The next section describes the state of the art in the field. Section SECREF3 describes the semantic representation and the machine learning methods used in the study. Section SECREF4 introduces the datasets used for the experiments. Section SECREF5 summarizes the experimental results, and Section SECREF6 draws the final conclusions and remarks.
The code and the datasets used for the experiments are available on github.
Related works
The problem of sarcasm detection has been tackled using a wide range of supervised or semi-supervised techniques applied to corpora from different social media sources.
In the present work, we do not collect a new corpus for sarcasm detection, but sarcastic corpus annotation has received much attention in the literature. Most of the works have used unsupervised or semi-supervised approaches in order to reduce the cost of the annotation, while partially sacrificing the data quality. One of the first approaches was introduced by Tsur, Davidov and Rappoport tsur2010icwsm for a corpus extracted from Twitter and further developed in Davidov et al. davidov2010semi with a corpus consisting of Amazon reviews. This semi-supervised approach uses “YAHOO! BOSS” API web search for collecting INLINEFORM0 utterances similar to the ones in a small initial labeled seed set. It was the first work to show that automatically-crawled data are useful for the task of sarcasm detection. Most of the works have been pursued using data extracted from Twitter, as it is relatively easy to extract ironic or sarcastic tweets using the search by hashtag. In fact, in Twitter, the restricted number of characters allowed encourages to mark the ironic intent with a hashtag like #irony or #sarcasm to prevent ambiguities. The hashtag is usually removed from the tweets and used as a label for the silver standard. Moreover, the first studies on Twitter data showed that the task is quite difficult also for human beings. González-Ibánez et al. gonzalez2011identifying collected a corpus of INLINEFORM1 tweets balanced between sarcastic, positive sentiment and negative sentiment. They presented a part of the corpus to human judges, who achieved low agreement and low accuracy. Reyes et al. reyes2013multidimensional collected a corpus using 4 hashtags that identify four different categories, irony, education, humor, and politics, with INLINEFORM2 tweets each. The same corpus was used in a later work BIBREF34 . Their results suggest that detecting sarcasm in full documents is easier than in single sentences because of the presence of a context, but in both cases, it remains a difficult task also for humans that often have a low agreement. The specific case of positive sentiment and a negative situation, which is the most typical sarcastic situation, has also been analyzed BIBREF35 . In particular, authors have found that less than half of the tweets ending with the hashtag #sarcastic are recognized as sarcastic by humans after removing the hashtag. Bharti, Babu, and Jena bharti2015parsing proposed two algorithms with the goal to find, respectively, tweets with contrast in sentiment and situation, and tweets starting with interjections. They also found that the label distribution does not correlate perfectly with the hashtag distribution, e.g., only INLINEFORM3 out of INLINEFORM4 tweets ending with #sarcastic are actually sarcastic. Farias, Patti and Rosso farias16 proposed a method that uses affective content to classify sarcastic tweets, and show that it outperforms preceding methods in several Twitter benchmarks. Since classifying tweets by using only the text is a difficult task also for humans, other works proposed new methods capable of exploiting other kind of data, like the identity of the author or the thread of the tweet. Bamman and Smith bamman2015contextualized augmented the feature vectors with features describing the author of the tweet and the user to which the tweet is addressed, obtaining significant improvements in accuracy. They also found that the hashtags #sarcasm and #sarcastic are mainly used when the audience is not known. 
Wang, Wu, Wang and Ren wang2015twitter use a sequential classifier for classifying tweets that takes into account the previous responses, thus improving the performance with respect to a simple multi-class classifier.
Amir, Wallace, Lyu, Carvalho and Silva amir2016modelling used the dataset collected in Bamman et al. bamman2015contextualized (which was not completely available) for training a deep learning model that could represent users with user embeddings and this method seems to outperform the method from Bamman and colleagues. Sarcasm classification on Twitter involves different modelling techniques that perform better when taking into account the user and the thread history of a Tweet. Our work focuses on the task of classifying a single document written by a single author. Thus, we focus mainly on different kinds of datasets. Buschmeier, Cimiano and Klinger buschmeier2014impact have studied the corpus introduced in Filatova filatova2012irony by extracting a high number of features about typographic cues that can represent sarcasm, and used different classification methods obtaining results that vary significantly according to the classifier. They found that the single most important feature is the star rating of the review, and this happens because sarcastic reviews are more probable when a user did not like the product.
Wallace et al. wallace2014humans created a corpus from Reddit posts, for which they also stored context information, such as the post that is answered. The authors proposed a method that uses the bag of words and other features from previous studies for building an SVM classifier that gets very low results. Moreover, a correlation is found between posts for which the humans require the context and sarcastic posts. This can be explained by considering that the chosen sub-reddits are about religion or politics, and they are thus very prone to controversial discussions. Consequently, to understand the ironic intent of a post it is quite important to know the author position on the topic and also the posts they are answering to.
Joshi, Sharma and Bhattacharyya joshi-sharma-bhattacharyya:2015:ACL-IJCNLP used features for capturing intrinsic and extrinsic incongruity in texts and outperforms two previous methods both in tweets and in forum posts. These works represent valuable means of comparison for the present work. We show that an approach based only on distributional semantics is competitive with other approaches using more elaborated feature engineering, even when the data amount is quite small. Distributional semantics became popular in NLP thanks to the availability of good quality word embeddings BIBREF19 , and are introduced by design in deep learning models. In sarcasm detection, distributional semantics has been used to serve different roles. Ghosh, Guo, and Muresan ghosh2015sarcastic have adopted word embeddings to disambiguate a literal use of single words from a sarcastic use. Joshi, Tripathi, Patel, Bhattacharyya and Carman joshi2016word use word embeddings to compute incongruities among words using them as additional features for methods selected from the literature. Our work differs from these as we use LSA instead of word embeddings, and distributional semantics is the only kind of features we use. Ghosh and Veale ghosh2016 use LSA to extend the list of hashtags to find more sarcastic tweets on Twitter and use a deep neural network to perform the actual classification. Our work differs from theirs as we use LSA to compute the vectorial representation of documents and we do not perform tweet crawling. Poria, Cambria, Hazarika and Vij cambria2016 train a convolutional neural network to classify sarcasm in tweets. They extend the neural network with features extracted from other datasets for sentiment, emotion and personality classification, as these features are considered to be useful for the task of sarcasm detection.
Data-Driven Induction of Semantic Spaces and Traditional Classifiers
We focused our research on the role that fully data-driven models can play in detecting sarcasm. To reach this goal, we exploited the Latent Semantic Analysis paradigm both in its traditional formulation (Landauer et al. 1998) and by using the Truncated Singular Value Decomposition (T-SVD) as a statistical estimator as shown in Pilato et al. pilato2015tsvd. We have chosen to use the LSA paradigm to exploit a well-known and well-founded approach for inducing semantic spaces that have been effectively used in natural language understanding, cognitive modeling, speech recognition, smart indexing, and other statistical natural language processing problems. The sub-symbolic codings of documents obtained by the aforementioned LSA-based approaches are then used as inputs by a set of classifiers to evaluate the differences of performances obtained by using different machine learning approaches and testing them on different sarcasm-detection datasets.
The full work-flow, composed of the following steps: (i) preprocessing of the text, (ii) data-driven induction of the semantic space via LSA, (iii) mapping of the documents into the semantic space, and (iv) supervised classification, does not require any expert or domain knowledge.
Preprocessing of text
The first step of preprocessing for texts is the tokenization using spaces, punctuation and special characters (e.g., $, , @) as separators. Thus one token is a sequence of alphanumeric characters or of punctuation symbols. The set of all the extracted tokens constitutes a “vocabulary” named INLINEFORM0 .
The sequences of tokens, each representing a single document in the training set, are used to generate a word-document co-occurrence raw matrix $\mathbf{A}$, where each cell $A_{ij}$ contains the number of times the token $t_i$ appears in the document $d_j$. Let $m$ be the number of tokens, i.e., $m = |V|$, and let $n$ be the number of documents of the corpus used for computing the matrix $\mathbf{A}$; the dimensionality of $\mathbf{A}$ is $m \times n$.
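As a minimal sketch of this preprocessing step (not the authors' released code), the tokenization and the raw count matrix can be obtained with scikit-learn; the token pattern that keeps punctuation symbols as stand-alone tokens is an assumption on our part.

```python
# Sketch of the preprocessing step: tokenization and raw word-document count matrix.
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "What a great product... it broke after two days!",
    "It is a great little camera, the battery lasts long.",
]

# One token = a run of alphanumeric characters OR a single punctuation/special symbol.
vectorizer = CountVectorizer(token_pattern=r"\w+|[^\w\s]", lowercase=True)
A = vectorizer.fit_transform(docs).T  # shape: (m tokens) x (n documents)

vocabulary = vectorizer.get_feature_names_out()
print(A.shape, len(vocabulary))
```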
Data driven induction of semantic spaces by means of LSA-oriented paradigms
The matrix $\mathbf{A}$ is used and further processed to induce proper semantic spaces where terms and documents can be mapped. To generate these semantic spaces, we have used both the traditional LSA algorithm (Deerwester et al. 1990, Landauer et al. 1998) and the approach which uses T-SVD as a statistical estimator as proposed in Pilato et al. pilato2015tsvd. For the sake of brevity, we call this last approach Statistical LSA to differentiate it from the Traditional LSA. It is worthwhile to point out that, in the Latent Semantic Analysis paradigm (i.e., both “general” and “statistical”), the corpus used for building the semantic space plays a key role in performance. As a matter of fact, large and heterogeneous corpora may introduce noise or overly domain-specific information, decreasing the accuracy of the induced models BIBREF36 .
The traditional LSA is a procedure that has been used mainly for information retrieval (Deerwester et al. 1990). The previously described matrix $\mathbf{A}$ is used for computing a Tf-Idf (Term-Frequency Inverse-document frequency) matrix $\mathbf{M}$ BIBREF37 . Let $\rho$ be the rank of $\mathbf{M}$. The following factorization, called Singular Value Decomposition (SVD), holds for the matrix $\mathbf{M}$:

$$\mathbf{M} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^\top$$

where $\mathbf{U}$ is an $m \times \rho$ orthogonal matrix, $\mathbf{V}$ is an $n \times \rho$ orthogonal matrix and $\mathbf{\Sigma}$ is a $\rho \times \rho$ diagonal matrix, whose diagonal elements $\sigma_1, \sigma_2, \ldots, \sigma_\rho$ are called singular values of $\mathbf{M}$. It can be shown that the singular value decomposition of $\mathbf{M}$ is unique up to the order of the singular values and of the corresponding columns of $\mathbf{U}$ and $\mathbf{V}$, so there is no loss of generality if we suppose that $\sigma_1, \sigma_2, \ldots, \sigma_\rho$ are ranked in decreasing order.

Let $r$ be an integer such that $r \le \rho$, let $\mathbf{U}_r$ be the matrix obtained from $\mathbf{U}$ by removing its last $\rho - r$ columns, $\mathbf{V}_r$ the matrix obtained from $\mathbf{V}$ in the same manner and $\mathbf{\Sigma}_r$ the diagonal matrix obtained from $\mathbf{\Sigma}$ by suppressing both its last $\rho - r$ rows and $\rho - r$ columns. $\mathbf{U}_r$ is the matrix containing the $r$-dimensional vector representation of the words and $\mathbf{V}_r$ is the matrix containing the $r$-dimensional vector representation of the documents. It can be shown (Deerwester et al. 1990) that the matrix:

$$\mathbf{M}_r = \mathbf{U}_r \mathbf{\Sigma}_r \mathbf{V}_r^\top$$

is the best rank-$r$ approximation to $\mathbf{M}$ according to the Frobenius distance. $\mathbf{M}_r$ is called the reconstructed matrix. The process by which $\mathbf{M}_r$ is obtained from $\mathbf{M}$ is called Truncated Singular Value Decomposition (T-SVD). The book by Golub and Van Loan golub1996matrix provides further details about the Singular Value Decomposition technique.
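A compact sketch of this traditional LSA pipeline (Tf-Idf weighting followed by T-SVD), assuming scikit-learn and the count matrix A from the previous sketch, could look as follows; it is illustrative, not the authors' implementation.

```python
# Sketch of Traditional LSA: Tf-Idf weighting + Truncated SVD (T-SVD).
# Assumes `A` is the (m x n) token-by-document count matrix from the previous sketch.
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.decomposition import TruncatedSVD

r = 40  # dimensionality of the induced semantic space

# TfidfTransformer expects documents on the rows, so we work with A transposed (n x m).
tfidf = TfidfTransformer()
M_docs = tfidf.fit_transform(A.T)          # (n documents) x (m tokens), Tf-Idf weighted

svd = TruncatedSVD(n_components=r, random_state=0)
doc_vectors = svd.fit_transform(M_docs)    # (n x r): rows are document codings
word_vectors = svd.components_.T           # (m x r): rows are word codings (up to scaling)

print(doc_vectors.shape, word_vectors.shape)
```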
The traditional Latent Semantic Analysis based on T-SVD is one of the possible methods to infer data-driven models. Furthermore, one of its major drawbacks, which is the lack of a sound statistical interpretation, has been recently overcome in Pilato et al. pilato2015tsvd, where the authors present a statistical explanation of this paradigm.
According to this interpretation, the T-SVD algorithm, as used in the Latent Semantic Analysis paradigm, acts as an estimator, which conveys statistically significant information from the sample to the model.
To briefly sum-up the procedure, we recall here the concepts of probability amplitude and probability distribution associated with a matrix as they have been defined in Pilato et al. pilato2015tsvd.
Let $p$, $q$ be two positive integers and let $\mathbb{R}$ be the set of real numbers. Given a $p \times q$ matrix $\mathbf{B} = [b_{ij}]$ with $b_{ij} \in \mathbb{R}$, $i \in \{1, \ldots, p\}$, $j \in \{1, \ldots, q\}$, where at least one of its components $b_{ij}$ is positive, we define a set $S_B$, composed of all the pairs $(i, j)$ that identify the positive components of $\mathbf{B}$, i.e.:

$$S_B = \{ (i, j) \; : \; b_{ij} > 0 \}$$

Subsequently, we define the probability amplitude associated with $\mathbf{B}$ as the $p \times q$ matrix $\mathbf{\Psi}$ resulting from the mapping $\psi(\cdot)$:

$$\mathbf{\Psi} = \psi(\mathbf{B})$$

whose elements $\psi_{ij}$ are computed as:

$$\psi_{ij} = \begin{cases} \sqrt{\dfrac{b_{ij}}{\sum_{(h,k) \in S_B} b_{hk}}} & \text{if } (i,j) \in S_B \\ 0 & \text{otherwise} \end{cases}$$

so that for all $(i, j)$ it holds that $\psi_{ij} \ge 0$ and $\sum_{i,j} \psi_{ij}^2 = 1$.

We also define the probability distribution associated with a matrix $\mathbf{B}$ as the $p \times q$ matrix $\mathbf{\Pi}$ resulting from the mapping $\pi(\cdot)$:

$$\mathbf{\Pi} = \pi(\mathbf{B})$$

whose elements are the squares of the elements of $\mathbf{\Psi}$, i.e. $\pi_{ij} = \psi_{ij}^2$. The method starts with a raw data matrix $\mathbf{A}$ consisting of positive values. In our study the raw data matrix $\mathbf{A}$ is the term-document co-occurrence matrix. From $\mathbf{A}$ a real-valued normalized matrix $\mathbf{N}$ is computed by dividing every element by the sum of all elements of $\mathbf{A}$:

$$N_{ij} = \frac{A_{ij}}{\sum_{h,k} A_{hk}}$$

If we call $\mathbf{Q}$ the matrix:

$$Q_{ij} = \sqrt{N_{ij}}$$

the matrix $\mathbf{Q}$ can be decomposed with the SVD technique:

$$\mathbf{Q} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^\top$$

and its best rank-$r$ decomposition $\mathbf{Q}_r$ is obtained by applying the T-SVD technique, which minimizes the Frobenius distance $\Vert \mathbf{Q} - \mathbf{Q}_r \Vert_F$, given $r$:

$$\mathbf{Q}_r = \mathbf{U}_r \mathbf{\Sigma}_r \mathbf{V}_r^\top$$

Even if $\mathbf{Q}_r$ is not a probability distribution, its computation makes it possible to identify, without any further addition of external information, the probability distribution we are looking for. As shown in Pilato et al. pilato2015tsvd, it theoretically suffices to compute the probability amplitude associated to $\mathbf{Q}_r$, i.e. $\psi(\mathbf{Q}_r)$, and consequently to calculate the probability distribution $\pi(\mathbf{Q}_r)$ associated to $\mathbf{Q}_r$. The aforementioned Frobenius distance $\Vert \mathbf{Q} - \mathbf{Q}_r \Vert_F$ constitutes an upper bound to the Hellinger distance between the sample probability $\mathbf{N}$ and the probability distribution estimated by the procedure.
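A small numpy sketch of the Statistical LSA estimator described above (normalize, take element-wise square roots, apply T-SVD); the function and variable names mirror the notation above and are our own assumptions, not the released code.

```python
# Sketch of Statistical LSA: A -> N -> Q -> rank-r T-SVD of Q.
# `A_toy` is assumed to be a dense (m x n) non-negative count matrix.
import numpy as np
from scipy.sparse.linalg import svds

def statistical_lsa(A, r):
    N = A / A.sum()                 # normalized matrix N
    Q = np.sqrt(N)                  # element-wise square root
    U, s, Vt = svds(Q, k=r)         # truncated SVD of rank r
    # svds returns singular values in ascending order; sort them descending.
    order = np.argsort(s)[::-1]
    U_r, S_r, V_r = U[:, order], np.diag(s[order]), Vt[order, :].T
    Q_r = U_r @ S_r @ V_r.T         # best rank-r approximation of Q (Frobenius norm)
    return U_r, S_r, V_r, Q_r

A_toy = np.random.poisson(0.3, size=(500, 120)).astype(float)  # toy count matrix
U_r, S_r, V_r, Q_r = statistical_lsa(A_toy, r=40)
print(Q_r.shape)
```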
Mapping new documents to the semantic space
Both LSA approaches illustrated in the previous subsections provide us with three matrices, $\mathbf{U}_r$, $\mathbf{\Sigma}_r$ and $\mathbf{V}_r$, which are obviously different for each approach.

The $\mathbf{U}_r$ and the $\mathbf{\Sigma}_r$ matrices can be used for computing the vector representation of the new documents into the induced semantic space. The $\mathbf{\Sigma}_r$ matrix contains in its diagonal the singular values; $\mathbf{U}_r$ is composed of rows that represent the $r$-dimensional sub-symbolic, i.e., numerical, mapping in the semantic space of the tokens constituting the vocabulary $V$. Then, given a text chunk $q$, $q$ is sub-symbolically represented by an $m$-dimensional word occurrence vector $\mathbf{x}_q$, from which a vector $\mathbf{v}_q$ is computed with two different procedures depending on which LSA paradigm has been chosen.

In the case of Traditional LSA, $\mathbf{v}_q$ is the Tf-Idf representation BIBREF38 of $\mathbf{x}_q$, obtained by using the same parameters learned during training.

In the case of Statistical LSA, the vector $\mathbf{x}_q$ is transformed into $\mathbf{v}_q$ in the same way as the matrix $\mathbf{N}$ is transformed into the matrix $\mathbf{Q}$, i.e., by normalization followed by an element-wise square root:

$$v_{q,i} = \sqrt{\frac{x_{q,i}}{\sum_{j} x_{q,j}}}$$

Once the appropriate coding $\mathbf{v}_q$ has been computed, an $r$-dimensional vector $\mathbf{d}_q$ representing the sub-symbolic coding of $q$ is then obtained from the vector $\mathbf{v}_q$ by means of the following mapping formula:

$$\mathbf{d}_q = \mathbf{v}_q^\top \mathbf{U}_r \mathbf{\Sigma}_r^{-1}$$
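The folding-in step can be sketched as follows, reusing U_r and S_r from the Statistical LSA sketch above; the helper function is hypothetical and not part of the authors' code.

```python
# Sketch of mapping a new document into the induced r-dimensional semantic space.
import numpy as np

def fold_in(x_q, U_r, S_r):
    """x_q: m-dimensional word-occurrence vector of the new text chunk."""
    total = x_q.sum()
    v_q = np.sqrt(x_q / total) if total > 0 else x_q      # Statistical LSA coding
    return v_q @ U_r @ np.linalg.inv(S_r)                  # d_q = v_q^T U_r S_r^{-1}

x_q = np.random.poisson(0.3, size=500).astype(float)       # toy occurrence vector
d_q = fold_in(x_q, U_r, S_r)
print(d_q.shape)   # (40,)
```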
Supervised learning
The training and test documents are mapped into the semantic spaces induced at the previous step. These vectors, sub-symbolic coding of the documents, are therefore used as inputs to different classifiers to train or test on them. Such classifiers will finally solve a binary classification problem assigning the label 1 (sarcastic) or 0 (nonsarcastic) to a generic document. For this study we have used Support Vector Machines, Logistic Regression, Random Forests, and Gradient boosting as they represent the state of the art for most of the binary classification problems with small datasets. In the following, we recall a brief description of them.
The logistic regressor (LR) is a generalized linear model suitable for binary responses BIBREF39 . In LR the following log-linear model is adopted:

$$\log \frac{p}{1-p} = w_0 + \mathbf{w}^\top \mathbf{x}$$

where $p$ represents the probability of the success outcome. A suitable way of minimizing the so-called empirical risk is the numerical estimation of the coefficients $\mathbf{w}$ by a regularized maximum likelihood procedure:

$$\hat{\mathbf{w}} = \arg\min_{\mathbf{w}} \; \lambda \, \Vert \mathbf{w} \Vert + \sum_{(\mathbf{x}_i, y_i) \in T} \log \left( 1 + e^{-y_i (w_0 + \mathbf{w}^\top \mathbf{x}_i)} \right)$$

where $T$ is the training set, $\Vert \mathbf{w} \Vert$ is the norm of the weights vector used for regularization, which can be either the $L_1$ or the $L_2$ norm, and $\lambda$ is the weight given to the regularization factor. The function above is convex, so it can be minimized even with the simple gradient descent algorithm, but more complex algorithms can be used in order to reduce the convergence time. In this work we use the trust region Newton method proposed by Lin, Weng and Keerthy lin2008trust, as provided by the LIBLINEAR library BIBREF40 .
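As a sketch, training an L1-regularized logistic regressor on the LSA document vectors with the LIBLINEAR solver might look like this in scikit-learn; the C=10 value and balanced class weights mirror the hyperparameters reported in the experimental setup below, while the data are toy placeholders.

```python
# Sketch: L1-regularized logistic regression (LIBLINEAR solver) on LSA document vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression

X_train = np.random.randn(200, 40)           # toy r=40 LSA codings
y_train = np.random.randint(0, 2, size=200)  # 1 = sarcastic, 0 = not sarcastic

clf = LogisticRegression(penalty="l1", C=10, solver="liblinear", class_weight="balanced")
clf.fit(X_train, y_train)
print(clf.predict(X_train[:5]))
```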
A kernel $K$ is any mapping satisfying

$$K(\mathbf{x}, \mathbf{z}) = \langle \phi(\mathbf{x}), \phi(\mathbf{z}) \rangle$$

where $\mathbf{x}$, $\mathbf{z}$ are elements of the input space, and $\phi$ is a mapping from the input space to a new representation space $H$ where an inner product is defined. The function $\phi$ is chosen to be nonlinear, and the dimension of the feature space is taken intentionally greater than the dimension of the input space. These choices give the chance to make the classification problem linearly separable in $H$. Support vector machines (SVMs), also called kernel machines BIBREF41 , are binary linear classifiers that make use of kernels. They search for the optimal hyperplane in the feature space that maximizes the geometric margin, which is the distance of the hyperplane to the nearest training data point of any class. The main advantage of SVM is that it provides a solution to the global optimization problem, thereby reducing the generalization error of the classifier. The formulation of SVM can be easily extended to build a nonlinear classifier by incorporating a kernel of the class $H$, leading to a decision function of the form:

$$f(\mathbf{x}) = \operatorname{sign}\left( \sum_{i} \alpha_i y_i K(\mathbf{x}_i, \mathbf{x}) + b \right)$$
No systematic tools have been developed to automatically identify the optimal kernel for a particular application.
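A brief sketch of a Gaussian (RBF) kernel SVM over the LSA vectors follows; the C=100 value and balanced class weights mirror the setup reported later, while the gamma setting is our own assumption and the data reuse the toy X_train, y_train from the previous sketch.

```python
# Sketch: SVM with Gaussian (RBF) kernel on LSA document vectors.
from sklearn.svm import SVC

svm = SVC(kernel="rbf", C=100, gamma="scale", class_weight="balanced")
svm.fit(X_train, y_train)                 # X_train, y_train as in the previous sketch
print(svm.score(X_train, y_train))
```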
Decision trees BIBREF42 are rooted trees that can be used successfully as classifiers BIBREF43 . Each node of the three represents a binary rule that splits the feature space according to the value of a predictive feature and a path from the root to leaf nodes represents a series of rules that are used to recursively divide the feature space into smaller subspaces, where a class label is assigned. The structure of the tree in terms of split nodes can be learned from data by using several approaches. Random forests BIBREF44 are an ensemble of decision trees, found using the bootstrap sampling technique on the training set. In particular, a fixed number of random samples are extracted with replacement from the training set, and each of them is used as a training set to fit a decision tree. The forest is composed by each of these decision trees, and the final predictions are made by averaging the predictions from all the individual decision trees.
Boosting is another ensemble strategy with the special purpose of improving the combination of a set of weak classifiers. These are chosen to be of very low model complexity such as the case of decision trees with a single split. The general framework of boosting sequentially adds a tree to an ensemble, the new one with the goal of correcting its predecessor. Gradient boosting BIBREF45 uses a gradient-descent like procedure to sequentially improve a tree classifier. This is done by adding to the actual classifier a new decision tree learned from the residual errors made by the predecessor. The final predictions are made by the tree classifier resulting after a fixed number of iterations of the procedure.
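The two tree ensembles can be sketched as follows; the paper reports using XGBoost for gradient boosting, whereas this sketch uses scikit-learn's implementation for simplicity, and the hyperparameters shown are illustrative rather than the authors' settings.

```python
# Sketch: the two tree-ensemble classifiers used in this study.
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

rf = RandomForestClassifier(n_estimators=100, criterion="entropy", random_state=0)
gb = GradientBoostingClassifier(n_estimators=100, random_state=0)

rf.fit(X_train, y_train)                  # X_train, y_train as in the earlier sketches
gb.fit(X_train, y_train)
print(rf.predict(X_train[:5]), gb.predict(X_train[:5]))
```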
Datasets
We have chosen 4 corpora for our experiments, all of them are publicly available and treating the problem as a binary classification: “SarcasmCorpus” (Filatova 2012) , “IAC-Sarcastic” BIBREF46 , which is a subset of Internet Argument Corpus1.0 prepared for sarcasm detection, “irony-context” (Wallace et al. 2014), and “IAC-Sarcastic-v2” (Oraby et al. 2016), which is extracted from the second version of Internet Argument Corpus BIBREF47 . In order to provide a more complete evaluation, we also use the corpus of the shared task “Semeval2018 Task 3A” BIBREF48 .
SarcasmCorpus
Filatova filatova2012irony collected 1254 reviews from Amazon for different kinds of products, of which 437 are sarcastic, and 817 are not sarcastic. The dataset is unbalanced toward the “regular” texts, and this is due both to the policy of Amazon, which explicitly requires sincere reviews and to the peculiarity of sarcasm itself, which is used only in some cases, especially because of the difficulty for humans to recognize it over the internet.
Each review in the corpus consists of the title, author, product name, review text and number of stars, and the review is a stand-alone document referring to a single product. This corpus, like all the others considered in this work, has been entirely hand-labeled by the Amazon Mechanical Turkers, who were asked whether each review contains sarcasm in it. Each text has been presented to 5 Turkers and has been classified as sarcastic when at least three among five workers agreed. The corpus contains INLINEFORM0 distinct tokens, with INLINEFORM1 occurring only in sarcastic reviews, INLINEFORM2 occurring only in regular reviews and INLINEFORM3 occurring in both categories. Buschmeier et al. buschmeier2014impact made an interesting analysis of the corpus by collecting some statistics and publishing the only classification results that are available for it up to now. They extracted 29 task-specific features and combined them with the bag-of-words representation and multiple classifiers. The bag of words resulted to be important for the classification. In fact, for example, they get a poor 50.9% F-score value with logistic regressor without bag-of-words, which is increased to 74% by using it. This result is surely related to the difference in terms used by the two classes, but it also shows that information about the words used in the document is needed for the task.
IAC-Sarcastic
The second dataset we used is the IAC-Sarcastic sub-corpus, which consists of 1995 posts coming from 4forums.com, a classical forum where several topics are discussed. This corpus is actually extracted from the larger Internet Argument Corpus (IAC), containing INLINEFORM0 discussions, INLINEFORM1 posts and INLINEFORM2 words. In IAC there are INLINEFORM3 Quote-Response (Q-R) pairs and INLINEFORM4 three-posts chains that have been manually labeled for several HITs (Human-Intelligence Tasks) by Amazon Mechanical Turk. For each Q-R item, the Turkers were asked to evaluate the response section by considering the quote as a context. One of the HITs regarded the identification of a sarcastic response. As a result, the IAC-Sarcastic Corpus consists of 1995 responses, without any quote, with a binary label that indicates the presence of sarcasm. 998 texts are labeled as sarcastic, and 997 are not, so this is one of the rare balanced datasets for this task. To the best of our knowledge, only the work by Justo, Corcoran, Lukin, Walker, and Torres justo2014 published results on the sarcastic task of the IAC dataset, but the authors made a different sampling of the documents from the one used for IAC-Sarcastic. Thus, our results for this corpus are not comparable with the ones reported in that work.
Irony-context
A third dataset is the one collected in Wallace et al. wallace2014humans. The main goal of that study was to highlight the role of the context of a text to make irony understandable by humans. The dataset is extracted from Reddit by collecting comments from the following six sub-reddits: politics, progressive, conservative, atheism, Christianity, technology, with their respective size of 873, 573, 543, 442, 312 and 277 samples. Each comment has been labeled by three university undergraduates using a browser interface which let them see the context of the comment in the form of previous comments or related pages under request. The label of a comment was selected with a simple majority of 2 out of 3 labelers. For each comment and each labeler, they stored whether the context has been requested and if the labeler changed his mind after having seen it. This allowed the authors to study the correlation between the sarcastic label and the requests for context.
The results allowed the authors to infer that the machines would also need the context for detecting sarcasm, as their model did not predict correctly the texts for which the humans required the context. This is an important cue that should be considered while developing sarcasm detection methods, even though we do not explicitly consider the context of our method. As a result, we cannot expect to obtain high absolute results for this dataset by letting the model observe only the single text.
IAC-Sarcastic-v2
In 2016 a new version of IAC was made available (IACv2) (Abbot et al. 2016), and after some months also the sarcastic sub-corpus was released (Oraby et al. 2016), which is bigger than the first version. It consists of three sub-corpora, among which the bigger one is called “generic”, and it is made of INLINEFORM0 posts per class collected from IACv2. For the creation of this sub-corpus, the authors produced a high-precision classifier for the non-sarcastic class, which helped to filter out many non-sarcastic posts from the original corpus and lower the labeling costs. Then, to have high-quality labeling, they required a majority of 6 out of 9 sarcastic annotations to label a post as sarcastic.
To produce a more diverse corpus, they built two more corpora focused on particular rhetorical figures often associated with sarcasm: rhetorical questions and hyperboles. For both of the sub-corpora, the authors used patterns to recognize posts containing the chosen rhetorical figure from IACv2. Each of the collected posts has been subsequently shown to five AMTs for the sarcastic/not sarcastic annotation. The label is given with simple majority.
The purpose of these two focused sub-corpora is to force classifiers to find some semantic cues which can distinguish sarcastic posts even in the presence of rhetorical figures usually associated with sarcasm. In fact, the presence of hyperboles has been used before as a feature for detecting sarcasm BIBREF49 .
Semeval-2018 Task3 Corpus of Tweets
The International Workshop on Semantic Evaluation Semeval-2018 featured a shared task on verbal irony detection in tweets (Van Hee et al. 2018). The corpus contains a class-balanced training set consisting of INLINEFORM0 tweets, and a test set with 784 tweets. In the test set, only 40% of the instances are ironic. The corpus has been collected from Twitter searching for tweets with the hashtags #irony, #sarcasm and #not. The corpus has been annotated by three students in linguistics who showed a high inter-annotator agreement. After the annotation, INLINEFORM1 tweets out of INLINEFORM2 were ironic and only 604 were not. Thus, an additional set of INLINEFORM3 non-ironic tweets was added to the corpus. Finally, the corpus was split randomly in class-balanced training and test set, but an additional cleaning step for removing ambiguous sentences modified the proportion to 40% ironic.
Experimental setup
We ran four groups of experiments, to assess both the effectiveness of our approach when compared with the approaches we found in the literature and its capability of extracting features that are relevant for sarcasm in a cross-domain scenario. In all cases, we denote with the word model one of the possible combinations of traditional/statistical LSA and a classifier. The classifiers used are Support Vector Machine (SVM), Logistic Regression (Log.Reg), Random Forest (RF) and gradient boosting (XGB).
For the first group of experiments, we evaluated the performance of each of our models on every corpus. We use 10-fold cross-validation and report the mean values of INLINEFORM0 -score, precision, and recall across all the folds. The proportion of the two classes in each fold is equal to the proportion in the whole corpus. Where applicable, we compare our results with existing results in the literature. In addition, we compare with the method presented in Poria et al. cambria2016.
The second group of experiments has been performed on the SemEval 2018 Task 3 dataset (Van Hee et al. 2018). We first find the best LSA dimensionality by 10-fold cross-validation on the training set. Then, we train the models again on the whole training set and evaluate them on the test set for comparison with the participants in the shared task.
The third group of experiments is inter-corpora. For each experiment, we have chosen one corpus as a training set and another one as a test set. This process is performed for all the models and all the corpora pairs. We aim to find whether sarcasm detection is domain-dependent.
Finally, in the fourth group of experiments (union experiments) we perform another 10-fold cross-validation in which all the corpora are concatenated. Each fold contains samples from every corpus proportionally to the size of that corpus. The goal of this experiment is to understand whether simply adding more data, but from different domains, improves the classification performance.
The hyperparameters of the classifiers have been chosen by grid search on SarcasmCorpus with LSA dimensionality 40, and then used for all the reported experiments. We use an SVM with Gaussian kernel, a C value of 100 and INLINEFORM0 ; logistic regression with L1 penalty and C=10; and decision trees with entropy loss. SVM and logistic regression both have balanced class weights to cope with unbalanced datasets.
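A sketch of the in-corpus stratified 10-fold evaluation loop (inducing the semantic space on the training folds only, then training and scoring a classifier) is given below; it reuses the statistical_lsa and fold_in helpers defined in the earlier sketches and is not the released code.

```python
# Sketch: stratified 10-fold cross-validation for the in-corpus experiments.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import precision_recall_fscore_support
from sklearn.svm import SVC

def evaluate_in_corpus(count_matrix, labels, r=40):
    """count_matrix: dense (n_docs x m_tokens) raw counts; labels: numpy array of 0/1."""
    scores = []
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    for train_idx, test_idx in skf.split(count_matrix, labels):
        # Induce the semantic space on the training folds only.
        U_r, S_r, _, _ = statistical_lsa(count_matrix[train_idx].T, r)
        X_tr = np.array([fold_in(x, U_r, S_r) for x in count_matrix[train_idx]])
        X_te = np.array([fold_in(x, U_r, S_r) for x in count_matrix[test_idx]])
        clf = SVC(kernel="rbf", C=100, class_weight="balanced").fit(X_tr, labels[train_idx])
        p, rcl, f1, _ = precision_recall_fscore_support(
            labels[test_idx], clf.predict(X_te), average="binary")
        scores.append((p, rcl, f1))
    return np.mean(scores, axis=0)   # mean precision, recall, F1 over the 10 folds
```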
In-corpus Experiments
In SarcasmCorpus each sample consists of a review title, a review text, a product name and the number of stars given to the product ranging from 1 to 5. Buschmeier et al. buschmeier2014impact showed that the star rating is the most discriminative feature. Thus we performed the experiment both including and not including it. In Table TABREF48 , we refer to “SarcasmCorpus” when the star rating is not used, and “SarcasmCorpus*” when it is used. We use the star rating by simply concatenating it to the document vector produced by LSA. The document vector is computed only from the review texts because in our preliminary experiments we found that the other parts are not useful for the task. Accuracy and F-score values of all classifiers for SarcasmCorpus and SarcasmCorpus* are plotted in Figures FIGREF72 and FIGREF73 , and the best F-scores, with the relative precision and recall, are reported in the two columns SarcasmCorpus and SarcasmCorpus* of Table TABREF48 . The best result from the logistic regression in SarcasmCorpus is INLINEFORM0 which represents a INLINEFORM1 % relative improvement over the INLINEFORM2 reported in the above-mentioned work by Buschmeier et al. buschmeier2014impact. The results from Poria et al. cambria2016 are even higher in terms of F-score, with a relative improvement of INLINEFORM3 , which is due mostly to a much higher recall.
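Concatenating the star rating to the LSA document vector is essentially a one-liner; a small sketch follows, under the assumption that `stars` holds the 1-5 ratings aligned with the document vectors.

```python
# Sketch: SarcasmCorpus* setting, appending the star rating to each LSA document vector.
import numpy as np

doc_vectors = np.random.randn(1254, 40)          # toy LSA codings for the 1254 reviews
stars = np.random.randint(1, 6, size=1254)       # 1-5 star ratings (aligned with the reviews)

X_star = np.hstack([doc_vectors, stars.reshape(-1, 1)])  # (1254 x 41) feature matrix
print(X_star.shape)
```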
Note that the method by Poria et al. cambria2016 also uses features extracted from other datasets for sentiment, emotion and personality classification, as these features are considered to be useful for the task of sarcasm detection. Moreover, as our goal is to propose a baseline, a training time in the order of minutes is an advantage of our model. We report such results as an upper bound, considering that our model does not use additional information from external data.
The best results are obtained using the star labels. In this setting, our best-performing classifiers are better than the INLINEFORM0 F-score value reported by Buschmeier, and our best INLINEFORM1 -score of INLINEFORM2 represents a INLINEFORM3 relative improvement. In this single case of SarcasmCorpus*, the results with the Traditional LSA are all higher than their counterparts with Statistical LSA.
For IAC-Sarcastic we do not have any previously published result to compare with. The only related result is reported in Joshi et al. joshi-sharma-bhattacharyya:2015:ACL-IJCNLP, which use a corpus randomly extracted from IAC containing 752 sarcastic and 752 not sarcastic texts. They report an F-score of INLINEFORM0 (average over a 5-fold), but the text sampling procedure is not specified in the paper. Thus, we prefer to use the sarcastic selection given by the Internet Argument Corpus website which is also a bit larger (998 sarcastic and 997 non-sarcastic texts).
Accuracies and F-scores of all the classifiers at varying T-SVD size are plotted in Figure FIGREF74 , best values of F-score, precision and recall are reported in column IAC-Sarcastic of Table TABREF49 . The best result (F= INLINEFORM0 ) is lower than in SarcasmCorpus, despite IAC-Sarcastic being balanced and larger than SarcasmCorpus. With Traditional LSA the INLINEFORM1 -scores are generally slightly lower, but the precision values are higher.
The results from Poria et al. cambria2016 are significantly higher, suggesting that in this dataset the sarcasm can be detected in most cases with the linguistic features used by their network, independently of the context.
For the irony-context corpus, we used the same 1949 documents selected for the experiments reported in Wallace et al. wallace2014humans. To allow fair comparisons, we used only the texts of the comments, without any contextual information.
The authors report a mean F-score over the five folds of 0.383 by using a bag-of-words representation with 50,000 tokens, plus some other binary features that have proven useful in other works, and an SVM classifier with a linear kernel. Our results are plotted in Figure FIGREF78 and reported in column irony-context of Table TABREF49 , where it is shown how our classifiers clearly outperform the baseline. Our maximum F-score of INLINEFORM0 represents a relative improvement of 20%. Moreover, it is important to highlight the markedly low values obtained on this corpus when compared with the results from the previous corpora. Class skewness is certainly a factor, as the positive samples are just 537 out of 1949 (27.5%). However, since in SarcasmCorpus the sarcastic texts are only 33% of the total and the scores are much higher, skewness alone cannot explain the gap, and there must be other causes. Another reason that can explain the poor results can be found in the diversity of topics, as the texts are extracted from six different forums, and the words used for sarcasm can be highly specific to a given context, both cultural and topical. In Wallace et al. wallace2014humans it is explicitly said that the annotators frequently requested the context for the sarcastic texts. As a consequence, correctly classifying the texts without a context is difficult even for humans. Moreover, the forums from which the posts were extracted are highly controversial, as they regard politics or religion. As a consequence, it is difficult to grasp the sarcasm of a text without knowing the author's opinions.
The results with Traditional LSA are very similar to Statistical LSA, and the real surprise is the incredibly low scores obtained by the random forest and gradient boosting methods.
In this case, we wanted to compare our results against those from Oraby et al. oraby2016creating, which deal with the three sub-corpora separately. However, the results are not directly comparable because, at the time of writing, only half of the corpus had been released, consisting of 3260 posts in the generic sub-corpus, 582 for hyperbole and 850 for rhetorical questions. The three sub-corpora are all balanced.
Results computed on the three sub-corpora are plotted in Figures FIGREF75 , FIGREF76 , FIGREF77 and reported in the last three columns of Table TABREF50 . Despite the difference in data availability, the results are quite encouraging. In fact, we can see that our method reaches an INLINEFORM0 -score of INLINEFORM1 in the generic sub-corpus, slightly better than the previous study. Moreover, it also improves over Oraby et al. (2016) in the other two sub-corpora, although with Traditional LSA.
Nonetheless, these results show that it is possible to achieve very good performance when high-quality labeled corpora are available, even with a limited number of examples.
For the CNN, we have results only in the generic sub-corpus, and this is the only case in which at least one of our models can outperform it in terms of F-score.
SemEval 2018 Task 3A
The last experiment on a single dataset was performed on the settings of SemEval 2018 Task 3A (Van Hee et al. 2018), which is a shared task on a binary classification of irony, which we introduced in Section SECREF47 .
We start by performing 10-fold cross-validation with our classifiers over varying LSA dimensionality to choose the best setting. We used the same set of hyper-parameters used for the previous experiments.
Once we have found the best setting, we train again the model with all the data and predict the classes of the test tweets. We found that we obtain the best results in cross-validation with LSA vectors of size 20, and the results are presented in Table TABREF59 . We list results for four different classifiers, namely logistic regression, support vector machine, gradient boosting and random forest. In this case, we get the best results using random forests, followed by gradient boosting. In particular, random forest obtains a F INLINEFORM0 -score of INLINEFORM1 , which is higher than the 6-th submission. It is worth noting that the submissions that we listed in the Table, except for the baseline, all use approaches based on deep learning. Compared to the unigram SVM baseline used for the shared task (row 11 in table 4), our model with the random forest is clearly better according to all the metrics, while our model with SVM is better in terms of F INLINEFORM2 score but not accuracy.
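The model-selection protocol for the shared task (choose the LSA dimensionality by cross-validation on the training tweets, retrain on all of them, then predict the test set) can be sketched as follows; the function name and candidate dimensionalities are our own assumptions, and the statistical_lsa and fold_in helpers come from the earlier sketches.

```python
# Sketch: choose the LSA dimensionality by CV on the training set, then retrain and predict.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def select_dimensionality(A_train, y_train, candidate_dims=(10, 20, 40, 80)):
    """A_train: dense (n_tweets x m_tokens) counts; y_train: 0/1 irony labels."""
    best_dim, best_f1 = None, -1.0
    for r in candidate_dims:
        U_r, S_r, _, _ = statistical_lsa(A_train.T, r)
        X = np.array([fold_in(x, U_r, S_r) for x in A_train])
        f1 = cross_val_score(RandomForestClassifier(random_state=0), X, y_train,
                             cv=10, scoring="f1").mean()
        if f1 > best_f1:
            best_dim, best_f1 = r, f1
    return best_dim
```

After the selection, the semantic space is induced again with the chosen dimensionality on the full training set, the classifier is refit, and the test tweets are mapped with the same fold-in procedure before prediction.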
The model we provide is certainly not the best one in terms of accuracy, and showing its superiority over all the others is not the goal of this work; however, the best performers, i.e. deep learning networks, involve a high number of parameters and a high computational training cost. Moreover, there are additional interesting notes. First, the submission by BIBREF50 also makes use of deep neural networks but does not get a higher score than our best. Second, the submission by BIBREF51 uses SVMs over syntactic, semantic, and affective features, but is still not better than our best score. The models that showed a clear superiority use deep networks pre-trained on external data to extract more meaningful features. Thus, while their advantage is real, the number of parameters and the amount of data used are much higher.
Inter-corpora Experiments
The third group of experiments is aimed at finding whether sarcasm is domain-dependent, or whether the knowledge acquired over one dataset can be transferred to another. We evaluate the similarity among the datasets by training a model over all the data of a corpus and using a second corpus as a test set. Our best results for every corpus pair are listed in Tables TABREF62 and TABREF63 , where the rows indicate the training set and the columns the test set. Quite interestingly, unlike the in-corpus experiments where the logistic regression works better in some cases, all the top scores that we report for these experiments are obtained by using the SVM classifier.
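The inter-corpora protocol reduces to a loop over ordered corpus pairs; a minimal sketch follows, with a hypothetical `corpora` dictionary, the assumption that all corpora share one vocabulary (so their count matrices have the same columns), and the statistical_lsa and fold_in helpers from the earlier sketches.

```python
# Sketch: inter-corpora experiments, training on one corpus and testing on another.
import numpy as np
from itertools import permutations
from sklearn.metrics import precision_recall_fscore_support
from sklearn.svm import SVC

def inter_corpora(corpora, r=40):
    """corpora: dict name -> (count_matrix, labels), all built over the same vocabulary."""
    results = {}
    for train_name, test_name in permutations(corpora, 2):
        A_tr, y_tr = corpora[train_name]
        A_te, y_te = corpora[test_name]
        U_r, S_r, _, _ = statistical_lsa(A_tr.T, r)      # space induced on the training corpus
        X_tr = np.array([fold_in(x, U_r, S_r) for x in A_tr])
        X_te = np.array([fold_in(x, U_r, S_r) for x in A_te])
        clf = SVC(kernel="rbf", C=100, class_weight="balanced").fit(X_tr, y_tr)
        _, _, f1, _ = precision_recall_fscore_support(y_te, clf.predict(X_te), average="binary")
        results[(train_name, test_name)] = f1
    return results
```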
In Table TABREF62 we find the results for SarcasmCorpus and IAC-Sarcastic used as test sets. For the case of SarcasmCorpus, the F-scores are quite low compared to the in-corpus experiments. In fact, here we obtain the best result of only INLINEFORM0 when IAC-Sarcastic is the training set, which is much lower than the scores of about 70 that we get in the in-corpus experiments (column SarcasmCorpus in Table TABREF48 ). The low results suggest that the sarcasm conveyed by the texts in SarcasmCorpus is somehow different from what we can observe in the other corpora.
When we use IAC-Sarcastic as a test set, we can observe higher scores (column IAC-Sarcastic in Table TABREF62 ), and the F-score of INLINEFORM0 that we obtain by training on IAC-Sarcastic-v2 is comparable to the INLINEFORM1 that is the best result in the in-corpus experiments. Also, the lowest result, which we obtain when training on irony-context, is quite close to the result obtained in the in-corpus experiment, which is unexpected given the poor results obtained in the in-corpus experiments for irony-context (column Irony-Context in Table TABREF49 ). When irony-context is the test set (first three columns of Table TABREF63 ), we can observe again that the F-score obtained by training on IAC-Sarcastic-v2 is higher than the score obtained in the in-corpus experiment. Nonetheless, all the scores for this test set are lower than INLINEFORM2 , with high recall and low precision.
When using IAC-Sarcastic-v2 as the test set (see last three columns of Table TABREF63 ) we can observe F-scores between INLINEFORM0 and INLINEFORM1 and are characterized by a high recall and lower precision. The top F1 score is obtained when using IAC-Sarcastic as a training set, which also corresponds to the highest precision. This represents a further proof in favor of the similarity of the two corpora. The top recall score of INLINEFORM2 is obtained by training on SarcasmCorpus, but the precision is much lower than the other two cases.
Overall, it is worth noting that, for all the experiments, the top results are obtained by training on either IAC-Sarcastic or IAC-Sarcastic-v2, while SarcasmCorpus is always better than irony-context. Considering that the quality of the features depends on the quality of the data and of the annotation, we suppose that the quality of the first two datasets is higher than the quality of irony-context, while the data contained in SarcasmCorpus are too different from the other corpora. A deeper analysis of the corpora can be found in the discussion (Section SECREF71 ).
Union Experiments
The last group of experiments we ran has the goal of understanding whether the combination of data coming from different sources can influence positively the final score. For this purpose, as anticipated in Section SECREF51 , we computed 10-folds of each of the four corpora used for the first group of experiments, and used as a training set the concatenation of 9 folds of every corpus, and as a validation set the remaining single folds of each corpus.
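The union folds can be built by splitting each corpus separately and concatenating the per-corpus folds, as in the following sketch; the function and names are ours and not taken from the released code.

```python
# Sketch: building the union folds, where each fold keeps the per-corpus proportions.
from sklearn.model_selection import StratifiedKFold

def union_folds(corpora, n_splits=10, seed=0):
    """corpora: dict name -> (X, y); yields per-corpus (train, validation) indices per fold."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    splits = {name: list(skf.split(X, y)) for name, (X, y) in corpora.items()}
    for k in range(n_splits):
        train = {name: splits[name][k][0] for name in corpora}   # 9 folds of every corpus
        val = {name: splits[name][k][1] for name in corpora}     # the held-out fold of each corpus
        yield train, val

# The training set of fold k is then the concatenation of the `train` indices over all corpora,
# and the model is validated separately on each corpus' held-out fold.
```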
From Tables TABREF64 , TABREF65 we can observe that these results are not higher overall with respect to the inter-corpora results. The only exceptions are SarcasmCorpus, where the results are almost 20 F-score points higher than those obtained in the inter-corpora; and IAC-v2, where the gradient boosting (XGB) obtains 2 F-score points more than the top score in the inter-corpora results.
The results on SarcasmCorpus are still lower than the in-corpus results, and the scores of random forest and gradient boosting are much lower than the other two methods. This is further evidence that adding diverse data is not helpful, or is actually harmful, for classifying SarcasmCorpus.
The general trend of this block of experiments is that our classifiers are not able to leverage data from different domains in order to improve global results. In-domain data represent the best choice even if the data amount is lower.
Discussion
In this section, we discuss our results from a more general point of view. We start by briefly discussing the content of the different corpora. Then we try to relate the results of the different types of experiments. Finally, we point out the limits of our experiments for the type of documents we worked with.
The corpora we used for our experiments are characterized by high internal variability in style, as each corpus consists of texts from thousands of different authors. Despite the number of authors, there are some factors that depend on the type of text and the medium. For instance, the irony-context, IAC Sarcastic, and IAC Sarcastic v2 corpora are made of posts collected from online forums, which are mostly about politics. Most of the texts are extracted from longer arguments, and thus the style is informal and in general with aggressive tones.
In Tables TABREF67 , TABREF68 and TABREF69 we show some randomly selected samples from these corpora. As it is apparent from the samples, the posts have a target to attack, who can be another user or the subject of the discussion. Table TABREF67 shows some examples from IAC-Sarcastic. In all the examples the author attacks another user or his opinions. For instance, the first and the third sarcastic examples make sarcasm about the Bible to attack another user's religious ideas, while in the second example the author uses sarcasm to expose a fallacious position of another user and do not appear rude on his side. By contrast, the non-sarcastic examples are much more direct about their meaning. A similar pattern can be found in the examples from IAC Sarcastic v2 (table TABREF69 ). Sarcasm is again used to attack a person (first example) or his/her opinions (second example), maybe religious. The third example shows that also in this corpus some sentences are hard to classify. In this case, the information that we get is that the target has ultraconservative ideas, but it is not easy to grasp the sarcasm. The examples from irony-context (in table TABREF68 ) are much more difficult to grasp without knowing contextual information. For instance, the first sarcastic example can be either sarcastic or regular according to the political opinion of the author. It is sarcastic if the author is a Republican, it is not sarcastic (but would appear strange to write) if the author is a Democrat. The second and the third examples are hard to classify without knowing the subject of the conversation. The same issue of missing a broader context also appears in the non-sarcastic examples, and the third examples can easily be interpreted as sarcastic by humans. In SarcasmCorpus the situation is different as there is no argument ongoing, and the sarcasm is made against products that the author did not like. In this case, there are many references to the external world and the writing is more passionate in its negative stance. Some samples are shown in Table TABREF66 . The sarcastic examples in table TABREF66 all express a negative sentiment and also use negative words. Sarcasm is used within this negative reviews to attack the product in a more creative way and make the text more fun than a usual negative review. The non-sarcastic reviews, on the other side, give a description of the product and their experience with it, with regular forms of expressing the sentiment (“are also a great feature”, “It is a great little camera”). We suppose that this difference in style is the main obstacle to correct classification of SarcasmCorpus instances in the cross-corpora experiments.
We now discuss the relations among the results of the different experiments to gain some further insights into the sarcastic content of our corpora. From the in-corpus experiments, we obtain good results on SarcasmCorpus, which is the only corpus containing Amazon reviews. Unfortunately, when we train our models in a cross-corpora or all-corpora setting, our results drop dramatically, especially in the cross-corpora case. These results mean that the sarcasm in SarcasmCorpus is conveyed through features that are not present in the other corpora. This is especially true when considering that in the inter-corpora experiments, using SarcasmCorpus as a training set in all cases yields results that are only better than the ones obtained when using irony-context as a training set.
The results on irony-context show that this corpus is much more difficult to classify than the others, as was also pointed out in the paper that presented it (Wallace et al. 2014), which highlights how the human annotators needed to read the context to be sure about the sarcastic posts. In the inter-corpora experiments, the results when training on irony-context are the worst for all the test sets, but only by a few points of F-score, whereas at first we could have expected dramatically lower results. We take this as a strong suggestion that the types of texts present in irony-context are similar to the ones present in IAC-Sarcastic-v2, but of lower quality. It is also further evidence that the dataset annotators do not treat sarcasm and irony as two different linguistic phenomena.
The two versions of IAC-Sarcastic have proved to be the easiest to classify when using other corpora for training. The best result in IAC-Sarcastic is obtained in the Union experiment (see Tables TABREF64 , TABREF65 ), and thus it benefits from the higher amount of data, especially from the data from IAC-Sarcastic-v2, as can be observed from the cross-corpora results (Table TABREF62 ).
By contrast, the best results on IAC-Sarcastic-v2 are obtained with the in-corpus experiments, while all the results obtained in the inter-corpora experiments are clearly worse. Among the inter-corpora experiments, training the model with IAC-Sarcastic yields an F-score of INLINEFORM0, which corresponds to a relative decrement of INLINEFORM1 with respect to the top score of the in-corpus experiments on IAC-Sarcastic-v2. It is interesting to note that one cause of the decrement may also be the corpus size: IAC-Sarcastic contains only 1995 texts, while IAC-Sarcastic-v2 contains 3260.
One final remark concerns the absolute scores obtained in the in-corpus experiments. We can notice that in SarcasmCorpus the F-score can go beyond INLINEFORM0, and up to INLINEFORM1 by adding the star rating as a feature. The high result can be explained by the peculiarity of this corpus, where sarcasm is present mostly in negative reviews, and the star label is the single best indicator of sarcasm BIBREF49. The other corpora consist of texts that belong to a thread of forum posts. Sometimes it is reasonable to classify such posts as sarcastic or not out of context, but in many cases it is impossible even for humans (see examples in Table TABREF68). In fact, the low F-score in irony-context is due to low precision, which is an indicator of high similarity between the positive and negative classes. Moreover, low precision combined with higher recall is a pattern present in most of the experiments, albeit with higher absolute numbers. The combination of high recall and lower precision suggests that the dubious texts are classified as sarcastic more often than not.
Conclusions
In this work, we have tackled the problem of automatic sarcasm detection from a data-driven point of view. More in detail, we have used a set of labeled datasets and applied distributional semantics followed by some machine learning approaches in order to provide a baseline for the literature on this problem. We do not differentiate between sarcasm and irony because they are not easily distinguishable even for human experts. Experiments have been carried out on four different corpora containing texts from online reviews or forums, and on the corpus used for the shared task on irony detection on Twitter proposed at SemEval 2018. We have shown experimentally that some basic methods can outperform, on all the datasets, other methods based on bag-of-words and linguistic features, thus representing a solid baseline. With our experiments that train the models on one corpus and test them on the other corpora, we have confirmed experimentally that the annotators also tend not to distinguish between irony and sarcasm. By contrast, major differences can be found according to the text domains, i.e., review vs. political forum. The domain difference can also prevent the method from benefiting from more data when they are too diverse from the test data. As future work, we will try to enrich the distributional semantics approaches with linguistic features in order to perform fairer comparisons with more recent and advanced methods. Furthermore, we will exploit more classical AI methodologies (e.g., ontologies, reasoners, common-sense reasoning techniques) to deduce the context and understand the concepts expressed in a sentence, also exploiting features like hashtags and emojis to improve the overall performance of the approach.
37db7ba2c155c2f89fc7fb51fffd7f193c103a34 | 37db7ba2c155c2f89fc7fb51fffd7f193c103a34_0 | Q: What classical machine learning algorithms are used?
Text: Introduction
Affective computing has raised a great deal of interest in the last years. Picard picard1995affective introduced it as a computing paradigm that relates to, arises from, or influences emotions, letting computers be both more effective in assisting humans and successful in making decisions.
Language, as a conceptual process, plays a key role in the perception of verbal irony and sarcasm, two well-known forms of figurative language (FL) BIBREF0 Traditionally, irony as a figure of speech can be intended as “saying something while meaning something else” BIBREF1 . A comprehensive overview of different theories of irony has been illustrated in Attardo attardo07. Understanding if irony and sarcasm are the same linguistic phenomenon or not is still an unresolved question in literature BIBREF2 . Some authors consider irony a more general form of sarcasm, while others tend to consider it a separate linguistic issue BIBREF3 , BIBREF4 . According to the theory of sarcastic irony, sarcasm and irony are very similar, but sarcasm has a specific victim who is the object of the sarcastic statement, while irony does not have such a target BIBREF5 . More commonly, the noun “sarcasm” is understood as “saying the opposite of what one is thinking”, usually with a negative intention. Henceforth, due to the different nuances of irony and sarcasm, and the multiple interpretations of these two concepts, we do not differentiate between them, and, like many researchers, e.g., BIBREF6 , we will use the term “sarcasm” to refer to both verbal irony and sarcasm.
A sarcastic sentence may include features that characterize a positive sentiment, but that insinuate a negative one BIBREF7 , BIBREF8 . It is clear that sarcastic sentences are more difficult for an algorithm to process than non-sarcastic assertions; as a matter of fact, both the situation and the mental state of the speaker are factors that can determine the sarcastic content of a sentence.
A system capable of detecting sarcasm correctly would greatly improve the performance of sentiment analysis systems BIBREF9 , BIBREF10 , BIBREF6 , BIBREF11 , especially considering the big data available nowadays due to the exponential growth of social platforms. Unfortunately, sarcasm detection in written texts is a difficult task even for humans BIBREF12 .
Moreover, some people simply do not understand sarcasm, and there are sentences meant as sarcastic by the author that are not recognized as such by the readers.
We focus our attention on the possibility of detecting sarcastic sentences automatically from written text only, and from the reader's point of view. Managing this task without any knowledge of relevant contextual features, like prosody, is very hard.
The problem of sarcasm detection has been tackled with machine learning approaches, made possible by the availability of several annotated corpora. In the literature we can find two main categories of such corpora: automatically annotated and manually annotated.
The automatically annotated corpora are usually collected from the microblogging platform Twitter BIBREF13 , BIBREF14 by exploiting the final hashtag of tweets. For instance, a tweet is labeled as sarcastic only if it ends with a hashtag such as #sarcasm or #irony. The same cue is used in Davidov, Tsur and Rappoport davidov2010semi to produce a silver standard for evaluating their model.
Manually annotated corpora are collected from a more diversified range of social media, such as Amazon reviews BIBREF15 , Reddit (Wallace et al. 2014) or online forums BIBREF16 , BIBREF17 , and then labeled by hiring people in the Amazon Mechanical Turk portal. When using crowdsourcing, the annotation procedures are complex and involve, among others, a stage for ensuring that the workers understood the task and they are performing correctly, and a quality assurance stage for removing texts for which a high discrepancy between the annotators arises.
In this work we have tackled the problem of sarcasm detection by trying to use an entirely data-driven approach, exploiting a distributional semantics representation by inducing a semantic space and then applying a set of classifiers to classify the texts as being sarcastic or not sarcastic. With “fully data-driven” we mean approaches that are capable of finding connections between input text and class labels without using any a priori knowledge about the features that characterize a sarcastic statement.
In particular, we do not define “irony” or “sarcasm”, nor do we use any definition. We simply rely on sets of sentences with binary labels for sarcasm detection, taking for granted that the labels correctly identify a sarcastic sentence.
It is worthwhile to point out that in this work we do not create any dataset: we simply exploit the labels of datasets that have already been produced by others, trying to give a baseline for the sarcasm detection task.
The contribution of this work can be summed up in three key points:
To reach these goals, we exploit a Distributional Semantics approach, whose aim is to give a representation of words in a continuous vector space BIBREF18 , BIBREF19 , where word similarity is coded in an unsupervised manner. This representation is useful for building models with little, or no, a-priori knowledge about the task BIBREF20 .
Distributional semantics is a research field that concerns methodologies aimed at determining semantic similarities between linguistic items. The key idea is based on the hypothesis that words co-occurring in similar contexts tend to have similar meaning BIBREF21 , BIBREF22 . Distributional semantics deals with the automatic construction of semantic models induced from large unstructured textual corpora, and it exploits vector space models to represent the meaning of a word BIBREF23 . Many methods can be applied to construct distributional models. They range from statistical models to machine learning ones BIBREF24 , BIBREF19 , BIBREF25 , BIBREF26 . Among these techniques, Latent Semantic Analysis (LSA) is a methodology for building distributional semantic spaces that extracts statistical relations between words which co-occur in a given context through the use of the Truncated Singular Value Decomposition (T-SVD). In this work we explored and studied the possibility of building a data-driven model in the field of sarcasm detection exploiting the well-known Latent Semantic Analysis (LSA) paradigm both in its traditional formulation given by Landauer, Foltz and Laham landauer1998introduction and by using the Truncated Singular Value Decomposition (T-SVD) as a statistical estimator as illustrated in Pilato and Vassallo pilato2015tsvd.
Both approaches have been used to create data-driven semantic spaces where documents and, generally, text chunks can be mapped.
The theory behind LSA states that the “psychological similarity between any two words is reflected in the way they co-occur in small sub-samples of language” (Landauer et al. 1998).
We have chosen to exploit the LSA paradigm since it is a well-known distributional semantics paradigm capable of modeling many human cognitive abilities; furthermore, it has many potential practical applications BIBREF27 , BIBREF18 , BIBREF28 , BIBREF29 . Moreover, it has been demonstrated in Pilato and Vassallo pilato2015tsvd that Truncated Singular Value Decomposition (T-SVD), as used in LSA, can be interpreted as a statistical estimator, giving a robust theoretical interpretation to the Latent Semantic Analysis paradigm. Many researchers have successfully applied this technique for typical Semantic Computing applications, such as natural language understanding, cognitive modeling, speech recognition, smart indexing, anti-spam filters, dialogue systems, and other Statistical Natural Language processing problems BIBREF30 , BIBREF31 , BIBREF32 . Moreover, Latent Semantic Analysis has been successfully used for inducing data-driven “conceptual” spaces BIBREF33 . For the aforementioned reasons, we have chosen this approach as a baseline for the detection of sarcasm in texts.
Furthermore, our study makes use of four machine learning methods that have been used on four manually annotated, publicly available corpora.
The experimental results show that our data-driven approach consisting of LSA followed by a classifier can establish models that outperform the published results on two of the corpora; additionally, it produces competitive results for the other corpora that we used for our evaluation.
The next section describes the state of the art in the field, Section SECREF3 describes the Semantic Representation and the Machine Learning methods used in the study. Section SECREF4 introduces the datasets used for the experiments. Section SECREF5 summarizes the experimental results, Section SECREF6 is for the final conclusions and remarks.
The code and the datasets used for the experiments are available on github.
Related works
The problem of sarcasm detection has been tackled using a wide range of supervised or semi-supervised techniques applied to corpora from different social media sources.
In the present work, we do not collect a new corpus for sarcasm detection, but sarcastic corpus annotation has received much attention in the literature. Most of the works have used unsupervised or semi-supervised approaches in order to reduce the cost of the annotation, while partially sacrificing the data quality. One of the first approaches was introduced by Tsur, Davidov and Rappoport tsur2010icwsm for a corpus extracted from Twitter and further developed in Davidov et al. davidov2010semi with a corpus consisting of Amazon reviews. This semi-supervised approach uses “YAHOO! BOSS” API web search for collecting INLINEFORM0 utterances similar to the ones in a small initial labeled seed set. It was the first work to show that automatically-crawled data are useful for the task of sarcasm detection. Most of the works have been pursued using data extracted from Twitter, as it is relatively easy to extract ironic or sarcastic tweets using the search by hashtag. In fact, in Twitter, the restricted number of characters allowed encourages to mark the ironic intent with a hashtag like #irony or #sarcasm to prevent ambiguities. The hashtag is usually removed from the tweets and used as a label for the silver standard. Moreover, the first studies on Twitter data showed that the task is quite difficult also for human beings. González-Ibánez et al. gonzalez2011identifying collected a corpus of INLINEFORM1 tweets balanced between sarcastic, positive sentiment and negative sentiment. They presented a part of the corpus to human judges, who achieved low agreement and low accuracy. Reyes et al. reyes2013multidimensional collected a corpus using 4 hashtags that identify four different categories, irony, education, humor, and politics, with INLINEFORM2 tweets each. The same corpus was used in a later work BIBREF34 . Their results suggest that detecting sarcasm in full documents is easier than in single sentences because of the presence of a context, but in both cases, it remains a difficult task also for humans that often have a low agreement. The specific case of positive sentiment and a negative situation, which is the most typical sarcastic situation, has also been analyzed BIBREF35 . In particular, authors have found that less than half of the tweets ending with the hashtag #sarcastic are recognized as sarcastic by humans after removing the hashtag. Bharti, Babu, and Jena bharti2015parsing proposed two algorithms with the goal to find, respectively, tweets with contrast in sentiment and situation, and tweets starting with interjections. They also found that the label distribution does not correlate perfectly with the hashtag distribution, e.g., only INLINEFORM3 out of INLINEFORM4 tweets ending with #sarcastic are actually sarcastic. Farias, Patti and Rosso farias16 proposed a method that uses affective content to classify sarcastic tweets, and show that it outperforms preceding methods in several Twitter benchmarks. Since classifying tweets by using only the text is a difficult task also for humans, other works proposed new methods capable of exploiting other kind of data, like the identity of the author or the thread of the tweet. Bamman and Smith bamman2015contextualized augmented the feature vectors with features describing the author of the tweet and the user to which the tweet is addressed, obtaining significant improvements in accuracy. They also found that the hashtags #sarcasm and #sarcastic are mainly used when the audience is not known. 
Wang, Wu, Wang and Ren wang2015twitter use a sequential classifier for classifying tweets taking into account the previous responses, thus improving the performance concerning a simple multi-class classifier.
Amir, Wallace, Lyu, Carvalho and Silva amir2016modelling used the dataset collected in Bamman et al. bamman2015contextualized (which was not completely available) for training a deep learning model that could represent users with user embeddings and this method seems to outperform the method from Bamman and colleagues. Sarcasm classification on Twitter involves different modelling techniques that perform better when taking into account the user and the thread history of a Tweet. Our work focuses on the task of classifying a single document written by a single author. Thus, we focus mainly on different kinds of datasets. Buschmeier, Cimiano and Klinger buschmeier2014impact have studied the corpus introduced in Filatova filatova2012irony by extracting a high number of features about typographic cues that can represent sarcasm, and used different classification methods obtaining results that vary significantly according to the classifier. They found that the single most important feature is the star rating of the review, and this happens because sarcastic reviews are more probable when a user did not like the product.
Wallace et al. wallace2014humans created a corpus from Reddit posts, for which they also stored context information, such as the post that is answered. The authors proposed a method that uses the bag of words and other features from previous studies for building an SVM classifier that gets very low results. Moreover, a correlation is found between posts for which the humans require the context and sarcastic posts. This can be explained by considering that the chosen sub-reddits are about religion or politics, and they are thus very prone to controversial discussions. Consequently, to understand the ironic intent of a post it is quite important to know the author position on the topic and also the posts they are answering to.
Joshi, Sharma and Bhattacharyya joshi-sharma-bhattacharyya:2015:ACL-IJCNLP used features for capturing intrinsic and extrinsic incongruity in texts and outperforms two previous methods both in tweets and in forum posts. These works represent valuable means of comparison for the present work. We show that an approach based only on distributional semantics is competitive with other approaches using more elaborated feature engineering, even when the data amount is quite small. Distributional semantics became popular in NLP thanks to the availability of good quality word embeddings BIBREF19 , and are introduced by design in deep learning models. In sarcasm detection, distributional semantics has been used to serve different roles. Ghosh, Guo, and Muresan ghosh2015sarcastic have adopted word embeddings to disambiguate a literal use of single words from a sarcastic use. Joshi, Tripathi, Patel, Bhattacharyya and Carman joshi2016word use word embeddings to compute incongruities among words using them as additional features for methods selected from the literature. Our work differs from these as we use LSA instead of word embeddings, and distributional semantics is the only kind of features we use. Ghosh and Veale ghosh2016 use LSA to extend the list of hashtags to find more sarcastic tweets on Twitter and use a deep neural network to perform the actual classification. Our work differs from theirs as we use LSA to compute the vectorial representation of documents and we do not perform tweet crawling. Poria, Cambria, Hazarika and Vij cambria2016 train a convolutional neural network to classify sarcasm in tweets. They extend the neural network with features extracted from other datasets for sentiment, emotion and personality classification, as these features are considered to be useful for the task of sarcasm detection.
Data-Driven Induction of Semantic Spaces and Traditional Classifiers
We focused our research on the role that fully data-driven models can play in detecting sarcasm. To reach this goal, we exploited the Latent Semantic Analysis paradigm both in its traditional formulation (Landauer et al. 1998) and by using the Truncated Singular Value Decomposition (T-SVD) as a statistical estimator as shown in Pilato et al. pilato2015tsvd. We have chosen to use the LSA paradigm to exploit a well-known and well-founded approach for inducing semantic spaces that have been effectively used in natural language understanding, cognitive modeling, speech recognition, smart indexing, and other statistical natural language processing problems. The sub-symbolic codings of documents obtained by the aforementioned LSA-based approaches are then used as inputs by a set of classifiers to evaluate the differences of performances obtained by using different machine learning approaches and testing them on different sarcasm-detection datasets.
The full work-flow, composed of the steps described in the following subsections (text preprocessing, data-driven induction of the semantic space, mapping of the documents into the space, and supervised classification), does not require any expert or domain knowledge.
Preprocessing of text
The first step of preprocessing for texts is the tokenization using spaces, punctuation and special characters (e.g., $, , @) as separators. Thus one token is a sequence of alphanumeric characters or of punctuation symbols. The set of all the extracted tokens constitutes a “vocabulary” named $V$.

The sequences of tokens, each representing a single document in the training set, are used to generate a word-document co-occurrence raw matrix $\mathbf{A}$, where each cell $a_{ij}$ contains the number of times the token $t_i$ appears in the document $d_j$. Let $m$ be the number of tokens, i.e., $m = |V|$, and let $n$ be the number of documents of the corpus used for computing the matrix $\mathbf{A}$; the dimensionality of $\mathbf{A}$ is $m \times n$.
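As a concrete illustration, the following minimal sketch (not the authors' released code) builds the vocabulary and the count matrix $\mathbf{A}$; the lowercasing and the exact tokenization rule are simplifying assumptions.

```python
# Minimal sketch of the preprocessing step: tokenize on alphanumeric runs and
# single punctuation/special symbols, build the vocabulary V, and form the
# m x n word-document count matrix A.
import re
from collections import Counter
import numpy as np

def tokenize(text):
    # a token is a sequence of alphanumeric characters or a punctuation symbol
    return re.findall(r"[A-Za-z0-9]+|[^\sA-Za-z0-9]", text.lower())

def build_count_matrix(documents):
    tokenized = [tokenize(d) for d in documents]
    vocab = {tok: i for i, tok in enumerate(sorted({t for doc in tokenized for t in doc}))}
    A = np.zeros((len(vocab), len(documents)))   # m tokens x n documents
    for j, doc in enumerate(tokenized):
        for tok, count in Counter(doc).items():
            A[vocab[tok], j] = count
    return A, vocab

A, vocab = build_count_matrix(["What a great product...", "It broke after one day!"])
print(A.shape)  # (m, n)
```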
Data-driven induction of semantic spaces by means of LSA-oriented paradigms
The matrix $\mathbf{A}$ is used and further processed to induce proper semantic spaces where terms and documents can be mapped. To generate these semantic spaces, we have used both the traditional LSA algorithm (Deerwester et al. 1990, Landauer et al. 1998) and the approach which uses T-SVD as a statistical estimator, as proposed in Pilato et al. pilato2015tsvd. For the sake of brevity, we call this last approach Statistical LSA to differentiate it from the Traditional LSA. It is worthwhile to point out that, in the Latent Semantic Analysis paradigm (i.e., both “traditional” and “statistical”), the corpus used for building the semantic space plays a key role in performance. As a matter of fact, large and heterogeneous corpora may introduce more noise or too much domain-specific information, decreasing the accuracy of the induced models BIBREF36.
The traditional LSA is a procedure that has been used mainly for information retrieval (Deerwester et al. 1990). The previously described matrix $\mathbf{A}$ is used for computing a Tf-Idf (Term-Frequency Inverse-Document Frequency) matrix $\mathbf{M}$ BIBREF37. Let $q$ be the rank of $\mathbf{M}$. The following factorization, called Singular Value Decomposition (SVD), holds for the matrix $\mathbf{M}$:

$$\mathbf{M} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{T}$$

where $\mathbf{U}$ is a $m \times q$ orthogonal matrix, $\mathbf{V}$ is a $n \times q$ orthogonal matrix and $\boldsymbol{\Sigma}$ is a $q \times q$ diagonal matrix, whose diagonal elements $\sigma_1, \ldots, \sigma_q$ are called singular values of $\mathbf{M}$. It can be shown that the singular value decomposition of $\mathbf{M}$ is unique up to the order of the singular values and of the corresponding columns of $\mathbf{U}$ and $\mathbf{V}$, so there is no loss of generality if we suppose that $\sigma_1, \ldots, \sigma_q$ are ranked in decreasing order.

Let $r$ be an integer such that $r < q$, let $\mathbf{U}_r$ be the matrix obtained from $\mathbf{U}$ by removing its last $q-r$ columns, $\mathbf{V}_r$ the matrix obtained from $\mathbf{V}$ in the same manner and $\boldsymbol{\Sigma}_r$ the diagonal matrix obtained from $\boldsymbol{\Sigma}$ by suppressing both its last $q-r$ rows and $q-r$ columns. $\mathbf{U}_r$ is the matrix containing the $r$-dimensional vector representation of the words and $\mathbf{V}_r$ is the matrix containing the $r$-dimensional vector representation of the documents. It can be shown (Deerwester et al. 1990) that the matrix:

$$\mathbf{M}_r = \mathbf{U}_r\boldsymbol{\Sigma}_r\mathbf{V}_r^{T}$$

is the best rank-$r$ approximation to $\mathbf{M}$ according to the Frobenius distance. $\mathbf{M}_r$ is called the reconstructed matrix. The process by which $\mathbf{M}_r$ is obtained from $\mathbf{M}$ is called Truncated Singular Value Decomposition (T-SVD). The book by Golub and Van Loan golub1996matrix provides further details about the Singular Value Decomposition technique.
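The Traditional LSA step can be sketched as follows, using scikit-learn's TfidfTransformer and NumPy's SVD as stand-ins for the implementation described above; the choice of $r$ is left to the caller.

```python
# Sketch of Traditional LSA: Tf-Idf weighting of A followed by a rank-r T-SVD.
import numpy as np
from sklearn.feature_extraction.text import TfidfTransformer

def traditional_lsa(A, r):
    # TfidfTransformer expects documents on the rows, hence the transposes
    M = TfidfTransformer().fit_transform(A.T).toarray().T   # m x n Tf-Idf matrix
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    U_r, S_r, V_r = U[:, :r], np.diag(s[:r]), Vt[:r, :].T
    M_r = U_r @ S_r @ V_r.T        # best rank-r approximation of M
    return U_r, S_r, V_r, M_r

# rows of V_r are the r-dimensional document vectors used as classifier inputs
```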
The traditional Latent Semantic Analysis based on T-SVD is one of the possible methods to infer data-driven models. Furthermore, one of its major drawbacks, namely the lack of a sound statistical interpretation, has recently been overcome in Pilato et al. pilato2015tsvd, where the authors presented a statistical explanation of this paradigm.
According to this interpretation, the T-SVD algorithm, as used in the Latent Semantic Analysis paradigm, acts as an estimator, which conveys statistically significant information from the sample to the model.
To briefly sum-up the procedure, we recall here the concepts of probability amplitude and probability distribution associated with a matrix as they have been defined in Pilato et al. pilato2015tsvd.
Let $n$, $m$ be two positive integers and let $\mathbb{R}$ be the set of real numbers. Given a $n \times m$ matrix $\mathbf{G} = [g_{ij}]$ with $i \in \{1, \ldots, n\}$, $j \in \{1, \ldots, m\}$, where at least one of its components $g_{ij}$ is positive, we define a set $Z_G$, composed of all the pairs $(i,j)$ that identify the positive components of $\mathbf{G}$, i.e.:

$$Z_G = \{(i,j) \,:\, g_{ij} > 0\}$$

Subsequently, we define the probability amplitude associated with $\mathbf{G}$ as the $n \times m$ matrix $\boldsymbol{\Psi}_G$ resulting from the mapping $\mathcal{A}(\cdot)$:

$$\boldsymbol{\Psi}_G = \mathcal{A}(\mathbf{G})$$

whose elements $\psi_{ij}$ are computed as:

$$\psi_{ij} = \begin{cases} \sqrt{\dfrac{g_{ij}}{\sum_{(p,q) \in Z_G} g_{pq}}} & \text{if } (i,j) \in Z_G \\ 0 & \text{otherwise} \end{cases}$$

so that $\forall (i,j)$ it is $\psi_{ij} \ge 0$ and $\sum_{i,j} \psi_{ij}^{2} = 1$.

We also define the probability distribution associated with a matrix $\boldsymbol{\Psi}$ as the $n \times m$ matrix resulting from the mapping $\mathcal{P}(\cdot)$:

$$\mathcal{P}(\boldsymbol{\Psi}) = \big[\psi_{ij}^{2}\big]$$

whose elements are the squares of the elements of $\boldsymbol{\Psi}$, i.e. $p_{ij} = \psi_{ij}^{2}$. The method starts with a raw data matrix $\mathbf{A}$ consisting of positive values. In our study the raw data matrix $\mathbf{A}$ is the term-document co-occurrence matrix. From $\mathbf{A}$ a real-valued normalized matrix $\mathbf{B}$ is computed by dividing every element by the sum of all elements of $\mathbf{A}$:

$$b_{ij} = \frac{a_{ij}}{\sum_{p,q} a_{pq}}$$

If we call $\boldsymbol{\Psi}$ the matrix:

$$\boldsymbol{\Psi} = \big[\psi_{ij}\big], \qquad \psi_{ij} = \sqrt{b_{ij}}$$

the matrix $\boldsymbol{\Psi}$ can be decomposed with the SVD technique:

$$\boldsymbol{\Psi} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{T}$$

and its best rank-$r$ decomposition $\boldsymbol{\Psi}_r$ is obtained by applying the T-SVD technique, which minimizes the Frobenius distance $\lVert\boldsymbol{\Psi}_r - \boldsymbol{\Psi}\rVert_F$, given $r$:

$$\boldsymbol{\Psi}_r = \mathbf{U}_r\boldsymbol{\Sigma}_r\mathbf{V}_r^{T}$$

Even if $\boldsymbol{\Psi}_r$ is not a probability distribution, its computation makes it possible to identify, without any further addition of external information, the probability distribution we are looking for. As shown in Pilato et al. pilato2015tsvd, it theoretically suffices to compute the probability amplitude associated to $\boldsymbol{\Psi}_r$, i.e. $\mathcal{A}(\boldsymbol{\Psi}_r)$, and consequently to calculate the probability distribution $\mathcal{P}\big(\mathcal{A}(\boldsymbol{\Psi}_r)\big)$ associated to $\mathcal{A}(\boldsymbol{\Psi}_r)$. The aforementioned Frobenius distance $\lVert\boldsymbol{\Psi}_r - \boldsymbol{\Psi}\rVert_F$ constitutes an upper bound to the Hellinger distance between the sample probability distribution $\mathbf{B}$ and the probability distribution estimated by the procedure.
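The Statistical LSA variant differs only in the weighting applied before the T-SVD; a minimal sketch under the same assumptions is given below.

```python
# Sketch of Statistical LSA: normalize the raw counts (B), take the elementwise
# square root (probability amplitude Psi), then apply the rank-r T-SVD.
import numpy as np

def statistical_lsa(A, r):
    B = A / A.sum()                       # sample probability distribution
    Psi = np.sqrt(B)                      # probability amplitude
    U, s, Vt = np.linalg.svd(Psi, full_matrices=False)
    U_r, S_r, V_r = U[:, :r], np.diag(s[:r]), Vt[:r, :].T
    Psi_r = U_r @ S_r @ V_r.T             # best rank-r approximation of Psi
    return U_r, S_r, V_r, Psi_r
```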
Mapping new documents to the semantic space
Both LSA approaches illustrated in the previous subsections provide us with the three matrices $\mathbf{U}_r$, $\boldsymbol{\Sigma}_r$ and $\mathbf{V}_r$, which are obviously different for each approach.

The $\mathbf{U}_r$ and the $\boldsymbol{\Sigma}_r$ matrices can be used for computing the vector representation of new documents in the induced semantic space. The $\boldsymbol{\Sigma}_r$ matrix contains in its diagonal the singular values; $\mathbf{U}_r$ is composed of rows that represent the $r$-dimensional sub-symbolic, i.e., numerical, mapping in the semantic space of the tokens constituting the vocabulary $V$. Then, a given text chunk $d$ is sub-symbolically represented by a $m$-dimensional word occurrence vector $\mathbf{w}_d$, from which a vector $\mathbf{q}_d$ is computed with two different procedures, depending on which LSA paradigm has been chosen.

In the case of Traditional LSA, $\mathbf{q}_d$ is the Tf-Idf representation BIBREF38 of $\mathbf{w}_d$, computed by using the same parameters learned during training.

In the case of the Statistical LSA, the $\mathbf{w}_d$ vector is transformed into $\mathbf{q}_d$ in the same way as the matrix $\mathbf{A}$ is transformed into the matrix $\boldsymbol{\Psi}$:

$$q_{d,i} = \sqrt{\frac{w_{d,i}}{\sum_{j} w_{d,j}}}$$

Once the appropriate coding $\mathbf{q}_d$ of $d$ has been computed, an $r$-dimensional vector $\mathbf{d}_r$ representing the sub-symbolic coding of $d$ is obtained from $\mathbf{q}_d$ by means of the following mapping formula:

$$\mathbf{d}_r = \mathbf{q}_d^{T}\, \mathbf{U}_r\, \boldsymbol{\Sigma}_r^{-1}$$
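A possible folding-in routine for the statistical variant is sketched below; the normalization of the occurrence vector and the mapping formula follow the reconstruction given above and should be read as an assumption rather than the authors' exact implementation.

```python
# Sketch of mapping a new document into the induced r-dimensional space.
import numpy as np

def fold_in_statistical(w_d, U_r, S_r):
    # w_d: m-dimensional word-occurrence vector of the new text chunk
    q_d = np.sqrt(w_d / max(w_d.sum(), 1.0))     # same transform used for A -> Psi
    return q_d @ U_r @ np.linalg.inv(S_r)        # r-dimensional sub-symbolic coding
```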
Supervised learning
The training and test documents are mapped into the semantic spaces induced at the previous step. These vectors, sub-symbolic coding of the documents, are therefore used as inputs to different classifiers to train or test on them. Such classifiers will finally solve a binary classification problem assigning the label 1 (sarcastic) or 0 (nonsarcastic) to a generic document. For this study we have used Support Vector Machines, Logistic Regression, Random Forests, and Gradient boosting as they represent the state of the art for most of the binary classification problems with small datasets. In the following, we recall a brief description of them.
The logistic regressor (LR) is a generalized linear model suitable for binary responses BIBREF39. In LR the following log-linear model is adopted:

$$\log\frac{p}{1-p} = \boldsymbol{\beta}^{T}\mathbf{x}$$

where $p$ represents the probability of the success outcome and $\mathbf{x}$ is the input feature vector. A suitable way of minimizing the so-called empirical risk is the numerical estimation of the $\boldsymbol{\beta}$ coefficients by a maximum likelihood procedure:

$$E(\boldsymbol{\beta}) = -\sum_{(\mathbf{x}_i,\, y_i) \in T} \Big[\, y_i \log p(\mathbf{x}_i) + (1 - y_i)\log\big(1 - p(\mathbf{x}_i)\big) \Big] + \lambda \lVert\boldsymbol{\beta}\rVert$$

where $T$ is the training set, $\lVert\boldsymbol{\beta}\rVert$ is the norm of the weights vector used for regularization, which can be either the $L_1$ or the $L_2$ norm, and $\lambda$ is the weight given to the regularization factor. The function above is convex, so it can be minimized even with the simple gradient descent algorithm, but more complex algorithms can be used in order to reduce the convergence time. In this work we use the trust region Newton method proposed by Lin, Weng and Keerthi lin2008trust, as provided by the LIBLINEAR library BIBREF40.

A kernel $K$ is any mapping satisfying

$$K(\mathbf{x}, \mathbf{z}) = \langle \phi(\mathbf{x}), \phi(\mathbf{z}) \rangle$$

where $\mathbf{x}$, $\mathbf{z}$ are elements in the input space and $\phi$ is a mapping from the input space to a new representation space $F$ where an inner product is defined. The function $\phi$ is chosen to be nonlinear, and the dimension of the feature space is taken intentionally greater than the dimension of the input space. These choices could give the chance to make the classification problem linearly separable in $F$. Support vector machines (SVMs), also called kernel machines BIBREF41, are binary linear classifiers that make use of kernels. They search for the optimal hyperplane $h$ in the feature space that maximizes the geometric margin, which is the distance of the hyperplane to the nearest training data point of any class. The main advantage of SVM is that it provides a solution to the global optimization problem, thereby reducing the generalization error of the classifier. The formulation of SVM can be easily extended to build a nonlinear classifier by incorporating a kernel of the class $H$:

$$h(\mathbf{x}) = \operatorname{sign}\Big(\sum_{i=1}^{N} \alpha_i\, y_i\, K(\mathbf{x}_i, \mathbf{x}) + b\Big)$$
No systematic tools have been developed to automatically identify the optimal kernel for a particular application.
Decision trees BIBREF42 are rooted trees that can be used successfully as classifiers BIBREF43 . Each node of the tree represents a binary rule that splits the feature space according to the value of a predictive feature, and a path from the root to the leaf nodes represents a series of rules that are used to recursively divide the feature space into smaller subspaces, where a class label is assigned. The structure of the tree in terms of split nodes can be learned from data by using several approaches. Random forests BIBREF44 are an ensemble of decision trees, found using the bootstrap sampling technique on the training set. In particular, a fixed number of random samples are extracted with replacement from the training set, and each of them is used as a training set to fit a decision tree. The forest is composed of these decision trees, and the final predictions are made by averaging the predictions of all the individual decision trees.
Boosting is another ensemble strategy with the special purpose of improving the combination of a set of weak classifiers. These are chosen to be of very low model complexity such as the case of decision trees with a single split. The general framework of boosting sequentially adds a tree to an ensemble, the new one with the goal of correcting its predecessor. Gradient boosting BIBREF45 uses a gradient-descent like procedure to sequentially improve a tree classifier. This is done by adding to the actual classifier a new decision tree learned from the residual errors made by the predecessor. The final predictions are made by the tree classifier resulting after a fixed number of iterations of the procedure.
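For reference, the four classifiers can be instantiated as follows with scikit-learn; the hyperparameters are the ones reported later in the experimental setup (Gaussian-kernel SVM with C=100, L1-regularized logistic regression with C=10, balanced class weights, entropy split criterion), and GradientBoostingClassifier is used here as a stand-in for the XGBoost implementation.

```python
# Illustrative instantiation of the four classifiers used on the LSA vectors.
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

def make_classifiers():
    return {
        "SVM": SVC(kernel="rbf", C=100, class_weight="balanced"),
        "Log.Reg": LogisticRegression(penalty="l1", C=10, solver="liblinear",
                                      class_weight="balanced"),
        "RF": RandomForestClassifier(criterion="entropy"),
        "XGB": GradientBoostingClassifier(),   # stand-in for xgboost
    }

def fit_all(X_train, y_train):
    return {name: clf.fit(X_train, y_train) for name, clf in make_classifiers().items()}
```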
Datasets
We have chosen 4 corpora for our experiments, all of which are publicly available and treat the problem as a binary classification: “SarcasmCorpus” (Filatova 2012), “IAC-Sarcastic” BIBREF46, which is a subset of Internet Argument Corpus 1.0 prepared for sarcasm detection, “irony-context” (Wallace et al. 2014), and “IAC-Sarcastic-v2” (Oraby et al. 2016), which is extracted from the second version of the Internet Argument Corpus BIBREF47. In order to provide a more complete evaluation, we also use the corpus of the shared task “SemEval-2018 Task 3A” BIBREF48.
SarcasmCorpus
Filatova filatova2012irony collected 1254 reviews from Amazon for different kinds of products, of which 437 are sarcastic, and 817 are not sarcastic. The dataset is unbalanced toward the “regular” texts, and this is due both to the policy of Amazon, which explicitly requires sincere reviews and to the peculiarity of sarcasm itself, which is used only in some cases, especially because of the difficulty for humans to recognize it over the internet.
Each review in the corpus consists of the title, author, product name, review text and number of stars, and the review is a stand-alone document referring to a single product. This corpus, like all the others considered in this work, has been entirely hand-labeled by the Amazon Mechanical Turkers, who were asked whether each review contains sarcasm in it. Each text has been presented to 5 Turkers and has been classified as sarcastic when at least three among five workers agreed. The corpus contains INLINEFORM0 distinct tokens, with INLINEFORM1 occurring only in sarcastic reviews, INLINEFORM2 occurring only in regular reviews and INLINEFORM3 occurring in both categories. Buschmeier et al. buschmeier2014impact made an interesting analysis of the corpus by collecting some statistics and publishing the only classification results that are available for it up to now. They extracted 29 task-specific features and combined them with the bag-of-words representation and multiple classifiers. The bag of words resulted to be important for the classification. In fact, for example, they get a poor 50.9% F-score value with logistic regressor without bag-of-words, which is increased to 74% by using it. This result is surely related to the difference in terms used by the two classes, but it also shows that information about the words used in the document is needed for the task.
IAC-Sarcastic
The second dataset we used is the IAC-Sarcastic sub-corpus, which consists of 1995 posts coming from 4forums.com, a classical forum where several topics are discussed. This corpus is actually extracted from the larger Internet Argument Corpus (IAC), containing INLINEFORM0 discussions, INLINEFORM1 posts and INLINEFORM2 words. In IAC there are INLINEFORM3 Quote-Response (Q-R) pairs and INLINEFORM4 three-posts chains that have been manually labeled for several HITs (Human-Intelligence Tasks) by Amazon Mechanical Turk. For each Q-R item, the Turkers were asked to evaluate the response section by considering the quote as a context. One of the HITs regarded the identification of a sarcastic response. As a result, the IAC-Sarcastic Corpus consists of 1995 responses, without any quote, with a binary label that indicates the presence of sarcasm. 998 texts are labeled as sarcastic, and 997 are not, so this is one of the rare balanced datasets for this task. To the best of our knowledge, only the work by Justo, Corcoran, Lukin, Walker, and Torres justo2014 published results on the sarcastic task of the IAC dataset, but the authors made a different sampling of the documents from the one used for IAC-Sarcastic. Thus, our results for this corpus are not comparable with the ones reported in that work.
Irony-context
A third dataset is the one collected in Wallace et al. wallace2014humans. The main goal of that study was to highlight the role of the context of a text to make irony understandable by humans. The dataset is extracted from Reddit by collecting comments from the following six sub-reddits: politics, progressive, conservative, atheism, Christianity, technology, with their respective size of 873, 573, 543, 442, 312 and 277 samples. Each comment has been labeled by three university undergraduates using a browser interface which let them see the context of the comment in the form of previous comments or related pages under request. The label of a comment was selected with a simple majority of 2 out of 3 labelers. For each comment and each labeler, they stored whether the context has been requested and if the labeler changed his mind after having seen it. This allowed the authors to study the correlation between the sarcastic label and the requests for context.
The results allowed the authors to infer that the machines would also need the context for detecting sarcasm, as their model did not predict correctly the texts for which the humans required the context. This is an important cue that should be considered while developing sarcasm detection methods, even though we do not explicitly consider the context of our method. As a result, we cannot expect to obtain high absolute results for this dataset by letting the model observe only the single text.
IAC-Sarcastic-v2
In 2016 a new version of IAC was made available (IACv2) (Abbot et al. 2016), and after some months also the sarcastic sub-corpus was released (Oraby et al. 2016), which is bigger than the first version. It consists of three sub-corpora, among which the bigger one is called “generic”, and it is made of INLINEFORM0 posts per class collected from IACv2. For the creation of this sub-corpus, the authors produced a high-precision classifier for the non-sarcastic class, which helped to filter out many non-sarcastic posts from the original corpus and lower the labeling costs. Then, to have high-quality labeling, they required a majority of 6 out of 9 sarcastic annotations to label a post as sarcastic.
To produce a more diverse corpus, they built two more corpora focused on particular rhetorical figures often associated with sarcasm: rhetorical questions and hyperboles. For both of the sub-corpora, the authors used patterns to recognize posts containing the chosen rhetorical figure from IACv2. Each of the collected posts has been subsequently shown to five AMTs for the sarcastic/not sarcastic annotation. The label is given with simple majority.
The purpose of these two focused sub-corpora is to force classifiers to find some semantic cues which can distinguish sarcastic posts even in the presence of rhetorical figures usually associated with sarcasm. In fact, the presence of hyperboles has been used before as a feature for detecting sarcasm BIBREF49 .
Semeval-2018 Task3 Corpus of Tweets
The International Workshop on Semantic Evaluation Semeval-2018 featured a shared task on verbal irony detection in tweets (Van Hee et al. 2018). The corpus contains a class-balanced training set consisting of INLINEFORM0 tweets, and a test set with 784 tweets. In the test set, only 40% of the instances are ironic. The corpus has been collected from Twitter searching for tweets with the hashtags #irony, #sarcasm and #not. The corpus has been annotated by three students in linguistics who showed a high inter-annotator agreement. After the annotation, INLINEFORM1 tweets out of INLINEFORM2 were ironic and only 604 were not. Thus, an additional set of INLINEFORM3 non-ironic tweets was added to the corpus. Finally, the corpus was split randomly in class-balanced training and test set, but an additional cleaning step for removing ambiguous sentences modified the proportion to 40% ironic.
Experimental setup
We ran four groups of experiments, to assess both the effectiveness of our approach when compared with the approaches we found in the literature and its capability of extracting features that are relevant for sarcasm in a cross-domain scenario. In all cases, we denote with the word model one of the possible combinations of classic/statistical LSA and a classifier. The classifiers used are Support Vector Machine (SVM), Logistic Regression (Log.Reg), Random Forest (RF) and gradient boosting (XGB).
For the first group of experiments, we evaluated the performance of each of our models on every corpus. We use 10-fold cross-validation and report the mean values of F-score, precision, and recall over all the folds. The proportion of the two classes in each fold is equal to the proportion in the whole corpus. Where applicable, we compare our results with existing results in the literature. In addition, we compare with the method presented in Poira et al. cambria2016.
The second group of experiments has been performed on the SemEval 2018 Task 3 dataset (Van Hee et al. 2018). We first find the best LSA dimensionality by 10-fold cross-validation on the training set. Then, we train the models again on the whole training set and evaluate them on the test set for comparison with the participants in the shared task.
The third group of experiments is inter-corpora. For each experiment, we have chosen one corpus as a training set and another one as a test set. This process is performed for all the models and all the corpus pairs. We aim to find out whether sarcasm detection is domain-dependent.
Finally, in the fourth group of experiments (union experiments) we perform another 10-fold in which all the corpora are concatenated. Each fold contains samples from every corpus proportionally to the size of that corpus. The goal of this experiment is to understand whether simply adding more data, but from different domains, improves the classification performance.
The hyperparameters of the classifiers have been chosen by grid search on SarcasmCorpus with LSA dimensionality 40, and then used for all the reported experiments. We use SVM with Gaussian kernel, C value of 100, INLINEFORM0 logistic regression with penalty L1 and C=10 and decision tree with entropy loss. SVM and logistic regression both have balanced class weights to cope with unbalanced datasets.
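Putting the setup together, the in-corpus evaluation can be sketched as follows; the sketch assumes that the LSA features X have already been computed, whereas in the actual experiments the semantic space is induced on the training folds only, and stratification preserves the class proportions as described above.

```python
# Sketch of the in-corpus evaluation: stratified 10-fold CV with mean P/R/F.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import precision_recall_fscore_support

def in_corpus_cv(model, X, y, n_splits=10, seed=0):
    scores = []
    for tr, te in StratifiedKFold(n_splits, shuffle=True, random_state=seed).split(X, y):
        model.fit(X[tr], y[tr])
        p, r, f, _ = precision_recall_fscore_support(y[te], model.predict(X[te]),
                                                     average="binary")
        scores.append((p, r, f))
    return np.mean(scores, axis=0)   # mean precision, recall, F-score
```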
In-corpus Experiments
In SarcasmCorpus each sample consists of a review title, a review text, a product name and the number of stars given to the product, ranging from 1 to 5. Buschmeier et al. buschmeier2014impact showed that the star rating is the most discriminative feature. Thus we performed the experiment both including and not including it. In Table TABREF48, we refer to “SarcasmCorpus” when the star rating is not used, and to “SarcasmCorpus*” when it is used. We use the star rating by simply concatenating it to the document vector produced by LSA. The document vector is computed only from the review texts because in our preliminary experiments we found that the other parts are not useful for the task. Accuracy and F-score values of all classifiers for SarcasmCorpus and SarcasmCorpus* are plotted in Figures FIGREF72 and FIGREF73, and the best F-scores, with the relative precision and recall, are reported in the two columns SarcasmCorpus and SarcasmCorpus* of Table TABREF48. The best result from the logistic regression in SarcasmCorpus is INLINEFORM0, which represents a INLINEFORM1 % relative improvement with respect to the INLINEFORM2 reported in the above-mentioned work by Buschmeier et al. buschmeier2014impact. The results from Poira et al. cambria2016 are even higher in terms of F-score, with a relative improvement of INLINEFORM3, which is due mostly to a much higher recall.
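The star-rating feature is used exactly as described, by appending it as one extra dimension to the LSA document vectors; a short sketch:

```python
# Sketch of the SarcasmCorpus* setting: append the star rating to the LSA vectors.
import numpy as np

def add_star_rating(X_lsa, stars):
    # X_lsa: (n_docs, r) LSA document vectors; stars: ratings from 1 to 5
    return np.hstack([X_lsa, np.asarray(stars, dtype=float).reshape(-1, 1)])
```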
Note that the method by Poira et al. cambria2016 uses also features extracted from other datasets for sentiment, emotion and personality classification, as these features are considered to be useful for the task of sarcasm detection. Moreover, as our goal is to propose a baseline, the training time in the order of minutes is an advantage of our model. We report such results as an upper bound considering that our model does not use additional information from external data.
The best results are obtained using the star labels. In this setting, our best-performing classifiers are better than the INLINEFORM0 F-score value reported by Buschmeier, and our best F-score of INLINEFORM2 represents a INLINEFORM3 relative improvement. In this single case of SarcasmCorpus*, the results with the Traditional LSA are all higher than their counterparts with Statistical LSA.
For IAC-Sarcastic we do not have any previously published result to compare with. The only related result is reported in Joshi et al. joshi-sharma-bhattacharyya:2015:ACL-IJCNLP, which use a corpus randomly extracted from IAC containing 752 sarcastic and 752 not sarcastic texts. They report an F-score of INLINEFORM0 (average over a 5-fold), but the text sampling procedure is not specified in the paper. Thus, we prefer to use the sarcastic selection given by the Internet Argument Corpus website which is also a bit larger (998 sarcastic and 997 non-sarcastic texts).
Accuracies and F-scores of all the classifiers at varying T-SVD size are plotted in Figure FIGREF74, and the best values of F-score, precision and recall are reported in the column IAC-Sarcastic of Table TABREF49. The best result (F = INLINEFORM0) is lower than in SarcasmCorpus, despite IAC-Sarcastic being balanced and larger than SarcasmCorpus. With Traditional LSA the F-scores are generally slightly lower, but the precision values are higher.
The results from Poira et al. cambria2016 are significantly higher, suggesting that in this dataset the sarcasm can be detected in most cases with the linguistic features used by their network independently from the context.
For the irony-context corpus, we used the same 1949 documents selected for the experiments reported in Wallace et al. wallace2014humans. To allow fair comparisons, we used only the texts of the comments, without any contextual information.
The authors report a mean F-score over the five folds of 0.383 by using a bag-of-words representation with 50,000 tokens, plus some other binary features that have proven useful in other works, and an SVM classifier with a linear kernel. Our results are plotted in Figure FIGREF78 and reported in the column irony-context of Table TABREF49, which shows how our classifiers clearly outperform the baseline. Our maximum F-score of INLINEFORM0 represents a relative improvement of 20%. Moreover, it is important to highlight the remarkably low values obtained on this corpus when compared with the results from the previous corpora. This is partly due to the high skewness between the classes; in fact, the positive samples are just 537 out of 1949 (27.5%). However, considering that in SarcasmCorpus the sarcastic texts are only 33% of the total, we suppose there are other causes as well. Another reason that can explain the poor results is the diversity of topics, as the texts are extracted from six different forums, and the words used for sarcasm can be highly specific to a given context, both cultural and topical. Wallace et al. wallace2014humans explicitly report that annotators requested the context far more often for the sarcastic texts. As a consequence, correctly classifying the texts without a context is difficult even for humans. Moreover, the forums from which the posts were extracted are highly controversial, as they regard politics or religion. As a consequence, it is difficult to grasp the sarcasm of a text without knowing the author's opinions.
The results with Traditional LSA are very similar to Statistical LSA, and the real surprise is the incredibly low scores obtained by the random forest and gradient boosting methods.
In this case, we wanted to compare our results against those from Oraby et al. oraby2016creating, which deal with the three sub-corpora separately. However, they are not directly comparable because, at the moment in which we report these results, only half of the corpus has been released, consisting of 3260 posts in the generic sub-corpus, 582 in the hyperbole sub-corpus and 850 in the rhetorical-questions sub-corpus. The three sub-corpora are all balanced.
Results computed on the three sub-corpora are plotted in Figures FIGREF75, FIGREF76, FIGREF77 and reported in the last three columns of Table TABREF50. Despite the difference in data availability, the results are quite encouraging. In fact, we can see that our method reaches an F-score of INLINEFORM1 in the generic sub-corpus, slightly better than the previous study. Moreover, it also improves over Oraby et al. (2016) in the other two sub-corpora, but using Traditional LSA.
Nonetheless, these results show that it is possible to achieve very good performance when high-quality labeled corpora are available, even with a limited number of examples.
For the CNN, we have results only in the generic sub-corpus, and this is the only case in which at least one of our models can outperform it in terms of F-score.
SemEval 2018 Task 3A
The last experiment on a single dataset was performed on the settings of SemEval 2018 Task 3A (Van Hee et al. 2018), which is a shared task on a binary classification of irony, which we introduced in Section SECREF47 .
We start by performing 10-fold cross-validation with our classifiers over varying LSA dimensionality to choose the best setting. We used the same set of hyper-parameters used for the previous experiments.
Once we have found the best setting, we train the model again on all the training data and predict the classes of the test tweets. We found that we obtain the best results in cross-validation with LSA vectors of size 20, and the results are presented in Table TABREF59. We list results for four different classifiers, namely logistic regression, support vector machine, gradient boosting and random forest. In this case, we get the best results using random forests, followed by gradient boosting. In particular, the random forest obtains an F-score of INLINEFORM1, which is higher than that of the 6th-ranked submission. It is worth noting that the submissions listed in the table, except for the baseline, all use approaches based on deep learning. Compared to the unigram SVM baseline used for the shared task (row 11 in Table 4), our model with the random forest is clearly better according to all the metrics, while our model with the SVM is better in terms of F-score but not accuracy.
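This selection-and-retraining procedure can be sketched as follows; `lsa_features` is a hypothetical helper that induces the semantic space of dimensionality r on the training tweets and folds in the test tweets, and the candidate dimensionalities and the random-forest settings are illustrative.

```python
# Sketch of the SemEval 2018 Task 3A protocol: choose the LSA dimensionality by
# 10-fold CV on the training set, then retrain on all training data and predict
# the test tweets.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

def run_semeval(train_docs, y_train, test_docs, lsa_features, dims=(10, 20, 40, 80)):
    best_dim, best_f1 = None, -1.0
    for r in dims:
        X_tr, _ = lsa_features(train_docs, test_docs, r)
        f1 = np.mean(cross_val_score(RandomForestClassifier(), X_tr, y_train,
                                     cv=10, scoring="f1"))
        if f1 > best_f1:
            best_dim, best_f1 = r, f1
    X_tr, X_te = lsa_features(train_docs, test_docs, best_dim)
    return RandomForestClassifier().fit(X_tr, y_train).predict(X_te)
```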
Surely the model we provide is not the best one in terms of accuracy, and showing its superiority over all the others is not the goal of this work; however, the best performers, i.e. deep learning networks, involve a high number of parameters and a high computational training cost. Moreover, there are some additional interesting notes. First, the submission by BIBREF50 also makes use of deep neural networks but does not get a higher score than our best. Second, the submission by BIBREF51 uses SVMs over syntactic, semantic, and affective features, but is still not better than our best score. The models that showed a clear superiority use deep networks pre-trained on external data to extract more meaningful features. Thus, while their advantage is real, the number of parameters and the amount of data they use are much higher.
Inter-corpora Experiments
This group of experiments is aimed at finding out whether sarcasm is domain-dependent, or whether the knowledge acquired over one dataset can be transferred to another. We evaluate the similarity among the datasets by training a model over all the data of one corpus and using a second corpus as a test set. Our best results for every corpus pair are listed in Tables TABREF62 and TABREF63, where the rows indicate the training set and the columns the test set. Quite interestingly, unlike the in-corpus experiments, where logistic regression works better in some cases, all the top scores that we report for these experiments are obtained by using the SVM classifier.
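A minimal sketch of the inter-corpora protocol is given below, assuming one precomputed feature matrix and label vector per corpus (in the actual experiments the semantic space is induced on the training corpus and the test corpus is folded in).

```python
# Sketch of the inter-corpora protocol: train on one corpus, test on another.
from sklearn.metrics import f1_score

def inter_corpora(corpora, make_model):
    # corpora: dict name -> (X, y); make_model: factory returning a fresh classifier
    results = {}
    for train_name, (X_tr, y_tr) in corpora.items():
        model = make_model().fit(X_tr, y_tr)
        for test_name, (X_te, y_te) in corpora.items():
            if test_name != train_name:
                results[(train_name, test_name)] = f1_score(y_te, model.predict(X_te))
    return results
```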
In Table TABREF62 we find the results for SarcasmCorpus and IAC-Sarcastic used as test sets. For the case of SarcasmCorpus, the F-scores are quite low compared to the in-corpus experiments. In fact, here we obtain the best result of only INLINEFORM0 when IAC-Sarcastic is the training set, which is much lower than the scores of about 70 that we get in the in-corpus experiments (column SarcasmCorpus in Table TABREF48). The low results suggest that the sarcasm conveyed by the texts in SarcasmCorpus is somehow different from what we can observe in the other corpora.
When we use IAC-Sarcastic as a test set, we can observe higher scores (column IAC-Sarcastic in Table TABREF62), and the F-score of INLINEFORM0 that we obtain by training on IAC-Sarcastic-v2 is comparable to the INLINEFORM1 which is the best result in the in-corpus experiments. Also, the lower result, which we obtain when training on irony-context, is quite close to the result obtained in the in-corpus experiment, which is unexpected given the poor results obtained in the in-corpus experiments for irony-context (column Irony-Context in Table TABREF49). When irony-context is the test set (first three columns of Table TABREF63), we can observe again that the F-score obtained by training on IAC-Sarcastic-v2 is higher than the score obtained in the in-corpus experiment. Nonetheless, all the scores for this test set are lower than INLINEFORM2, with high recalls and low precisions.
When using IAC-Sarcastic-v2 as the test set (see last three columns of Table TABREF63 ) we can observe F-scores between INLINEFORM0 and INLINEFORM1 , characterized by high recall and lower precision. The top F1 score is obtained when using IAC-Sarcastic as the training set, which also corresponds to the highest precision. This is further evidence of the similarity of the two corpora. The top recall score of INLINEFORM2 is obtained by training on SarcasmCorpus, but the precision is much lower than in the other two cases.
Overall, it is worth noting that, for all the experiments, the top results are obtained by training on either IAC-Sarcastic or IAC-Sarcastic-v2, while training on SarcasmCorpus always works better than training on irony-context. Considering that the quality of the features depends on the quality of the data and of the annotation, we suppose that the quality of the first two datasets is higher than that of irony-context, while the data contained in SarcasmCorpus are too different from the other corpora. A deeper analysis of the corpora can be found in the discussion (Section SECREF71 ).
Union Experiments
The last group of experiments has the goal of understanding whether the combination of data coming from different sources can positively influence the final score. For this purpose, as anticipated in Section SECREF51 , we computed 10 folds of each of the four corpora used for the first group of experiments, used as a training set the concatenation of 9 folds of every corpus, and used as validation sets the remaining single fold of each corpus.
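A rough sketch of how such union folds could be built follows; it is only illustrative (Python is an assumption on our part), and it assumes that each corpus is represented by an array of pre-computed document vectors X with a label array y.

```python
# Illustrative sketch of the union experiments: every corpus is split into 10
# stratified folds; at each round the training set is the concatenation of 9
# folds from every corpus, and the held-out fold of each corpus is kept as a
# separate validation set.
import numpy as np
from sklearn.model_selection import StratifiedKFold


def union_folds(corpora, n_splits=10, seed=0):
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    splits = {name: list(skf.split(X, y)) for name, (X, y) in corpora.items()}
    for k in range(n_splits):
        X_train, y_train, held_out = [], [], {}
        for name, (X, y) in corpora.items():
            tr_idx, te_idx = splits[name][k]
            X_train.append(X[tr_idx])
            y_train.append(y[tr_idx])
            held_out[name] = (X[te_idx], y[te_idx])
        yield np.concatenate(X_train), np.concatenate(y_train), held_out
```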
From Tables TABREF64 and TABREF65 we can observe that these results are not, overall, higher than the inter-corpora results. The only exceptions are SarcasmCorpus, where the results are almost 20 F-score points higher than those obtained in the inter-corpora setting, and IAC-v2, where gradient boosting (XGB) obtains 2 F-score points more than the top inter-corpora score.
The results on SarcasmCorpus are still lower than the in-corpus results, and the scores of random forest and gradient boosting are much lower than the other two methods. This is further evidence that adding diverse data is not helpful, or is actually harmful, for classifying SarcasmCorpus.
The general trend of this block of experiments is that our classifiers are not able to leverage data from different domains in order to improve global results. In-domain data represent the best choice even if the data amount is lower.
Discussion
In this section, we discuss our results from a more general point of view. We start by briefly discussing the content of the different corpora. Then we try to relate the results of the different types of experiments. Finally, we detect the limits of our experiments for the type of documents we worked with.
The corpora we used for our experiments are characterized by high internal variability in style, as each corpus consists of texts from thousands of different authors. Despite the number of authors, there are some factors that depend on the type of text and the medium. For instance, the irony-context, IAC Sarcastic, and IAC Sarcastic v2 corpora are made of posts collected from online forums, which are mostly about politics. Most of the texts are extracted from longer arguments, and thus the style is informal and in general with aggressive tones.
In Tables TABREF67 , TABREF68 and TABREF69 we show some randomly selected samples from these corpora. As is apparent from the samples, the posts have a target to attack, which can be another user or the subject of the discussion. Table TABREF67 shows some examples from IAC-Sarcastic. In all the examples the author attacks another user or their opinions. For instance, the first and the third sarcastic examples use sarcasm about the Bible to attack another user's religious ideas, while in the second example the author uses sarcasm to expose another user's fallacious position without appearing rude. By contrast, the non-sarcastic examples are much more direct about their meaning. A similar pattern can be found in the examples from IAC Sarcastic v2 (Table TABREF69 ). Sarcasm is again used to attack a person (first example) or their opinions (second example), possibly religious ones. The third example shows that also in this corpus some sentences are hard to classify. In this case, the information that we get is that the target has ultraconservative ideas, but it is not easy to grasp the sarcasm. The examples from irony-context (in Table TABREF68 ) are much more difficult to grasp without knowing contextual information. For instance, the first sarcastic example can be either sarcastic or regular according to the political opinion of the author: it is sarcastic if the author is a Republican, and it is not sarcastic (although it would be strange to write) if the author is a Democrat. The second and the third examples are hard to classify without knowing the subject of the conversation. The same issue of a missing broader context also appears in the non-sarcastic examples, and the third example can easily be interpreted as sarcastic by humans. In SarcasmCorpus the situation is different, as there is no ongoing argument and the sarcasm is directed against products that the author did not like. In this case, there are many references to the external world and the writing is more passionate in its negative stance. Some samples are shown in Table TABREF66 . The sarcastic examples in Table TABREF66 all express a negative sentiment and also use negative words. Sarcasm is used within these negative reviews to attack the product in a more creative way and make the text more fun than a usual negative review. The non-sarcastic reviews, on the other hand, give a description of the product and of the author's experience with it, with regular ways of expressing the sentiment (“are also a great feature”, “It is a great little camera”). We suppose that this difference in style is the main obstacle to the correct classification of SarcasmCorpus instances in the cross-corpora experiments.
We now discuss the relations among the results of the different experiments to gain some further insight into the sarcastic content of our corpora. From the in-corpus experiments, we obtain good results on SarcasmCorpus, which is the only corpus containing Amazon reviews. Unfortunately, when we train our models in a cross-corpora or all-corpora setting, our results drop dramatically, especially in the cross-corpora case. These results mean that the sarcasm in SarcasmCorpus is conveyed through features that are not present in the other corpora. This is especially true considering that, in the inter-corpora experiments, using SarcasmCorpus as a training set yields results that are better only than the ones obtained when using irony-context as a training set.
The results on irony-context show that this corpus is much more difficult to classify than the others, as was also pointed out in the paper that presented it (Wallace et al. 2014), which highlights how the human annotators needed to read the context to be sure about the sarcastic posts. In the inter-corpora experiments, the results when training on irony-context are the worst for all the test sets, but only by a few points of F-score, whereas at first we might have expected dramatically lower results. We take this as a strong suggestion that the types of texts present in irony-context are similar to the ones present in IAC-Sarcastic-v2, but of lower quality. As a consequence, this is further evidence that the dataset annotators do not treat sarcasm and irony as two different linguistic phenomena.
The two versions of IAC-Sarcastic have proved to be the easiest to classify when using other corpora for training. The best result in IAC-Sarcastic is obtained in the Union experiment (see Tables TABREF64 , TABREF65 ), and thus it benefits from the higher amount of data, especially from the data from IAC-Sarcastic-v2, as can be observed from the cross-corpora results (Table TABREF62 ).
By contrast, the best results on IAC-Sarcastic-v2 are obtained with the in-corpus experiments, while all the results obtained in the inter-corpora experiments are clearly worse. Among the inter-corpora experiments, training the model with IAC-Sarcastic results in an F-score of INLINEFORM0 , which means a relative decrease of INLINEFORM1 with respect to the top score of the in-corpus experiments on IAC-Sarcastic-v2. It is interesting to note that one cause of the decrease can also be the size of the corpora: IAC-Sarcastic contains only 1995 texts, while IAC-Sarcastic-v2 contains 3260.
One final remark is about the absolute scores obtained in the in-corpus experiments. We can note that in SarcasmCorpus the F-score can go beyond INLINEFORM0 , and up to INLINEFORM1 by adding the star rating as a feature. The high result can be explained by the peculiarity of this corpus, where sarcasm is present mostly in negative reviews, and the star label is the single best indicator of sarcasm BIBREF49 . The other corpora consist of texts that belong to threads of forum posts. Sometimes it is reasonable to classify such posts as sarcastic or not out of context, but in many cases it is impossible even for humans (see examples in Table TABREF68 ). In fact, the low F-score on irony-context is due to low precision, which is an indicator of high similarity between the positive and negative classes. Moreover, lower precision combined with higher recall is a pattern present in most of the experiments, even if with higher absolute numbers. This combination suggests that ambiguous texts are classified as sarcastic more often than as non-sarcastic.
Conclusions
In this work, we have tackled the problem of automatic sarcasm detection from a data-driven point of view. More in detail, we have used a set of labeled datasets and applied distributional semantics followed by machine learning approaches in order to provide a baseline for the literature on this problem. We do not differentiate between sarcasm and irony because they are not easily distinguishable even for human experts. Experiments have been carried out on four different corpora containing texts from online reviews or forums, and on the corpus used for the shared task on irony detection on Twitter proposed at SemEval 2018. We have shown experimentally that these relatively simple methods can outperform, on all the datasets, other methods based on bag of words and linguistic features, thus representing a solid baseline. With our experiments that train the models on one corpus and test them on the other corpora, we have confirmed experimentally that the annotators also tend not to distinguish between irony and sarcasm. By contrast, major differences can be found according to the text domains, i.e., reviews vs. political forums. The domain difference can also prevent the method from benefiting from more data when they are too diverse from the test data. As future work, we will try to enrich the distributional semantics approaches with linguistic features in order to perform fairer comparisons with more recent and advanced methods. Furthermore, we will exploit more classical AI methodologies (e.g., ontologies, reasoners, common-sense reasoning techniques) to deduce the context and understand the concepts expressed in a sentence, also exploiting features like hashtags and emojis, to improve the overall performance of the approach. | Support Vector Machine (SVM), Logistic regression (Log.Reg), Random Forest (RF), gradient boosting (XGB) |
c2cbc2637761a2c2cf50f5f8caa248814277430e | c2cbc2637761a2c2cf50f5f8caa248814277430e_0 | Q: What are the different methods used for different corpora?
Text: Introduction
Affective computing has raised a great deal of interest in the last years. Picard picard1995affective introduced it as a computing paradigm that relates to, arises from, or influences emotions, letting computers be both more effective in assisting humans and successful in making decisions.
Language, as a conceptual process, plays a key role in the perception of verbal irony and sarcasm, two well-known forms of figurative language (FL) BIBREF0 Traditionally, irony as a figure of speech can be intended as “saying something while meaning something else” BIBREF1 . A comprehensive overview of different theories of irony has been illustrated in Attardo attardo07. Understanding if irony and sarcasm are the same linguistic phenomenon or not is still an unresolved question in literature BIBREF2 . Some authors consider irony a more general form of sarcasm, while others tend to consider it a separate linguistic issue BIBREF3 , BIBREF4 . According to the theory of sarcastic irony, sarcasm and irony are very similar, but sarcasm has a specific victim who is the object of the sarcastic statement, while irony does not have such a target BIBREF5 . More commonly, the noun “sarcasm” is understood as “saying the opposite of what one is thinking”, usually with a negative intention. Henceforth, due to the different nuances of irony and sarcasm, and the multiple interpretations of these two concepts, we do not differentiate between them, and, like many researchers, e.g., BIBREF6 , we will use the term “sarcasm” to refer to both verbal irony and sarcasm.
A sarcastic sentence may include features that characterize a positive sentiment, but that insinuates a negative sentiment BIBREF7 , BIBREF8 . It is clear that sarcastic sentences are more difficult to process by an algorithm than non-sarcastic assertions; as a matter of fact, both the situation and the mental state of the speaker are factors that can determine a sarcastic content in a sentence.
A system capable of detecting sarcasm correctly would greatly improve the performance of sentiment analysis systems BIBREF9 , BIBREF10 , BIBREF6 , BIBREF11 , especially considering the big data available nowadays due to the exponential growth of social platforms. Unfortunately, sarcasm detection in written texts is a difficult task even for humans BIBREF12 .
Moreover, some people usually do not understand sarcasm, and there are sentences meant as being sarcastic by the author that are not recognized as such by the readers.
We focus our attention on the possibility of detecting sarcastic sentences automatically from written text only, and from the reader's point of view. Managing this task without any knowledge of relevant contextual features, like prosody, is very hard.
The problem of sarcasm detection has been tackled with machine learning approaches, made possible by the availability of several annotated corpora. In the literature we can find two main categories of such corpora: automatically annotated and manually annotated.
The automatically annotated corpora are usually collected from the microblogging platform Twitter BIBREF13 , BIBREF14 by exploiting the final hashtag of tweets. For instance, a tweet is labeled as sarcastic only if it ends with a hashtag such as #sarcasm or #irony. The same cue is used in Davidov, Tsur and Rappoport davidov2010semi to produce a silver standard for evaluating their model.
Manually annotated corpora are collected from a more diversified range of social media, such as Amazon reviews BIBREF15 , Reddit (Wallace et al. 2014) or online forums BIBREF16 , BIBREF17 , and then labeled by hiring people in the Amazon Mechanical Turk portal. When using crowdsourcing, the annotation procedures are complex and involve, among others, a stage for ensuring that the workers understood the task and they are performing correctly, and a quality assurance stage for removing texts for which a high discrepancy between the annotators arises.
In this work we have tackled the problem of sarcasm detection by trying to use an entirely data-driven approach, exploiting a distributional semantics representation by inducing a semantic space and then applying a set of classifiers to classify the texts as being sarcastic or not sarcastic. With “fully data-driven” we mean approaches that are capable of finding connections between input text and class labels without using any a priori knowledge about the features that characterize a sarcastic statement.
In particular, we do not define “irony” or “sarcasm”, nor do we use any such definition. We simply rely on sets of sentences with binary labels for sarcasm detection, taking for granted that the labels correctly identify a sarcastic sentence.
It is worthwhile to point out that in this work we do not create any dataset: we simply exploit the labels of datasets that have already been produced by others, trying to give a baseline for the sarcasm detection task.
The contribution of this work can be summed up in three key points:
To reach these goals, we exploit a Distributional Semantics approach, whose aim is to give a representation of words in a continuous vector space BIBREF18 , BIBREF19 , where word similarity is coded in an unsupervised manner. This representation is useful for building models with little, or no, a-priori knowledge about the task BIBREF20 .
Distributional semantics is a research field that concerns methodologies aimed at determining semantic similarities between linguistic items. The key idea is based on the hypothesis that words co-occurring in similar contexts tend to have similar meaning BIBREF21 , BIBREF22 . Distributional semantics deals with the automatic construction of semantic models induced from large unstructured textual corpora, and it exploits vector space models to represent the meaning of a word BIBREF23 . Many methods can be applied to construct distributional models, ranging from statistical models to machine learning ones BIBREF24 , BIBREF19 , BIBREF25 , BIBREF26 . Among these techniques, Latent Semantic Analysis (LSA) is a methodology for building distributional semantic spaces that extracts statistical relations between words co-occurring in a given context through the use of the Truncated Singular Value Decomposition (T-SVD). In this work we explored and studied the possibility of building a data-driven model in the field of sarcasm detection exploiting the well-known Latent Semantic Analysis (LSA) paradigm, both in its traditional formulation given by Landauer, Foltz and Laham landauer1998introduction and by using the Truncated Singular Value Decomposition (T-SVD) as a statistical estimator as illustrated in Pilato and Vassallo pilato2015tsvd.
Both approaches have been used to create data-driven semantic spaces where documents and, generally, text chunks can be mapped.
The theory behind LSA states that the “psychological similarity between any two words is reflected in the way they co-occur in small sub-samples of language” (Landauer et al. 1998).
We have chosen to exploit the LSA paradigm since it is a well-known distributional semantics paradigm capable of modeling many human cognitive abilities; furthermore, it has many potential practical applications BIBREF27 , BIBREF18 , BIBREF28 , BIBREF29 . Moreover, it has been demonstrated in Pilato and Vassallo pilato2015tsvd that Truncated Singular Value Decomposition (T-SVD), as used in LSA, can be interpreted as a statistical estimator, giving a robust theoretical interpretation to the Latent Semantic Analysis paradigm. Many researchers have successfully applied this technique for typical Semantic Computing applications, such as natural language understanding, cognitive modeling, speech recognition, smart indexing, anti-spam filters, dialogue systems, and other Statistical Natural Language processing problems BIBREF30 , BIBREF31 , BIBREF32 . Moreover, Latent Semantic Analysis has been successfully used for inducing data-driven “conceptual” spaces BIBREF33 . For the aforementioned reasons, we have chosen this approach as a baseline for the detection of sarcasm in texts.
Furthermore, our study makes use of four machine learning methods that have been used on four manually annotated, publicly available corpora.
The experimental results show that our data-driven approach consisting of LSA followed by a classifier can establish models that outperform the published results on two of the corpora; additionally, it produces competitive results for the other corpora that we used for our evaluation.
The next section describes the state of the art in the field, Section SECREF3 describes the Semantic Representation and the Machine Learning methods used in the study. Section SECREF4 introduces the datasets used for the experiments. Section SECREF5 summarizes the experimental results, Section SECREF6 is for the final conclusions and remarks.
The code and the datasets used for the experiments are available on github.
Related works
The problem of sarcasm detection has been tackled using a wide range of supervised or semi-supervised techniques applied to corpora from different social media sources.
In the present work, we do not collect a new corpus for sarcasm detection, but sarcastic corpus annotation has received much attention in the literature. Most of the works have used unsupervised or semi-supervised approaches in order to reduce the cost of the annotation, while partially sacrificing the data quality. One of the first approaches was introduced by Tsur, Davidov and Rappoport tsur2010icwsm for a corpus extracted from Twitter and further developed in Davidov et al. davidov2010semi with a corpus consisting of Amazon reviews. This semi-supervised approach uses “YAHOO! BOSS” API web search for collecting INLINEFORM0 utterances similar to the ones in a small initial labeled seed set. It was the first work to show that automatically-crawled data are useful for the task of sarcasm detection. Most of the works have been pursued using data extracted from Twitter, as it is relatively easy to extract ironic or sarcastic tweets using the search by hashtag. In fact, in Twitter, the restricted number of characters allowed encourages to mark the ironic intent with a hashtag like #irony or #sarcasm to prevent ambiguities. The hashtag is usually removed from the tweets and used as a label for the silver standard. Moreover, the first studies on Twitter data showed that the task is quite difficult also for human beings. González-Ibánez et al. gonzalez2011identifying collected a corpus of INLINEFORM1 tweets balanced between sarcastic, positive sentiment and negative sentiment. They presented a part of the corpus to human judges, who achieved low agreement and low accuracy. Reyes et al. reyes2013multidimensional collected a corpus using 4 hashtags that identify four different categories, irony, education, humor, and politics, with INLINEFORM2 tweets each. The same corpus was used in a later work BIBREF34 . Their results suggest that detecting sarcasm in full documents is easier than in single sentences because of the presence of a context, but in both cases, it remains a difficult task also for humans that often have a low agreement. The specific case of positive sentiment and a negative situation, which is the most typical sarcastic situation, has also been analyzed BIBREF35 . In particular, authors have found that less than half of the tweets ending with the hashtag #sarcastic are recognized as sarcastic by humans after removing the hashtag. Bharti, Babu, and Jena bharti2015parsing proposed two algorithms with the goal to find, respectively, tweets with contrast in sentiment and situation, and tweets starting with interjections. They also found that the label distribution does not correlate perfectly with the hashtag distribution, e.g., only INLINEFORM3 out of INLINEFORM4 tweets ending with #sarcastic are actually sarcastic. Farias, Patti and Rosso farias16 proposed a method that uses affective content to classify sarcastic tweets, and show that it outperforms preceding methods in several Twitter benchmarks. Since classifying tweets by using only the text is a difficult task also for humans, other works proposed new methods capable of exploiting other kind of data, like the identity of the author or the thread of the tweet. Bamman and Smith bamman2015contextualized augmented the feature vectors with features describing the author of the tweet and the user to which the tweet is addressed, obtaining significant improvements in accuracy. They also found that the hashtags #sarcasm and #sarcastic are mainly used when the audience is not known. 
Wang, Wu, Wang and Ren wang2015twitter use a sequential classifier for classifying tweets that takes into account the previous responses, thus improving the performance with respect to a simple multi-class classifier.
Amir, Wallace, Lyu, Carvalho and Silva amir2016modelling used the dataset collected in Bamman et al. bamman2015contextualized (which was not completely available) for training a deep learning model that could represent users with user embeddings and this method seems to outperform the method from Bamman and colleagues. Sarcasm classification on Twitter involves different modelling techniques that perform better when taking into account the user and the thread history of a Tweet. Our work focuses on the task of classifying a single document written by a single author. Thus, we focus mainly on different kinds of datasets. Buschmeier, Cimiano and Klinger buschmeier2014impact have studied the corpus introduced in Filatova filatova2012irony by extracting a high number of features about typographic cues that can represent sarcasm, and used different classification methods obtaining results that vary significantly according to the classifier. They found that the single most important feature is the star rating of the review, and this happens because sarcastic reviews are more probable when a user did not like the product.
Wallace et al. wallace2014humans created a corpus from Reddit posts, for which they also stored context information, such as the post that is answered. The authors proposed a method that uses the bag of words and other features from previous studies for building an SVM classifier that gets very low results. Moreover, a correlation is found between posts for which the humans require the context and sarcastic posts. This can be explained by considering that the chosen sub-reddits are about religion or politics, and they are thus very prone to controversial discussions. Consequently, to understand the ironic intent of a post it is quite important to know the author position on the topic and also the posts they are answering to.
Joshi, Sharma and Bhattacharyya joshi-sharma-bhattacharyya:2015:ACL-IJCNLP used features for capturing intrinsic and extrinsic incongruity in texts and outperforms two previous methods both in tweets and in forum posts. These works represent valuable means of comparison for the present work. We show that an approach based only on distributional semantics is competitive with other approaches using more elaborated feature engineering, even when the data amount is quite small. Distributional semantics became popular in NLP thanks to the availability of good quality word embeddings BIBREF19 , and are introduced by design in deep learning models. In sarcasm detection, distributional semantics has been used to serve different roles. Ghosh, Guo, and Muresan ghosh2015sarcastic have adopted word embeddings to disambiguate a literal use of single words from a sarcastic use. Joshi, Tripathi, Patel, Bhattacharyya and Carman joshi2016word use word embeddings to compute incongruities among words using them as additional features for methods selected from the literature. Our work differs from these as we use LSA instead of word embeddings, and distributional semantics is the only kind of features we use. Ghosh and Veale ghosh2016 use LSA to extend the list of hashtags to find more sarcastic tweets on Twitter and use a deep neural network to perform the actual classification. Our work differs from theirs as we use LSA to compute the vectorial representation of documents and we do not perform tweet crawling. Poria, Cambria, Hazarika and Vij cambria2016 train a convolutional neural network to classify sarcasm in tweets. They extend the neural network with features extracted from other datasets for sentiment, emotion and personality classification, as these features are considered to be useful for the task of sarcasm detection.
Data-Driven Induction of Semantic Spaces and Traditional Classifiers
We focused our research on the role that fully data-driven models can play in detecting sarcasm. To reach this goal, we exploited the Latent Semantic Analysis paradigm both in its traditional formulation (Landauer et al. 1998) and by using the Truncated Singular Value Decomposition (T-SVD) as a statistical estimator as shown in Pilato et al. pilato2015tsvd. We have chosen to use the LSA paradigm to exploit a well-known and well-founded approach for inducing semantic spaces that has been effectively used in natural language understanding, cognitive modeling, speech recognition, smart indexing, and other statistical natural language processing problems. The sub-symbolic codings of documents obtained by the aforementioned LSA-based approaches are then used as inputs to a set of classifiers, to evaluate the differences in performance obtained by using different machine learning approaches and by testing them on different sarcasm-detection datasets.
The full work-flow, composed of the steps described in the following subsections, does not require any expert or domain knowledge.
Preprocessing of text
The first step of preprocessing for texts is the tokenization using spaces, punctuation and special characters (e.g., $, , @) as separators. Thus one token is a sequence of alphanumeric characters or of punctuation symbols. The set of all the extracted tokens constitutes a “vocabulary” named INLINEFORM0 .
The sequences of tokens, each representing a single document in the training set, are used to generate a word-document co-occurrence raw matrix INLINEFORM0 , where each INLINEFORM1 cell contains the number of times the token INLINEFORM2 appears in the document INLINEFORM3 . Let INLINEFORM4 be the number of tokens, i.e., INLINEFORM5 , and let INLINEFORM6 be the number of documents of the corpus used for computing the matrix INLINEFORM7 ; the dimensionality of INLINEFORM8 is INLINEFORM9 .
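A minimal sketch of this preprocessing step could look as follows; Python is an assumption on our part, and the tokenization regular expression is illustrative rather than the authors' own.

```python
# Illustrative sketch of the preprocessing step: tokenize on spaces, keeping
# runs of alphanumeric characters and runs of punctuation/special symbols
# (e.g. $, @) as tokens, then build the token-by-document co-occurrence matrix.
import re
from collections import Counter
import numpy as np

TOKEN_RE = re.compile(r"[A-Za-z0-9]+|[^\sA-Za-z0-9]+")


def tokenize(text):
    return TOKEN_RE.findall(text)


def cooccurrence_matrix(documents):
    counts = [Counter(tokenize(doc)) for doc in documents]
    vocabulary = sorted({token for c in counts for token in c})
    index = {token: i for i, token in enumerate(vocabulary)}
    A = np.zeros((len(vocabulary), len(documents)))
    for j, c in enumerate(counts):
        for token, n in c.items():
            A[index[token], j] = n      # number of times token i appears in document j
    return A, vocabulary
```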
Data driven induction of semantic spaces by means of LSA-oriented paradigms
The matrix INLINEFORM0 is further processed to induce proper semantic spaces where terms and documents can be mapped. To generate these semantic spaces, we have used both the traditional LSA algorithm (Deerwester et al. 1990, Landauer et al. 1998) and the approach which uses T-SVD as a statistical estimator, as proposed in Pilato et al. pilato2015tsvd. For the sake of brevity, we call this last approach Statistical LSA to differentiate it from Traditional LSA. It is worthwhile to point out that, in the Latent Semantic Analysis paradigm (i.e., both “traditional” and “statistical”), the corpus used for building the semantic space plays a key role in performance. As a matter of fact, large and heterogeneous corpora may introduce more noise or too much domain-specific information, decreasing the accuracy of the induced models BIBREF36 .
The traditional LSA is a procedure that has been used mainly for information retrieval (Deerwester et al. 1990). The previously described matrix INLINEFORM0 is used for computing a Tf-Idf (Term-Frequency Inverse-document frequency) matrix INLINEFORM1 BIBREF37 . Let INLINEFORM2 be the rank of INLINEFORM3 . The following factorization, called Singular Value Decomposition (SVD) holds for the matrix INLINEFORM4 : DISPLAYFORM0
where INLINEFORM0 is a INLINEFORM1 orthogonal matrix, INLINEFORM2 is a INLINEFORM3 orthogonal matrix and INLINEFORM4 is a INLINEFORM5 diagonal matrix, whose diagonal elements INLINEFORM6 are called singular values of INLINEFORM7 . It can be shown that the singular value decomposition of INLINEFORM8 is unique up to the order of the singular values and of the corresponding columns of INLINEFORM9 and INLINEFORM10 , so there is no loss of generality if we suppose that INLINEFORM11 are ranked in decreasing order.
Let INLINEFORM0 be an integer such that INLINEFORM1 , let INLINEFORM2 be the matrix obtained from INLINEFORM3 by removing its last INLINEFORM4 columns, INLINEFORM5 the matrix obtained from INLINEFORM6 in the same manner and INLINEFORM7 the diagonal matrix obtained from INLINEFORM8 by suppressing both its last INLINEFORM9 rows and INLINEFORM10 columns. INLINEFORM11 is the matrix containing the INLINEFORM12 -dimensional vector representation of the words and INLINEFORM13 is the matrix containing the INLINEFORM14 -dimensional vector representation of the documents. It can be shown (Deerwester et al. 1990) that the matrix: DISPLAYFORM0
is the best rank INLINEFORM0 approximation to INLINEFORM1 according to the Frobenius distance. INLINEFORM6 is called the reconstructed matrix. The process by which INLINEFORM7 is obtained from INLINEFORM8 is called Truncated Singular Value Decomposition (T-SVD). The book by Golub and Van Loan golub1996matrix provides further details about the Singular Value Decomposition technique.
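As an illustration, the Traditional LSA step can be sketched with standard SciPy/scikit-learn routines; this is an assumption about the implementation, and the names A (raw count matrix) and r (rank of the truncation) are chosen here for readability.

```python
# Illustrative sketch of Traditional LSA on the co-occurrence matrix A
# (tokens on the rows, documents on the columns): Tf-Idf re-weighting followed
# by the rank-r truncated SVD (T-SVD).
import numpy as np
from sklearn.feature_extraction.text import TfidfTransformer
from scipy.sparse.linalg import svds


def traditional_lsa(A, r):
    # TfidfTransformer expects documents on the rows, hence the transpositions.
    M = TfidfTransformer().fit_transform(A.T).T      # Tf-Idf weighted, tokens x docs
    U, s, Vt = svds(M, k=r)                          # truncated SVD
    order = np.argsort(-s)                           # svds returns ascending order
    U, s, Vt = U[:, order], s[order], Vt[order, :]
    # token vectors, diagonal matrix of singular values, document vectors
    return U, np.diag(s), Vt.T
```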
The traditional Latent Semantic Analysis based on T-SVD is one of the possible methods to infer data-driven models. Furthermore, one of its major drawbacks, which is the lack of a sound statistical interpretation, has recently been overcome in Pilato et al. pilato2015tsvd, where the authors presented a statistical explanation of this paradigm.
According to this interpretation, the T-SVD algorithm, as used in the Latent Semantic Analysis paradigm, acts as an estimator, which conveys statistically significant information from the sample to the model.
To briefly sum-up the procedure, we recall here the concepts of probability amplitude and probability distribution associated with a matrix as they have been defined in Pilato et al. pilato2015tsvd.
Let INLINEFORM0 , INLINEFORM1 be two positive integers and let INLINEFORM2 be the set of real numbers. Given a INLINEFORM3 matrix INLINEFORM4 with INLINEFORM5 , INLINEFORM6 , INLINEFORM7 where at least one of its components INLINEFORM8 is positive, we define a set INLINEFORM9 , composed of all the pairs INLINEFORM10 that identify the positive components of INLINEFORM11 , i.e.: DISPLAYFORM0
Subsequently, we define the probability amplitude associated with INLINEFORM0 , the INLINEFORM1 matrix INLINEFORM2 resulting from the mapping INLINEFORM3 : DISPLAYFORM0
whose elements INLINEFORM0 are computed as: DISPLAYFORM0
so that INLINEFORM0 it is INLINEFORM1 and INLINEFORM2 .
We define also the probability distribution associated with a matrix INLINEFORM0 the INLINEFORM1 matrix resulting from the mapping INLINEFORM2 : DISPLAYFORM0
whose elements are the squares of the elements of INLINEFORM0 , i.e. INLINEFORM1 . The method starts with a raw data matrix INLINEFORM2 consisting of positive values. In our study the raw data matrix INLINEFORM3 is the term-document co-occurrence matrix. From INLINEFORM4 a real-valued normalized matrix INLINEFORM5 is computed by dividing every element for the sum of all elements of INLINEFORM6 . DISPLAYFORM0
If we call INLINEFORM0 the matrix: DISPLAYFORM0
The matrix INLINEFORM0 can be decomposed with the SVD technique: DISPLAYFORM0
and its best rank-r decomposition INLINEFORM0 is obtained by applying the T-SVD technique, which minimizes the Frobenius distance INLINEFORM1 , given INLINEFORM2 : DISPLAYFORM0
Even if INLINEFORM0 is not a probability distribution, the computation of INLINEFORM1 makes it possible to identify, without any further addition of external information, the probability distribution we are looking for. As shown in Pilato et al. pilato2015tsvd, it theoretically suffices to compute the probability amplitude associated with INLINEFORM2 , i.e. INLINEFORM3 , and consequently to calculate the probability distribution INLINEFORM4 associated with INLINEFORM5 . The aforementioned Frobenius distance INLINEFORM6 constitutes an upper bound on the Hellinger distance between the sample probability INLINEFORM11 and the probability distribution estimated by the procedure.
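A rough sketch of this Statistical LSA variant, under our reading of the procedure (normalize the raw counts so that they sum to one, take the element-wise square root to obtain the probability amplitude, then apply the T-SVD), could be the following; it is only an illustration and not the authors' implementation.

```python
# Rough sketch of Statistical LSA, under our reading of the procedure described
# above: normalization of the raw count matrix, element-wise square root
# (probability amplitude), and rank-r truncation of the SVD.
import numpy as np


def statistical_lsa(A, r):
    N = A / A.sum()                               # normalized matrix, entries sum to 1
    B = np.sqrt(N)                                # probability amplitude associated with N
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U[:, :r], np.diag(s[:r]), Vt[:r, :].T  # rank-r truncation (T-SVD)
```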
Mapping new documents to the semantic space
Both LSA approaches illustrated in the previous subsections provide us with the three, obviously different for each approach, matrices INLINEFORM0 , INLINEFORM1 and INLINEFORM2 .
The INLINEFORM0 and the INLINEFORM1 matrices can be used for computing the vector representation of the new documents into the induced semantic space. The INLINEFORM2 matrix contains in its diagonal the singular values; INLINEFORM3 is composed by rows that represent the r-dimensional sub-symbolic, i.e., numerical, mapping in the semantic space of the tokens constituting the vocabulary INLINEFORM4 . Then, given a text chunk INLINEFORM5 , INLINEFORM6 is sub-symbolically represented by a INLINEFORM7 -dimensional word occurrence vector INLINEFORM8 , from which it is computed a vector INLINEFORM9 with two different procedures depending on which LSA paradigm has been chosen.
In the case of Traditional LSA, it is the Tf-Idf representation BIBREF38 of INLINEFORM0 by using the same parameters learned during training.
In the case of the Statistical LSA, the INLINEFORM0 vector is transformed into INLINEFORM1 similarly as the matrix INLINEFORM2 is transformed into the matrix INLINEFORM3 : DISPLAYFORM0
Once the appropriate coding of INLINEFORM0 has been computed, an r-dimensional vector INLINEFORM1 representing the sub-symbolic coding of INLINEFORM2 is then obtained from the vector INLINEFORM3 by means of the following mapping formula: DISPLAYFORM0
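The sketch below assumes the standard LSA folding-in formula, in which the new document's weighted occurrence vector is projected through the term matrix and the inverse of the singular-value matrix; the authors' exact weighting and normalization may differ (Tf-Idf for Traditional LSA, amplitude normalization for Statistical LSA).

```python
# Sketch of mapping a new text chunk into the induced r-dimensional space using
# the standard LSA folding-in formula  d = Sigma_r^{-1} U_r^T q  (an assumption,
# since the original display equation is not reproduced here).
import numpy as np


def fold_in(q, U_r, Sigma_r):
    """q: |V|-dimensional weighted occurrence vector of the new text chunk."""
    return np.linalg.inv(Sigma_r) @ U_r.T @ q
```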
Supervised learning
The training and test documents are mapped into the semantic spaces induced at the previous step. These vectors, sub-symbolic coding of the documents, are therefore used as inputs to different classifiers to train or test on them. Such classifiers will finally solve a binary classification problem assigning the label 1 (sarcastic) or 0 (nonsarcastic) to a generic document. For this study we have used Support Vector Machines, Logistic Regression, Random Forests, and Gradient boosting as they represent the state of the art for most of the binary classification problems with small datasets. In the following, we recall a brief description of them.
The logistic regressor (LR) is a generalized linear model suitable for binary responses BIBREF39 . In LR the following log-linear model is adopted: DISPLAYFORM0
where INLINEFORM0 represents the probability of the success outcome. A suitable way of minimizing the so called empirical risk is the numerical estimation of the INLINEFORM1 s coefficient by a maximum likelihood procedure: DISPLAYFORM0
where INLINEFORM0 is the training set, INLINEFORM1 is the norm of the weight vector used for regularization, which can be either the INLINEFORM2 or the INLINEFORM3 norm, and INLINEFORM4 is the weight given to the regularization factor. The function in formula EQREF33 is convex, so it can be minimized even with the simple gradient descent algorithm, but more complex algorithms can be used in order to reduce the convergence time. In this work we use the trust region Newton method proposed by Lin, Weng and Keerthi lin2008trust, as provided by the LIBLINEAR library BIBREF40 .
A kernel INLINEFORM0 is any mapping satisfying DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 are elements in the input space, INLINEFORM2 is a mapping from the input space to a new representation space INLINEFORM3 where an inner product is defined. The function INLINEFORM4 is chosen to be nonlinear, and the dimension of the feature space is taken intentionally greater than the dimension of the input space. These choices could give the chance to make the classification problem linearly separable in INLINEFORM5 . Support vector machines (SVMs), also called kernel machines BIBREF41 are binary linear classifiers that make use of kernels. They search for the optimal hyperplane INLINEFORM6 in the feature space that maximizes the geometric margin, which is the distance of the hyperplane to the nearest training data point of any class. The main advantage of SVM is that it provides a solution to the global optimization problem, thereby reducing the generalization error of the classifier. The formulation of SVM can be easily extended to build a nonlinear classifier by incorporating a kernel of the class H DISPLAYFORM0
No systematic tools have been developed to automatically identify the optimal kernel for a particular application.
Decision trees BIBREF42 are rooted trees that can be used successfully as classifiers BIBREF43 . Each node of the tree represents a binary rule that splits the feature space according to the value of a predictive feature, and a path from the root to a leaf node represents a series of rules that are used to recursively divide the feature space into smaller subspaces, where a class label is assigned. The structure of the tree in terms of split nodes can be learned from data by using several approaches. Random forests BIBREF44 are an ensemble of decision trees, found using the bootstrap sampling technique on the training set. In particular, a fixed number of random samples are extracted with replacement from the training set, and each of them is used as a training set to fit a decision tree. The forest is composed of these decision trees, and the final predictions are made by averaging the predictions of all the individual decision trees.
Boosting is another ensemble strategy, with the special purpose of improving the combination of a set of weak classifiers. These are chosen to be of very low model complexity, such as decision trees with a single split. The general framework of boosting sequentially adds a tree to an ensemble, each new tree aiming to correct its predecessor. Gradient boosting BIBREF45 uses a gradient-descent-like procedure to sequentially improve a tree classifier. This is done by adding to the current classifier a new decision tree learned from the residual errors made by its predecessor. The final predictions are made by the tree classifier resulting after a fixed number of iterations of the procedure.
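For illustration only, the four classifier families could be instantiated as follows; scikit-learn is an assumption about the implementation (the L1 logistic regression uses the liblinear solver mentioned above), GradientBoostingClassifier stands in for the gradient-boosting package abbreviated XGB in the experiments, and values not stated in the text are left at library defaults.

```python
# Illustrative instantiation of the four classifier families used on top of the
# LSA document vectors (placeholder settings; see the experimental setup for
# the values actually used).
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier


def make_classifiers():
    return {
        "Log.Reg": LogisticRegression(penalty="l1", solver="liblinear",
                                      class_weight="balanced"),
        "SVM": SVC(kernel="rbf", class_weight="balanced"),
        "RF": RandomForestClassifier(criterion="entropy"),
        "XGB": GradientBoostingClassifier(),
    }
```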
Datasets
We have chosen 4 corpora for our experiments, all of which are publicly available and treat the problem as a binary classification: “SarcasmCorpus” (Filatova 2012), “IAC-Sarcastic” BIBREF46 , which is a subset of Internet Argument Corpus 1.0 prepared for sarcasm detection, “irony-context” (Wallace et al. 2014), and “IAC-Sarcastic-v2” (Oraby et al. 2016), which is extracted from the second version of the Internet Argument Corpus BIBREF47 . In order to provide a more complete evaluation, we also use the corpus of the shared task “Semeval2018 Task 3A” BIBREF48 .
SarcasmCorpus
Filatova filatova2012irony collected 1254 reviews from Amazon for different kinds of products, of which 437 are sarcastic and 817 are not sarcastic. The dataset is unbalanced toward the “regular” texts, and this is due both to the policy of Amazon, which explicitly requires sincere reviews, and to the peculiarity of sarcasm itself, which is used only in some cases, especially because of the difficulty for humans to recognize it over the internet.
Each review in the corpus consists of the title, author, product name, review text and number of stars, and the review is a stand-alone document referring to a single product. This corpus, like all the others considered in this work, has been entirely hand-labeled by Amazon Mechanical Turkers, who were asked whether each review contains sarcasm. Each text has been presented to 5 Turkers and has been classified as sarcastic when at least three of the five workers agreed. The corpus contains INLINEFORM0 distinct tokens, with INLINEFORM1 occurring only in sarcastic reviews, INLINEFORM2 occurring only in regular reviews and INLINEFORM3 occurring in both categories. Buschmeier et al. buschmeier2014impact made an interesting analysis of the corpus by collecting some statistics and publishing the only classification results that are available for it up to now. They extracted 29 task-specific features and combined them with the bag-of-words representation and multiple classifiers. The bag of words turned out to be important for the classification. For example, they get a poor 50.9% F-score value with a logistic regressor without bag-of-words, which increases to 74% when it is used. This result is surely related to the difference in terms used by the two classes, but it also shows that information about the words used in the document is needed for the task.
IAC-Sarcastic
The second dataset we used is the IAC-Sarcastic sub-corpus, which consists of 1995 posts coming from 4forums.com, a classical forum where several topics are discussed. This corpus is actually extracted from the larger Internet Argument Corpus (IAC), containing INLINEFORM0 discussions, INLINEFORM1 posts and INLINEFORM2 words. In IAC there are INLINEFORM3 Quote-Response (Q-R) pairs and INLINEFORM4 three-posts chains that have been manually labeled for several HITs (Human-Intelligence Tasks) by Amazon Mechanical Turk. For each Q-R item, the Turkers were asked to evaluate the response section by considering the quote as a context. One of the HITs regarded the identification of a sarcastic response. As a result, the IAC-Sarcastic Corpus consists of 1995 responses, without any quote, with a binary label that indicates the presence of sarcasm. 998 texts are labeled as sarcastic, and 997 are not, so this is one of the rare balanced datasets for this task. To the best of our knowledge, only the work by Justo, Corcoran, Lukin, Walker, and Torres justo2014 published results on the sarcastic task of the IAC dataset, but the authors made a different sampling of the documents from the one used for IAC-Sarcastic. Thus, our results for this corpus are not comparable with the ones reported in that work.
Irony-context
A third dataset is the one collected in Wallace et al. wallace2014humans. The main goal of that study was to highlight the role of the context of a text to make irony understandable by humans. The dataset is extracted from Reddit by collecting comments from the following six sub-reddits: politics, progressive, conservative, atheism, Christianity, technology, with their respective size of 873, 573, 543, 442, 312 and 277 samples. Each comment has been labeled by three university undergraduates using a browser interface which let them see the context of the comment in the form of previous comments or related pages under request. The label of a comment was selected with a simple majority of 2 out of 3 labelers. For each comment and each labeler, they stored whether the context has been requested and if the labeler changed his mind after having seen it. This allowed the authors to study the correlation between the sarcastic label and the requests for context.
The results allowed the authors to infer that machines would also need the context for detecting sarcasm, as their model did not correctly predict the texts for which the humans required the context. This is an important cue that should be considered while developing sarcasm detection methods, even though we do not explicitly consider the context in our method. As a result, we cannot expect to obtain high absolute results for this dataset by letting the model observe only the single text.
IAC-Sarcastic-v2
In 2016 a new version of IAC was made available (IACv2) (Abbot et al. 2016), and after some months also the sarcastic sub-corpus was released (Oraby et al. 2016), which is bigger than the first version. It consists of three sub-corpora, among which the bigger one is called “generic”, and it is made of INLINEFORM0 posts per class collected from IACv2. For the creation of this sub-corpus, the authors produced a high-precision classifier for the non-sarcastic class, which helped to filter out many non-sarcastic posts from the original corpus and lower the labeling costs. Then, to have high-quality labeling, they required a majority of 6 out of 9 sarcastic annotations to label a post as sarcastic.
To produce a more diverse corpus, they built two more corpora focused on particular rhetorical figures often associated with sarcasm: rhetorical questions and hyperboles. For both of the sub-corpora, the authors used patterns to recognize posts containing the chosen rhetorical figure from IACv2. Each of the collected posts has been subsequently shown to five AMTs for the sarcastic/not sarcastic annotation. The label is given with simple majority.
The purpose of these two focused sub-corpora is to force classifiers to find some semantic cues which can distinguish sarcastic posts even in the presence of rhetorical figures usually associated with sarcasm. In fact, the presence of hyperboles has been used before as a feature for detecting sarcasm BIBREF49 .
Semeval-2018 Task3 Corpus of Tweets
The International Workshop on Semantic Evaluation Semeval-2018 featured a shared task on verbal irony detection in tweets (Van Hee et al. 2018). The corpus contains a class-balanced training set consisting of INLINEFORM0 tweets, and a test set with 784 tweets. In the test set, only 40% of the instances are ironic. The corpus has been collected from Twitter searching for tweets with the hashtags #irony, #sarcasm and #not. The corpus has been annotated by three students in linguistics who showed a high inter-annotator agreement. After the annotation, INLINEFORM1 tweets out of INLINEFORM2 were ironic and only 604 were not. Thus, an additional set of INLINEFORM3 non-ironic tweets was added to the corpus. Finally, the corpus was split randomly in class-balanced training and test set, but an additional cleaning step for removing ambiguous sentences modified the proportion to 40% ironic.
Experimental setup
We ran four groups of experiments, to assess both the effectiveness of our approach when compared with the approaches we found in the literature and its capability of extracting features that are relevant for sarcasm in a cross-domain scenario. In all cases, we use the word model to denote one of the possible combinations of classic/statistical LSA and a classifier. The classifiers used are Support Vector Machine (SVM), Logistic regression (Log.Reg), Random Forest (RF) and gradient boosting (XGB).
For the first group of experiments, we evaluated the performance of each of our models on every corpus. We use 10-fold cross-validation and report the mean values of INLINEFORM0 -score, precision, and recall over all the folds. The proportion of the two classes in each fold is equal to the proportion in the whole corpus. Where applicable, we compare our results with existing results in the literature. In addition, we compare with the method presented in Poria et al. cambria2016.
The second group of experiments has been performed on the Semeval 2018 Task 3 dataset (Van Hee et al. 2018). We first find the best LSA dimensionality by 10-fold cross-validation on the training set. Then, we retrain the models on the whole training set and evaluate them on the test set for comparison with the participants in the shared task.
The third group of experiments is inter-corpora. For each experiment, we have chosen one corpus as a training set and another one as a test set. This process is performed for all the models and all the corpus pairs. We aim to find out whether sarcasm detection is domain-dependent.
Finally, in the fourth group of experiments (union experiments) we perform another 10-fold cross-validation in which all the corpora are concatenated. Each fold contains samples from every corpus proportionally to the size of that corpus. The goal of this experiment is to understand whether simply adding more data, but from different domains, improves the classification performance.
The hyperparameters of the classifiers have been chosen by grid search on SarcasmCorpus with LSA dimensionality 40, and then used for all the reported experiments. We use an SVM with a Gaussian kernel and a C value of 100, INLINEFORM0 , logistic regression with L1 penalty and C=10, and decision trees with entropy loss. SVM and logistic regression both use balanced class weights to cope with unbalanced datasets.
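Putting the pieces together, a single in-corpus run could be sketched as below; this is an illustrative reconstruction using the stated hyperparameters for SVM and logistic regression (the Gaussian-kernel gamma and the tree-ensemble settings are not fully specified here, so library defaults are assumed), and the random forest and gradient boosting classifiers plug into the same loop.

```python
# Illustrative sketch of one in-corpus run: stratified 10-fold cross-validation
# on the LSA document vectors X with labels y, reporting the mean F-score per
# classifier.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support


def in_corpus_cv(X, y, n_splits=10):
    classifiers = {
        "SVM": SVC(kernel="rbf", C=100, class_weight="balanced"),
        "Log.Reg": LogisticRegression(penalty="l1", C=10, solver="liblinear",
                                      class_weight="balanced"),
    }
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    f1_scores = {name: [] for name in classifiers}
    for train_idx, test_idx in skf.split(X, y):
        for name, clf in classifiers.items():
            clf.fit(X[train_idx], y[train_idx])
            _, _, f1, _ = precision_recall_fscore_support(
                y[test_idx], clf.predict(X[test_idx]), average="binary")
            f1_scores[name].append(f1)
    return {name: float(np.mean(v)) for name, v in f1_scores.items()}
```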
In-corpus Experiments
In SarcasmCorpus each sample consists of a review title, a review text, a product name and the number of stars given to the product, ranging from 1 to 5. Buschmeier et al. buschmeier2014impact showed that the star rating is the most discriminative feature; thus we performed the experiment both including and not including it. In Table TABREF48 , we refer to “SarcasmCorpus” when the star rating is not used, and “SarcasmCorpus*” when it is used. We use the star rating by simply concatenating it to the document vector produced by LSA. The document vector is computed only from the review texts because in our preliminary experiments we found that the other parts are not useful for the task. Accuracy and F-score values of all classifiers for SarcasmCorpus and SarcasmCorpus* are plotted in Figures FIGREF72 and FIGREF73 , and the best F-scores, with the corresponding precision and recall, are reported in the two columns SarcasmCorpus and SarcasmCorpus* of Table TABREF48 . The best result from the logistic regression in SarcasmCorpus is INLINEFORM0 , which represents a INLINEFORM1 % relative improvement with respect to the INLINEFORM2 reported in the above-mentioned work by Buschmeier et al. buschmeier2014impact. The results from Poria et al. cambria2016 are even higher in terms of F-score, with a relative improvement of INLINEFORM3 , which is due mostly to a much higher recall.
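The concatenation of the star rating is a one-line operation; the small sketch below is only illustrative (NumPy assumed) and shows the star label being appended as an extra column to the LSA document vectors.

```python
# Sketch of the SarcasmCorpus* setting: append the star rating as one extra
# column to the LSA document vectors.
import numpy as np


def add_star_rating(doc_vectors, stars):
    """doc_vectors: (n_docs, r) array; stars: length-n_docs sequence in 1..5."""
    return np.hstack([doc_vectors, np.asarray(stars, dtype=float).reshape(-1, 1)])
```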
Note that the method by Poria et al. cambria2016 also uses features extracted from other datasets for sentiment, emotion and personality classification, as these features are considered to be useful for the task of sarcasm detection. Moreover, as our goal is to propose a baseline, the training time in the order of minutes is an advantage of our model. We report such results as an upper bound, considering that our model does not use additional information from external data.
The best results are obtained using the star labels. In this setting, our best-performing classifiers are better than the INLINEFORM0 F-score value reported by Buschmeier, and our best INLINEFORM1 -score of INLINEFORM2 represents a INLINEFORM3 relative improvement. In this single case of SarcasmCorpus*, the results with the Traditional LSA are all higher than their counterparts with Statistical LSA.
For IAC-Sarcastic we do not have any previously published result to compare with. The only related result is reported in Joshi et al. joshi-sharma-bhattacharyya:2015:ACL-IJCNLP, which use a corpus randomly extracted from IAC containing 752 sarcastic and 752 not sarcastic texts. They report an F-score of INLINEFORM0 (average over a 5-fold), but the text sampling procedure is not specified in the paper. Thus, we prefer to use the sarcastic selection given by the Internet Argument Corpus website which is also a bit larger (998 sarcastic and 997 non-sarcastic texts).
Accuracies and F-scores of all the classifiers at varying T-SVD size are plotted in Figure FIGREF74 , best values of F-score, precision and recall are reported in column IAC-Sarcastic of Table TABREF49 . The best result (F= INLINEFORM0 ) is lower than in SarcasmCorpus, despite IAC-Sarcastic being balanced and larger than SarcasmCorpus. With Traditional LSA the INLINEFORM1 -scores are generally slightly lower, but the precision values are higher.
The results from Poria et al. cambria2016 are significantly higher, suggesting that in this dataset sarcasm can be detected in most cases with the linguistic features used by their network, independently of the context.
For the irony-context corpus, we used the same 1949 documents selected for the experiments reported in Wallace et al. wallace2014humans. To allow fair comparisons, we used only the texts of the comments, without any contextual information.
The authors report a mean F-score over the five folds of 0.383, obtained by using a bag-of-words representation with 50,000 tokens, plus some other binary features that have proven useful in other works, and an SVM classifier with a linear kernel. Our results are plotted in Figure FIGREF78 and reported in column irony-context of Table TABREF49 , where it is shown how our classifiers clearly outperform the baseline. Our maximum F-score of INLINEFORM0 represents a relative improvement of 20%. Moreover, it is important to highlight the remarkably low values obtained on this corpus when compared with the results from the previous corpora. This is certainly due in part to the high skewness between the classes; in fact, the positive samples are just 537 out of 1949 (27.5%). However, if we consider that in SarcasmCorpus the sarcastic texts are only 33% of the total, we suppose there are other causes. Another reason that can explain the poor results can be found in the diversity of topics, as the texts are extracted from six different forums, and the words used for sarcasm can be highly specific to a given context, both cultural and topical. In Wallace et al. wallace2014humans it is explicitly said that the rate at which annotators request the context is high for the sarcastic texts. As a consequence, correctly classifying the texts without a context is difficult even for humans. Moreover, the forums from which the posts were extracted are highly controversial, as they regard politics or religion. As a consequence, it is difficult to grasp the sarcasm of a text without knowing the author's opinions.
The results with Traditional LSA are very similar to those with Statistical LSA, and the real surprise is the remarkably low scores obtained by the random forest and gradient boosting methods.
In this case, we wanted to compare our results against those from Oraby et al. oraby2016creating, which deal with the three sub-corpora separately. However, the results are not directly comparable because, at the time of our experiments, only half of the corpus had been released, consisting of 3260 posts in the generic sub-corpus, 582 in the hyperbole sub-corpus and 850 in the rhetorical-questions sub-corpus. The three sub-corpora are all balanced.
Results computed on the three sub-corpora are plotted in Figures FIGREF75, FIGREF76, FIGREF77 and reported in the last three columns of Table TABREF50. Despite the difference in data availability, the results are quite encouraging. In fact, we can see that our method reaches an INLINEFORM0 -score of INLINEFORM1 in the generic sub-corpus, slightly better than the previous study. Moreover, it also improves over Oraby et al. (2016) in the other two sub-corpora, although only when using Traditional LSA.
Nonetheless, these results show that it is possible to achieve very good performance when high-quality labeled corpora are available, even with a limited number of examples.
For the CNN, we have results only in the generic sub-corpus, and this is the only case in which at least one of our models can outperform it in terms of F-score.
SemEval 2018 Task 3A
The last experiment on a single dataset was performed on the settings of SemEval 2018 Task 3A (Van Hee et al. 2018), which is a shared task on a binary classification of irony, which we introduced in Section SECREF47 .
We start by performing 10-fold cross-validation with our classifiers over varying LSA dimensionality to choose the best setting. We used the same set of hyper-parameters used for the previous experiments.
Once we have found the best setting, we train the model again on all the training data and predict the classes of the test tweets. We found that we obtain the best results in cross-validation with LSA vectors of size 20, and the results are presented in Table TABREF59 . We list results for four different classifiers, namely logistic regression, support vector machine, gradient boosting and random forest. In this case, we get the best results using random forests, followed by gradient boosting. In particular, the random forest obtains an F INLINEFORM0 -score of INLINEFORM1 , which is higher than the 6th-ranked submission. It is worth noting that the submissions that we list in the table, except for the baseline, all use approaches based on deep learning. Compared to the unigram SVM baseline used for the shared task (row 11 in Table 4), our model with the random forest is clearly better according to all the metrics, while our model with SVM is better in terms of F INLINEFORM2 score but not accuracy.
Our model is certainly not the best one in terms of accuracy, and showing its superiority over all the others is not the goal of this work; however, the best performers, i.e. deep learning networks, involve a high number of parameters and a high computational training cost. There are some additional points worth noting. First, the submission by BIBREF50 also makes use of deep neural networks but does not achieve a higher score than our best. Second, the submission by BIBREF51 uses SVMs over syntactic, semantic, and affective features, but is still not better than our best score. The models that showed a clear superiority use deep networks pre-trained on external data to extract more meaningful features. Thus, while their advantage is real, the number of parameters and the amount of data used are much higher.
Inter-corpora Experiments
The second group of experiments is aimed at finding out whether sarcasm is domain-dependent, or whether the knowledge acquired on one dataset can be transferred to another. We evaluate the similarity among the datasets by training a model on all the data of one corpus and using a second corpus as a test set. Our best results for every corpus pair are listed in Tables TABREF62 and TABREF63 , where the rows indicate the training set and the columns the test set. Quite interestingly, unlike the in-corpus experiments, where logistic regression works better in some cases, all the top scores that we report for these experiments are obtained by using the SVM classifier.
In Table TABREF62 we find the results for SarcasmCorpus and IAC-Sarcastic used as test sets. For the case of SarcasmCorpus, the F-scores are quite low compared to the in-corpus experiments. In fact, here we obtain the best result of only INLINEFORM0 when IAC-Sarcastic is the training set, which is much lower than the scores of about 70 that we get in the in-corpus experiments (column SarcasmCorpus in Table TABREF48 ). The low results suggest that the sarcasm conveyed by the texts in SarcasmCorpus is somehow different from what we can observe in the other corpora.
When we use IAC-Sarcastic as a test set, we can observe higher scores (column IAC-Sarcastic in Table TABREF62 ), and the F-score of INLINEFORM0 that we obtain by training on IAC-Sarcastic-v2 is comparable to INLINEFORM1 , which is the best result in the in-corpus experiments. The lowest result, obtained when training on irony-context, is also quite close to the result of the in-corpus experiment, which is unexpected given the poor in-corpus results for irony-context (column Irony-Context in Table TABREF49 ). When irony-context is the test set (first three columns of Table TABREF63 ), we can observe again that the F-score obtained by training on IAC-Sarcastic-v2 is higher than the score obtained in the in-corpus experiment. Nonetheless, all the scores for this test set are lower than INLINEFORM2 , with high recalls and low precisions.
When using IAC-Sarcastic-v2 as the test set (see the last three columns of Table TABREF63 ) we can observe F-scores between INLINEFORM0 and INLINEFORM1 , characterized by a high recall and lower precision. The top F1 score is obtained when using IAC-Sarcastic as a training set, which also corresponds to the highest precision. This is further evidence of the similarity of the two corpora. The top recall score of INLINEFORM2 is obtained by training on SarcasmCorpus, but the precision is much lower than in the other two cases.
Overall, it is worth noting that, for all the experiments, the top results are obtained by training on either IAC-Sarcastic or IAC-Sarcastic-v2, while SarcasmCorpus is always better than irony-context. Considering that the quality of the features depends on the quality of the data and of the annotation, we suppose that the quality of the first two datasets is higher than the quality of irony-context, while the data contained in SarcasmCorpus are too different from the other corpora. A deeper analysis of the corpora can be found in the discussion (Section SECREF71 ).
Union Experiments
The last group of experiments we ran has the goal of understanding whether the combination of data coming from different sources can positively influence the final score. For this purpose, as anticipated in Section SECREF51 , we computed 10 folds of each of the four corpora used for the first group of experiments, and used as a training set the concatenation of 9 folds of every corpus, and as a validation set the remaining single fold of each corpus. A sketch of this fold construction is shown below.
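For clarity, the sketch below shows one way the union folds just described could be assembled; the `corpora` dictionary and the use of scikit-learn are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def union_folds(corpora, n_splits=10, seed=0):
    """Yield the union training set (9 folds of every corpus) together with the
    held-out fold of each corpus. `corpora` maps a corpus name to (X, y), where
    X holds the LSA document vectors and y the binary sarcasm labels."""
    splits = {name: list(StratifiedKFold(n_splits, shuffle=True, random_state=seed)
                         .split(X, y))
              for name, (X, y) in corpora.items()}
    for k in range(n_splits):
        X_tr = np.vstack([corpora[n][0][splits[n][k][0]] for n in corpora])
        y_tr = np.concatenate([corpora[n][1][splits[n][k][0]] for n in corpora])
        held_out = {n: (corpora[n][0][splits[n][k][1]],
                        corpora[n][1][splits[n][k][1]]) for n in corpora}
        yield X_tr, y_tr, held_out
```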
From Tables TABREF64 and TABREF65 we can observe that these results are overall not higher than the inter-corpora results. The only exceptions are SarcasmCorpus, where the results are almost 20 F-score points higher than those obtained in the inter-corpora setting, and IAC-v2, where gradient boosting (XGB) obtains 2 F-score points more than the top score in the inter-corpora results.
The results on SarcasmCorpus are still lower than the in-corpus results, and the scores of random forest and gradient boosting are much lower than the other two methods. This is further evidence that adding diverse data is not helpful, or is actually harmful, for classifying SarcasmCorpus.
The general trend of this block of experiments is that our classifiers are not able to leverage data from different domains in order to improve global results. In-domain data represent the best choice even if the data amount is lower.
Discussion
In this section, we discuss our results from a more general point of view. We start by briefly discussing the content of the different corpora. Then we try to relate the results of the different types of experiments. Finally, we detect the limits of our experiments for the type of documents we worked with.
The corpora we used for our experiments are characterized by high internal variability in style, as each corpus consists of texts from thousands of different authors. Despite the number of authors, there are some factors that depend on the type of text and the medium. For instance, the irony-context, IAC Sarcastic, and IAC Sarcastic v2 corpora are made of posts collected from online forums, which are mostly about politics. Most of the texts are extracted from longer arguments, and thus the style is informal and the tone is generally aggressive.
In Tables TABREF67 , TABREF68 and TABREF69 we show some randomly selected samples from these corpora. As is apparent from the samples, the posts have a target to attack, which can be another user or the subject of the discussion. Table TABREF67 shows some examples from IAC-Sarcastic. In all the examples the author attacks another user or their opinions. For instance, the first and the third sarcastic examples make sarcasm about the Bible to attack another user's religious ideas, while in the second example the author uses sarcasm to expose a fallacious position of another user without appearing rude. By contrast, the non-sarcastic examples are much more direct about their meaning.

A similar pattern can be found in the examples from IAC Sarcastic v2 (Table TABREF69 ). Sarcasm is again used to attack a person (first example) or their opinions (second example), possibly religious ones. The third example shows that also in this corpus some sentences are hard to classify. In this case, the information that we get is that the target has ultraconservative ideas, but it is not easy to grasp the sarcasm.

The examples from irony-context (in Table TABREF68 ) are much more difficult to grasp without knowing contextual information. For instance, the first sarcastic example can be either sarcastic or regular according to the political opinion of the author: it is sarcastic if the author is a Republican, and it is not sarcastic (although it would appear strange to write) if the author is a Democrat. The second and the third examples are hard to classify without knowing the subject of the conversation. The same issue of missing a broader context also appears in the non-sarcastic examples, and the third example can easily be interpreted as sarcastic by humans.

In SarcasmCorpus the situation is different, as there is no ongoing argument, and the sarcasm is directed against products that the author did not like. In this case, there are many references to the external world and the writing is more passionate in its negative stance. Some samples are shown in Table TABREF66 . The sarcastic examples in Table TABREF66 all express a negative sentiment and also use negative words. Sarcasm is used within these negative reviews to attack the product in a more creative way and make the text more entertaining than a usual negative review. The non-sarcastic reviews, on the other side, give a description of the product and the authors' experience with it, expressing the sentiment in regular forms (“are also a great feature”, “It is a great little camera”). We suppose that this difference in style is the main obstacle to a correct classification of SarcasmCorpus instances in the cross-corpora experiments.
We now discuss the relations among the results of the different experiments to gain some further insights into the sarcastic content of our corpora. From the in-corpus experiments, we obtain good results on SarcasmCorpus, which is the only corpus containing Amazon reviews. Unfortunately, when we train our models in a cross-corpora or all-corpora setting, our results drop dramatically, especially in the cross-corpora case. These results mean that the sarcasm in SarcasmCorpus is conveyed through features that are not present in the other corpora. This is especially true when considering that in the inter-corpora experiments, using SarcasmCorpus as a training set in all cases yields results that are only better than the ones obtained when using irony-context as a training set.
The results on irony-context show that this corpus is much more difficult to classify than the others, as was pointed out also in the paper that presented it (Wallace et al. 2014), which highlights how the human annotators needed to read the context to be sure about the sarcastic posts. In the inter-corpora experiments, the results when training on irony-context are the worst for all the test sets, but only by a few points of F-score, whereas at first we might have expected dramatically lower results. We take this as a strong suggestion that the types of texts present in irony-context are similar to the ones present in IAC-Sarcastic-v2, but of lower quality. This is also further evidence that the dataset annotators do not treat sarcasm and irony as two different linguistic phenomena.
The two versions of IAC-Sarcastic have proved to be the easiest to classify when using other corpora for training. The best result in IAC-Sarcastic is obtained in the Union experiment (see Tables TABREF64 , TABREF65 ), and thus it benefits from the higher amount of data, especially from the data from IAC-Sarcastic-v2, as can be observed from the cross-corpora results (Table TABREF62 ).
By contrast, the best results on IAC-Sarcastic-v2 are obtained with the in-corpus experiments, while all the results obtained in the inter-corpora experiments are clearly worse. Among the inter-corpora experiments, training the model on IAC-Sarcastic results in an F-score of INLINEFORM0 , which means a relative decrement of INLINEFORM1 with respect to the top score of the in-corpus experiments on IAC-Sarcastic-v2. It is interesting to note that one cause of the decrement can also be the size of the corpora: IAC-Sarcastic contains only 1995 texts, while IAC-Sarcastic-v2 contains 3260.
One final remark is about the absolute scores obtained in the in-corpus experiments. We can notice that in SarcasmCorpus the F-score can go beyond INLINEFORM0 , and up to INLINEFORM1 by adding the star rating as a feature. The high result can be explained by the peculiarity of this corpus, where sarcasm is present mostly in negative reviews and the star label is the single best indicator of sarcasm BIBREF49 . The other corpora consist of texts that belong to a thread of forum posts. Sometimes it is reasonable to classify such posts as sarcastic or not out of context, but in many cases it is impossible even for humans (see the examples in Table TABREF68 ). In fact, the low F-score in irony-context is due to low precision, which is an indicator of high similarity between the positive and negative classes. Moreover, low precision and higher recall is a pattern that is present in most of the experiments, even if with higher absolute numbers. The combination of high recall and lower precision suggests that the dubious texts are classified as sarcastic more often than not.
Conclusions
In this work, we have tackled the problem of automatic sarcasm detection from a data-driven point of view. More in detail, we have used a set of labeled datasets and applied distributional semantics followed by some machine learning approaches in order to give a baseline for the literature in managing such a problem. We do not differentiate between sarcasm and irony because they are not easily distinguishable even for human experts. Experiments have been carried out on four different corpora containing texts from online reviews or forums, and on the corpus used for the shared task on irony detection on Twitter proposed at SemEval 2018. We have shown experimentally that some basic methods can outperform, on all the datasets, other methods based on bag of words and linguistic features, thus representing a solid baseline. With our experiments that train the models on one corpus and test them on the other corpora, we have confirmed experimentally that the annotators also tend not to distinguish between irony and sarcasm. By contrast, major differences can be found according to the text domains, i.e., review vs. political forum. The domain difference can also prevent the method from benefiting from more data when they are too diverse from the test data. As future work, we will try to improve distributional semantics approaches with linguistic features in order to perform fairer comparisons with more recent and advanced methods. Furthermore, we will exploit more classical AI methodologies (e.g., ontologies, reasoners, common-sense reasoning techniques) to deduce the context, understand the concepts expressed in a sentence, and exploit features like hashtags and emojis to improve the overall performance of the approach.
Q: In which domains is sarcasm conveyed in different ways?
Text: Introduction
Affective computing has attracted a great deal of interest in recent years. Picard picard1995affective introduced it as a computing paradigm that relates to, arises from, or influences emotions, letting computers be both more effective in assisting humans and more successful in making decisions.
Language, as a conceptual process, plays a key role in the perception of verbal irony and sarcasm, two well-known forms of figurative language (FL) BIBREF0 . Traditionally, irony as a figure of speech can be understood as “saying something while meaning something else” BIBREF1 . A comprehensive overview of different theories of irony has been illustrated in Attardo attardo07. Understanding whether irony and sarcasm are the same linguistic phenomenon or not is still an unresolved question in the literature BIBREF2 . Some authors consider irony a more general form of sarcasm, while others tend to consider it a separate linguistic issue BIBREF3 , BIBREF4 . According to the theory of sarcastic irony, sarcasm and irony are very similar, but sarcasm has a specific victim who is the object of the sarcastic statement, while irony does not have such a target BIBREF5 . More commonly, the noun “sarcasm” is understood as “saying the opposite of what one is thinking”, usually with a negative intention. Henceforth, due to the different nuances of irony and sarcasm, and the multiple interpretations of these two concepts, we do not differentiate between them, and, like many researchers, e.g., BIBREF6 , we will use the term “sarcasm” to refer to both verbal irony and sarcasm.
A sarcastic sentence may include features that characterize a positive sentiment, but that insinuate a negative sentiment BIBREF7 , BIBREF8 . It is clear that sarcastic sentences are more difficult for an algorithm to process than non-sarcastic assertions; as a matter of fact, both the situation and the mental state of the speaker are factors that can determine sarcastic content in a sentence.
A system capable of detecting sarcasm correctly would greatly improve the performance of sentiment analysis systems BIBREF9 , BIBREF10 , BIBREF6 , BIBREF11 , especially considering the big data available nowadays due to the exponential growth of social platforms. Unfortunately, sarcasm detection in written texts is a difficult task even for humans BIBREF12 .
Moreover, some people usually do not understand sarcasm, and there are sentences meant as being sarcastic by the author that are not recognized as such by the readers.
We focus our attention on the possibility of detecting sarcastic sentences automatically from written text only, and from the reader's point of view. Managing this task without any knowledge of relevant contextual features, like prosody, is very hard.
The problem of sarcasm detection has been tackled with machine learning approaches, made possible by the availability of several annotated corpora. In the literature we can find two main categories of such corpora: automatically annotated and manually annotated.
The automatically annotated corpora are usually collected from the microblogging platform Twitter BIBREF13 , BIBREF14 by exploiting the final hashtag of tweets. For instance, a tweet is labeled as sarcastic only if it ends with a hashtag such as #sarcasm or #irony. The same cue is used in Davidov, Tsur and Rappoport davidov2010semi to produce a silver standard for evaluating their model.
Manually annotated corpora are collected from a more diversified range of social media, such as Amazon reviews BIBREF15 , Reddit (Wallace et al. 2014) or online forums BIBREF16 , BIBREF17 , and then labeled by hiring people in the Amazon Mechanical Turk portal. When using crowdsourcing, the annotation procedures are complex and involve, among others, a stage for ensuring that the workers understood the task and they are performing correctly, and a quality assurance stage for removing texts for which a high discrepancy between the annotators arises.
In this work we have tackled the problem of sarcasm detection by trying to use an entirely data-driven approach, exploiting a distributional semantics representation by inducing a semantic space and then applying a set of classifiers to classify the texts as being sarcastic or not sarcastic. With “fully data-driven” we mean approaches that are capable of finding connections between input text and class labels without using any a priori knowledge about the features that characterize a sarcastic statement.
In particular, we do not define “irony” or “sarcasm”, neither use any definition. We simply rely on sets of sentences binary labeled for sarcasm detection taking for granted that the labels correctly identify a sarcastic sentence.
It is worthwhile to point out that in this work we do not create any dataset: we simply exploit the labels of datasets that have already been produced by others, trying to give a baseline for the sarcasm detection task.
The contribution of this work can be summed up in three key points:
To reach these goals, we exploit a Distributional Semantics approach, whose aim is to give a representation of words in a continuous vector space BIBREF18 , BIBREF19 , where word similarity is coded in an unsupervised manner. This representation is useful for building models with little, or no, a-priori knowledge about the task BIBREF20 .
Distributional semantics is a research field that concerns methodologies aimed at determining semantic similarities between linguistic items. The key idea is based on the hypothesis that words co-occurring in similar contexts tend to have similar meaning BIBREF21 , BIBREF22 . Distributional semantics deals with the automatic construction of semantic models induced from large unstructured textual corpora, and it exploits vector space models to represent the meaning of a word BIBREF23 . Many methods can be applied to construct distributional models. They range from statistical models to machine learning ones BIBREF24 , BIBREF19 , BIBREF25 , BIBREF26 . Among these techniques, Latent Semantic Analysis (LSA) is a methodology for building distributional semantic spaces that extracts statistical relations between words which co-occur in a given context through the use of Truncated Singular Value Decomposition (T-SVD). In this work we explored and studied the possibility of building a data-driven model in the field of sarcasm detection exploiting the well-known Latent Semantic Analysis (LSA) paradigm both in its traditional formulation given by Landauer, Foltz and Laham landauer1998introduction and by using the Truncated Singular Value Decomposition (T-SVD) as a statistical estimator as illustrated in Pilato and Vassallo pilato2015tsvd.
Both approaches have been used to create data-driven semantic spaces where documents and, generally, text chunks can be mapped.
The theory behind LSA states that the “psychological similarity between any two words is reflected in the way they co-occur in small sub-samples of language” (Landauer et al. 1998).
We have chosen to exploit the LSA paradigm since it is a well-known distributional semantics paradigm capable of modeling many human cognitive abilities; furthermore, it has many potential practical applications BIBREF27 , BIBREF18 , BIBREF28 , BIBREF29 . Moreover, it has been demonstrated in Pilato and Vassallo pilato2015tsvd that Truncated Singular Value Decomposition (T-SVD), as used in LSA, can be interpreted as a statistical estimator, giving a robust theoretical interpretation to the Latent Semantic Analysis paradigm. Many researchers have successfully applied this technique for typical Semantic Computing applications, such as natural language understanding, cognitive modeling, speech recognition, smart indexing, anti-spam filters, dialogue systems, and other Statistical Natural Language processing problems BIBREF30 , BIBREF31 , BIBREF32 . Moreover, Latent Semantic Analysis has been successfully used for inducing data-driven “conceptual” spaces BIBREF33 . For the aforementioned reasons, we have chosen this approach as a baseline for the detection of sarcasm in texts.
Furthermore, our study makes use of four machine learning methods that have been used on four manually annotated, publicly available corpora.
The experimental results show that our data-driven approach consisting of LSA followed by a classifier can establish models that outperform the published results on two of the corpora; additionally, it produces competitive results for the other corpora that we used for our evaluation.
The next section describes the state of the art in the field, Section SECREF3 describes the Semantic Representation and the Machine Learning methods used in the study. Section SECREF4 introduces the datasets used for the experiments. Section SECREF5 summarizes the experimental results, Section SECREF6 is for the final conclusions and remarks.
The code and the datasets used for the experiments are available on github.
Related works
The problem of sarcasm detection has been tackled using a wide range of supervised or semi-supervised techniques applied to corpora from different social media sources.
In the present work, we do not collect a new corpus for sarcasm detection, but sarcastic corpus annotation has received much attention in the literature. Most of the works have used unsupervised or semi-supervised approaches in order to reduce the cost of the annotation, while partially sacrificing the data quality. One of the first approaches was introduced by Tsur, Davidov and Rappoport tsur2010icwsm for a corpus extracted from Twitter and further developed in Davidov et al. davidov2010semi with a corpus consisting of Amazon reviews. This semi-supervised approach uses the “YAHOO! BOSS” API web search for collecting INLINEFORM0 utterances similar to the ones in a small initial labeled seed set. It was the first work to show that automatically-crawled data are useful for the task of sarcasm detection.

Most of the works have been pursued using data extracted from Twitter, as it is relatively easy to extract ironic or sarcastic tweets using the search by hashtag. In fact, in Twitter, the restricted number of characters allowed encourages users to mark the ironic intent with a hashtag like #irony or #sarcasm to prevent ambiguities. The hashtag is usually removed from the tweets and used as a label for the silver standard. Moreover, the first studies on Twitter data showed that the task is quite difficult also for human beings. González-Ibánez et al. gonzalez2011identifying collected a corpus of INLINEFORM1 tweets balanced between sarcastic, positive sentiment and negative sentiment. They presented a part of the corpus to human judges, who achieved low agreement and low accuracy. Reyes et al. reyes2013multidimensional collected a corpus using 4 hashtags that identify four different categories, irony, education, humor, and politics, with INLINEFORM2 tweets each. The same corpus was used in a later work BIBREF34 . Their results suggest that detecting sarcasm in full documents is easier than in single sentences because of the presence of a context, but in both cases it remains a difficult task also for humans, who often show low agreement.

The specific case of a positive sentiment in a negative situation, which is the most typical sarcastic situation, has also been analyzed BIBREF35 . In particular, the authors found that less than half of the tweets ending with the hashtag #sarcastic are recognized as sarcastic by humans after removing the hashtag. Bharti, Babu, and Jena bharti2015parsing proposed two algorithms with the goal to find, respectively, tweets with a contrast between sentiment and situation, and tweets starting with interjections. They also found that the label distribution does not correlate perfectly with the hashtag distribution, e.g., only INLINEFORM3 out of INLINEFORM4 tweets ending with #sarcastic are actually sarcastic. Farias, Patti and Rosso farias16 proposed a method that uses affective content to classify sarcastic tweets, and showed that it outperforms preceding methods in several Twitter benchmarks.

Since classifying tweets by using only the text is a difficult task also for humans, other works proposed new methods capable of exploiting other kinds of data, like the identity of the author or the thread of the tweet. Bamman and Smith bamman2015contextualized augmented the feature vectors with features describing the author of the tweet and the user to which the tweet is addressed, obtaining significant improvements in accuracy. They also found that the hashtags #sarcasm and #sarcastic are mainly used when the audience is not known.
Wang, Wu, Wang and Ren wang2015twitter use a sequential classifier for classifying tweets taking into account the previous responses, thus improving the performance with respect to a simple multi-class classifier.
Amir, Wallace, Lyu, Carvalho and Silva amir2016modelling used the dataset collected in Bamman et al. bamman2015contextualized (which was not completely available) for training a deep learning model that could represent users with user embeddings and this method seems to outperform the method from Bamman and colleagues. Sarcasm classification on Twitter involves different modelling techniques that perform better when taking into account the user and the thread history of a Tweet. Our work focuses on the task of classifying a single document written by a single author. Thus, we focus mainly on different kinds of datasets. Buschmeier, Cimiano and Klinger buschmeier2014impact have studied the corpus introduced in Filatova filatova2012irony by extracting a high number of features about typographic cues that can represent sarcasm, and used different classification methods obtaining results that vary significantly according to the classifier. They found that the single most important feature is the star rating of the review, and this happens because sarcastic reviews are more probable when a user did not like the product.
Wallace et al. wallace2014humans created a corpus from Reddit posts, for which they also stored context information, such as the post that is answered. The authors proposed a method that uses the bag of words and other features from previous studies for building an SVM classifier that gets very low results. Moreover, a correlation is found between posts for which the humans require the context and sarcastic posts. This can be explained by considering that the chosen sub-reddits are about religion or politics, and they are thus very prone to controversial discussions. Consequently, to understand the ironic intent of a post it is quite important to know the author position on the topic and also the posts they are answering to.
Joshi, Sharma and Bhattacharyya joshi-sharma-bhattacharyya:2015:ACL-IJCNLP used features for capturing intrinsic and extrinsic incongruity in texts and outperformed two previous methods both on tweets and on forum posts. These works represent valuable means of comparison for the present work. We show that an approach based only on distributional semantics is competitive with other approaches using more elaborate feature engineering, even when the data amount is quite small. Distributional semantics became popular in NLP thanks to the availability of good quality word embeddings BIBREF19 , and is introduced by design in deep learning models. In sarcasm detection, distributional semantics has been used to serve different roles. Ghosh, Guo, and Muresan ghosh2015sarcastic have adopted word embeddings to disambiguate a literal use of single words from a sarcastic use. Joshi, Tripathi, Patel, Bhattacharyya and Carman joshi2016word use word embeddings to compute incongruities among words, using them as additional features for methods selected from the literature. Our work differs from these as we use LSA instead of word embeddings, and distributional semantics is the only kind of feature we use. Ghosh and Veale ghosh2016 use LSA to extend the list of hashtags to find more sarcastic tweets on Twitter and use a deep neural network to perform the actual classification. Our work differs from theirs as we use LSA to compute the vectorial representation of documents and we do not perform tweet crawling. Poria, Cambria, Hazarika and Vij cambria2016 train a convolutional neural network to classify sarcasm in tweets. They extend the neural network with features extracted from other datasets for sentiment, emotion and personality classification, as these features are considered to be useful for the task of sarcasm detection.
Data-Driven Induction of Semantic Spaces and Traditional Classifiers
We focused our research on the role that fully data-driven models can play in detecting sarcasm. To reach this goal, we exploited the Latent Semantic Analysis paradigm both in its traditional formulation (Landauer et al. 1998) and by using the Truncated Singular Value Decomposition (T-SVD) as a statistical estimator as shown in Pilato et al. pilato2015tsvd. We have chosen to use the LSA paradigm to exploit a well-known and well-founded approach for inducing semantic spaces that have been effectively used in natural language understanding, cognitive modeling, speech recognition, smart indexing, and other statistical natural language processing problems. The sub-symbolic codings of documents obtained by the aforementioned LSA-based approaches are then used as inputs by a set of classifiers to evaluate the differences of performances obtained by using different machine learning approaches and testing them on different sarcasm-detection datasets.
The full work-flow is composed of the following steps:
- preprocessing of the text;
- data-driven induction of semantic spaces by means of LSA-oriented paradigms;
- mapping of new documents to the semantic space;
- supervised learning with traditional classifiers.
The work-flow does not require any expert or domain knowledge.
Preprocessing of text
The first step of preprocessing for texts is the tokenization using spaces, punctuation and special characters (e.g., $, , @) as separators. Thus one token is a sequence of alphanumeric characters or of punctuation symbols. The set of all the extracted tokens constitutes a “vocabulary” named INLINEFORM0 .
The sequences of tokens, each representing a single document in the training set, are used to generate a word-document co-occurrence raw matrix INLINEFORM0 , where each INLINEFORM1 cell contains the number of times the token INLINEFORM2 appears in the document INLINEFORM3 . Let INLINEFORM4 be the number of tokens, i.e., INLINEFORM5 , and let INLINEFORM6 be the number of documents of the corpus used for computing the matrix INLINEFORM7 ; the dimensionality of INLINEFORM8 is INLINEFORM9 .
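As a minimal illustration of this step, the sketch below builds such a raw count matrix with scikit-learn; the token pattern and the toy documents are assumptions made for the example, not the exact tokenizer used in our implementation.

```python
from sklearn.feature_extraction.text import CountVectorizer

# Toy documents standing in for the training texts.
train_docs = [
    "Oh great, yet another phone that dies after an hour!",
    "The battery lasts two full days and charging is fast.",
]

# Tokens are runs of alphanumeric characters or single punctuation symbols.
vectorizer = CountVectorizer(token_pattern=r"\w+|[^\w\s]", lowercase=True)
doc_term = vectorizer.fit_transform(train_docs)   # shape: (documents, tokens)

# The matrix described in the text is token-by-document, i.e. the transpose.
raw_counts = doc_term.T
vocabulary = vectorizer.get_feature_names_out()
```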
Data driven induction of semantic spaces by means of LSA-oriented paradigms
The matrix INLINEFORM0 is used and further processed to induce proper Semantic Spaces where terms and documents can be mapped. To generate these semantic spaces, we have used both the traditional LSA algorithm (Deerwester et al. 1990, Landauer et al. 1998) and the approach which uses T-SVD as a statistical estimator as proposed in Pilato et al. pilato2015tsvd. For the sake of brevity, we call this last approach Statistical LSA to differentiate it by the Traditional LSA. It is worthwhile to point out that, in the Latent Semantic Analysis paradigm (i.e., both “general” and “statistical”), the corpus used for building the semantic space plays a key role in performances. As a matter of fact, large and heterogeneous corpora may give more noise or too much specific information from a single domain, decreasing the accuracy of the induced models BIBREF36 .
The traditional LSA is a procedure that has been used mainly for information retrieval (Deerwester et al. 1990). The previously described matrix INLINEFORM0 is used for computing a Tf-Idf (Term-Frequency Inverse-document frequency) matrix INLINEFORM1 BIBREF37 . Let INLINEFORM2 be the rank of INLINEFORM3 . The following factorization, called Singular Value Decomposition (SVD) holds for the matrix INLINEFORM4 : DISPLAYFORM0
where INLINEFORM0 is a INLINEFORM1 orthogonal matrix, INLINEFORM2 is a INLINEFORM3 orthogonal matrix and INLINEFORM4 is a INLINEFORM5 diagonal matrix, whose diagonal elements INLINEFORM6 are called singular values of INLINEFORM7 . It can be shown that the singular value decomposition of INLINEFORM8 is unique up to the order of the singular values and of the corresponding columns of INLINEFORM9 and INLINEFORM10 , so there is no loss of generality if we suppose that INLINEFORM11 are ranked in decreasing order.
Let INLINEFORM0 be an integer such that INLINEFORM1 , let INLINEFORM2 be the matrix obtained from INLINEFORM3 by removing its last INLINEFORM4 columns, INLINEFORM5 the matrix obtained from INLINEFORM6 in the same manner and INLINEFORM7 the diagonal matrix obtained from INLINEFORM8 by suppressing both its last INLINEFORM9 rows and INLINEFORM10 columns. INLINEFORM11 is the matrix containing the INLINEFORM12 -dimensional vector representation of the words and INLINEFORM13 is the matrix containing the INLINEFORM14 -dimensional vector representation of the documents. It can be shown (Deerwester et al. 1990) that the matrix: DISPLAYFORM0
is the best rank INLINEFORM0 approximation to INLINEFORM1 according to the Frobenius distance. INLINEFORM6 is called the reconstructed matrix. The process by which INLINEFORM7 is obtained from INLINEFORM8 is called Truncated Singular Value Decomposition (T-SVD). The book by Golub and Van Loan golub1996matrix provides further details about the Singular Value Decomposition technique.
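A compact sketch of this traditional LSA step follows, combining Tf-Idf weighting with a rank-r truncated SVD; the random count matrix and the use of scikit-learn's TruncatedSVD (which works on the document-by-token orientation) are illustrative assumptions, not our exact implementation.

```python
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfTransformer

# Stand-in for the real counts: 500 documents over a 2000-token vocabulary.
X_counts = sparse_random(500, 2000, density=0.01, format="csr", random_state=0) * 10

r = 40  # dimensionality of the induced semantic space (one of the explored values)

X_tfidf = TfidfTransformer().fit_transform(X_counts)   # Tf-Idf weighted counts

svd = TruncatedSVD(n_components=r, random_state=0)
doc_vectors = svd.fit_transform(X_tfidf)   # rows: r-dimensional document codings
token_vectors = svd.components_.T          # rows: r-dimensional token codings
```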
The traditional Latent Semantic Analysis based on T-SVD is one of the possible methods to infer data-driven models. Furthermore, one of its major drawbacks, which is the lack of a sound statistical interpretation, has recently been overcome in Pilato et al. pilato2015tsvd, where the authors presented a statistical explanation of this paradigm.
According to this interpretation, the T-SVD algorithm, as used in the Latent Semantic Analysis paradigm, acts as an estimator, which conveys statistically significant information from the sample to the model.
To briefly sum-up the procedure, we recall here the concepts of probability amplitude and probability distribution associated with a matrix as they have been defined in Pilato et al. pilato2015tsvd.
Let INLINEFORM0 , INLINEFORM1 be two positive integers and let INLINEFORM2 be the set of real numbers. Given a INLINEFORM3 matrix INLINEFORM4 with INLINEFORM5 , INLINEFORM6 , INLINEFORM7 where at least one of its components INLINEFORM8 is positive, we define a set INLINEFORM9 , composed of all the pairs INLINEFORM10 that identify the positive components of INLINEFORM11 , i.e.: DISPLAYFORM0
Subsequently, we define the probability amplitude associated with INLINEFORM0 , the INLINEFORM1 matrix INLINEFORM2 resulting from the mapping INLINEFORM3 : DISPLAYFORM0
whose elements INLINEFORM0 are computed as: DISPLAYFORM0
so that INLINEFORM0 it is INLINEFORM1 and INLINEFORM2 .
We define also the probability distribution associated with a matrix INLINEFORM0 the INLINEFORM1 matrix resulting from the mapping INLINEFORM2 : DISPLAYFORM0
whose elements are the squares of the elements of INLINEFORM0 , i.e. INLINEFORM1 . The method starts with a raw data matrix INLINEFORM2 consisting of positive values. In our study the raw data matrix INLINEFORM3 is the term-document co-occurrence matrix. From INLINEFORM4 a real-valued normalized matrix INLINEFORM5 is computed by dividing every element for the sum of all elements of INLINEFORM6 . DISPLAYFORM0
If we call INLINEFORM0 the matrix: DISPLAYFORM0
The matrix INLINEFORM0 can be decomposed with the SVD technique: DISPLAYFORM0
and its best rank-r decomposition INLINEFORM0 is obtained by applying the T-SVD technique, which minimizes the Frobenius distance INLINEFORM1 , given INLINEFORM2 : DISPLAYFORM0
Even if INLINEFORM0 is not a probability distribution, the computation of INLINEFORM1 makes it possible to identify, without any further addition of external information, the probability distribution we are looking for. As shown in Pilato et al. pilato2015tsvd, it theoretically suffices to compute the probability amplitude associated to INLINEFORM2 , i.e. INLINEFORM3 , and consequently calculate the probability distribution INLINEFORM4 associated to INLINEFORM5 . The aforementioned Frobenius distance INLINEFORM6 constitutes an upper bound to the Hellinger distance between the sample probability INLINEFORM11 and the probability distribution estimated by the procedure.
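The following dense NumPy sketch summarizes the statistical variant just described: normalize the raw counts, take the element-wise square root, apply T-SVD, and map the result back through the amplitude and distribution operators defined above. It is a simplified reading of the procedure, not our exact implementation.

```python
import numpy as np

def probability_amplitude(B):
    """Amplitude mapping defined above: keep the positive entries and rescale
    them so that the squares of the kept entries sum to one."""
    pos = np.where(B > 0, B, 0.0)
    return pos / np.sqrt((pos ** 2).sum())

def statistical_lsa(raw_counts, r):
    N = raw_counts / raw_counts.sum()       # normalized matrix, summing to 1
    Psi = np.sqrt(N)                        # amplitude of the sample distribution
    U, s, Vt = np.linalg.svd(Psi, full_matrices=False)
    Psi_r = U[:, :r] * s[:r] @ Vt[:r, :]    # best rank-r approximation (T-SVD)
    Phi_r = probability_amplitude(Psi_r)    # amplitude associated with Psi_r
    return U[:, :r], s[:r], Vt[:r, :], Phi_r ** 2   # last item: estimated distribution

rng = np.random.default_rng(0)
toy_counts = rng.integers(0, 5, size=(2000, 500)).astype(float)  # token x document
U_r, s_r, Vt_r, P_r = statistical_lsa(toy_counts, r=40)
```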
Mapping new documents to the semantic space
Both LSA approaches illustrated in the previous subsections provide us with the three, obviously different for each approach, matrices INLINEFORM0 , INLINEFORM1 and INLINEFORM2 .
The INLINEFORM0 and the INLINEFORM1 matrices can be used for computing the vector representation of the new documents into the induced semantic space. The INLINEFORM2 matrix contains in its diagonal the singular values; INLINEFORM3 is composed by rows that represent the r-dimensional sub-symbolic, i.e., numerical, mapping in the semantic space of the tokens constituting the vocabulary INLINEFORM4 . Then, given a text chunk INLINEFORM5 , INLINEFORM6 is sub-symbolically represented by a INLINEFORM7 -dimensional word occurrence vector INLINEFORM8 , from which it is computed a vector INLINEFORM9 with two different procedures depending on which LSA paradigm has been chosen.
In the case of Traditional LSA, it is the Tf-Idf representation BIBREF38 of INLINEFORM0 by using the same parameters learned during training.
In the case of the Statistical LSA, the INLINEFORM0 vector is transformed into INLINEFORM1 similarly as the matrix INLINEFORM2 is transformed into the matrix INLINEFORM3 : DISPLAYFORM0
Once the appropriate coding of INLINEFORM0 has been computed, an r-dimensional vector INLINEFORM1 representing the sub-symbolic coding of INLINEFORM2 is then obtained from the vector INLINEFORM3 by means of the following mapping formula: DISPLAYFORM0
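A sketch of this mapping step is given below; we assume the standard LSA folding-in formula (project onto the token vectors and divide by the singular values), which is our reading of the mapping formula above rather than a verbatim transcription of it.

```python
import numpy as np

def map_document(q, U_r, s_r):
    """Fold a new document into the r-dimensional semantic space.
    q   : coded document vector (Tf-Idf coding for Traditional LSA, or the
          square-root coding for Statistical LSA), of length |V|.
    U_r : |V| x r matrix of token vectors.
    s_r : the r singular values.
    Returns the r-dimensional sub-symbolic coding of the document."""
    return (U_r.T @ q) / s_r
```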
Supervised learning
The training and test documents are mapped into the semantic spaces induced at the previous step. These vectors, sub-symbolic coding of the documents, are therefore used as inputs to different classifiers to train or test on them. Such classifiers will finally solve a binary classification problem assigning the label 1 (sarcastic) or 0 (nonsarcastic) to a generic document. For this study we have used Support Vector Machines, Logistic Regression, Random Forests, and Gradient boosting as they represent the state of the art for most of the binary classification problems with small datasets. In the following, we recall a brief description of them.
The logistic regressor (LR) is a generalized linear model suitable for binary responses BIBREF39 . In LR the following log-linear model is adopted: DISPLAYFORM0
where INLINEFORM0 represents the probability of the success outcome. A suitable way of minimizing the so called empirical risk is the numerical estimation of the INLINEFORM1 s coefficient by a maximum likelihood procedure: DISPLAYFORM0
where INLINEFORM0 is the training set, INLINEFORM1 is the norm of the weights vector used for regularization, and can be either the INLINEFORM2 or the INLINEFORM3 norm, and INLINEFORM4 is the weight to give to the regularization factor. The function in formula EQREF33 is convex, so it can be minimized even with the simple gradient descent algorithm, but more complex algorithms can be used in order to reduce the convergence time. In this work we use the trust region Newton method proposed by Lin, Weng and Keerthy lin2008trust, as provided by the LIBLINEAR library BIBREF40 .
A kernel INLINEFORM0 is any mapping satisfying DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 are elements in the input space, INLINEFORM2 is a mapping from the input space to a new representation space INLINEFORM3 where an inner product is defined. The function INLINEFORM4 is chosen to be nonlinear, and the dimension of the feature space is taken intentionally greater than the dimension of the input space. These choices could give the chance to make the classification problem linearly separable in INLINEFORM5 . Support vector machines (SVMs), also called kernel machines BIBREF41 are binary linear classifiers that make use of kernels. They search for the optimal hyperplane INLINEFORM6 in the feature space that maximizes the geometric margin, which is the distance of the hyperplane to the nearest training data point of any class. The main advantage of SVM is that it provides a solution to the global optimization problem, thereby reducing the generalization error of the classifier. The formulation of SVM can be easily extended to build a nonlinear classifier by incorporating a kernel of the class H DISPLAYFORM0
No systematic tools have been developed to automatically identify the optimal kernel for a particular application.
Decision trees BIBREF42 are rooted trees that can be used successfully as classifiers BIBREF43 . Each node of the three represents a binary rule that splits the feature space according to the value of a predictive feature and a path from the root to leaf nodes represents a series of rules that are used to recursively divide the feature space into smaller subspaces, where a class label is assigned. The structure of the tree in terms of split nodes can be learned from data by using several approaches. Random forests BIBREF44 are an ensemble of decision trees, found using the bootstrap sampling technique on the training set. In particular, a fixed number of random samples are extracted with replacement from the training set, and each of them is used as a training set to fit a decision tree. The forest is composed by each of these decision trees, and the final predictions are made by averaging the predictions from all the individual decision trees.
Boosting is another ensemble strategy with the special purpose of improving the combination of a set of weak classifiers. These are chosen to be of very low model complexity such as the case of decision trees with a single split. The general framework of boosting sequentially adds a tree to an ensemble, the new one with the goal of correcting its predecessor. Gradient boosting BIBREF45 uses a gradient-descent like procedure to sequentially improve a tree classifier. This is done by adding to the actual classifier a new decision tree learned from the residual errors made by the predecessor. The final predictions are made by the tree classifier resulting after a fixed number of iterations of the procedure.
Datasets
We have chosen 4 corpora for our experiments, all of them are publicly available and treating the problem as a binary classification: “SarcasmCorpus” (Filatova 2012) , “IAC-Sarcastic” BIBREF46 , which is a subset of Internet Argument Corpus1.0 prepared for sarcasm detection, “irony-context” (Wallace et al. 2014), and “IAC-Sarcastic-v2” (Oraby et al. 2016), which is extracted from the second version of Internet Argument Corpus BIBREF47 . In order to provide a more complete evaluation, we also use the corpus of the shared task “Semeval2018 Task 3A” BIBREF48 .
SarcasmCorpus
Filatova filatova2012irony collected 1254 reviews from Amazon for different kinds of products, of which 437 are sarcastic, and 817 are not sarcastic. The dataset is unbalanced toward the “regular” texts, and this is due both to the policy of Amazon, which explicitly requires sincere reviews and to the peculiarity of sarcasm itself, which is used only in some cases, especially because of the difficulty for humans to recognize it over the internet.
Each review in the corpus consists of the title, author, product name, review text and number of stars, and the review is a stand-alone document referring to a single product. This corpus, like all the others considered in this work, has been entirely hand-labeled by the Amazon Mechanical Turkers, who were asked whether each review contains sarcasm in it. Each text has been presented to 5 Turkers and has been classified as sarcastic when at least three among five workers agreed. The corpus contains INLINEFORM0 distinct tokens, with INLINEFORM1 occurring only in sarcastic reviews, INLINEFORM2 occurring only in regular reviews and INLINEFORM3 occurring in both categories. Buschmeier et al. buschmeier2014impact made an interesting analysis of the corpus by collecting some statistics and publishing the only classification results that are available for it up to now. They extracted 29 task-specific features and combined them with the bag-of-words representation and multiple classifiers. The bag of words resulted to be important for the classification. In fact, for example, they get a poor 50.9% F-score value with logistic regressor without bag-of-words, which is increased to 74% by using it. This result is surely related to the difference in terms used by the two classes, but it also shows that information about the words used in the document is needed for the task.
IAC-Sarcastic
The second dataset we used is the IAC-Sarcastic sub-corpus, which consists of 1995 posts coming from 4forums.com, a classical forum where several topics are discussed. This corpus is actually extracted from the larger Internet Argument Corpus (IAC), containing INLINEFORM0 discussions, INLINEFORM1 posts and INLINEFORM2 words. In IAC there are INLINEFORM3 Quote-Response (Q-R) pairs and INLINEFORM4 three-posts chains that have been manually labeled for several HITs (Human-Intelligence Tasks) by Amazon Mechanical Turk. For each Q-R item, the Turkers were asked to evaluate the response section by considering the quote as a context. One of the HITs regarded the identification of a sarcastic response. As a result, the IAC-Sarcastic Corpus consists of 1995 responses, without any quote, with a binary label that indicates the presence of sarcasm. 998 texts are labeled as sarcastic, and 997 are not, so this is one of the rare balanced datasets for this task. To the best of our knowledge, only the work by Justo, Corcoran, Lukin, Walker, and Torres justo2014 published results on the sarcastic task of the IAC dataset, but the authors made a different sampling of the documents from the one used for IAC-Sarcastic. Thus, our results for this corpus are not comparable with the ones reported in that work.
Irony-context
A third dataset is the one collected in Wallace et al. wallace2014humans. The main goal of that study was to highlight the role of the context of a text to make irony understandable by humans. The dataset is extracted from Reddit by collecting comments from the following six sub-reddits: politics, progressive, conservative, atheism, Christianity, technology, with their respective size of 873, 573, 543, 442, 312 and 277 samples. Each comment has been labeled by three university undergraduates using a browser interface which let them see the context of the comment in the form of previous comments or related pages under request. The label of a comment was selected with a simple majority of 2 out of 3 labelers. For each comment and each labeler, they stored whether the context has been requested and if the labeler changed his mind after having seen it. This allowed the authors to study the correlation between the sarcastic label and the requests for context.
The results allowed the authors to infer that the machines would also need the context for detecting sarcasm, as their model did not predict correctly the texts for which the humans required the context. This is an important cue that should be considered while developing sarcasm detection methods, even though we do not explicitly consider the context of our method. As a result, we cannot expect to obtain high absolute results for this dataset by letting the model observe only the single text.
IAC-Sarcastic-v2
In 2016 a new version of IAC was made available (IACv2) (Abbot et al. 2016), and after some months also the sarcastic sub-corpus was released (Oraby et al. 2016), which is bigger than the first version. It consists of three sub-corpora, among which the bigger one is called “generic”, and it is made of INLINEFORM0 posts per class collected from IACv2. For the creation of this sub-corpus, the authors produced a high-precision classifier for the non-sarcastic class, which helped to filter out many non-sarcastic posts from the original corpus and lower the labeling costs. Then, to have high-quality labeling, they required a majority of 6 out of 9 sarcastic annotations to label a post as sarcastic.
To produce a more diverse corpus, they built two more corpora focused on particular rhetorical figures often associated with sarcasm: rhetorical questions and hyperboles. For both of the sub-corpora, the authors used patterns to recognize posts containing the chosen rhetorical figure from IACv2. Each of the collected posts has been subsequently shown to five AMTs for the sarcastic/not sarcastic annotation. The label is given with simple majority.
The purpose of these two focused sub-corpora is to force classifiers to find some semantic cues which can distinguish sarcastic posts even in the presence of rhetorical figures usually associated with sarcasm. In fact, the presence of hyperboles has been used before as a feature for detecting sarcasm BIBREF49 .
Semeval-2018 Task3 Corpus of Tweets
The International Workshop on Semantic Evaluation Semeval-2018 featured a shared task on verbal irony detection in tweets (Van Hee et al. 2018). The corpus contains a class-balanced training set consisting of INLINEFORM0 tweets, and a test set with 784 tweets. In the test set, only 40% of the instances are ironic. The corpus has been collected from Twitter searching for tweets with the hashtags #irony, #sarcasm and #not. The corpus has been annotated by three students in linguistics who showed a high inter-annotator agreement. After the annotation, INLINEFORM1 tweets out of INLINEFORM2 were ironic and only 604 were not. Thus, an additional set of INLINEFORM3 non-ironic tweets was added to the corpus. Finally, the corpus was split randomly in class-balanced training and test set, but an additional cleaning step for removing ambiguous sentences modified the proportion to 40% ironic.
Experimental setup
We ran four groups of experiments, to assess both the effectiveness of our approach when compared with the approaches we found in the literature and its capability of extracting features that are relevant for sarcasm in a cross-domain scenario. In both cases, we denote with the word model one of the possible combinations of classic/statistical LSA and a classifier. The used classifiers are Support Vector Machine (SVM), Logistic regression (Log.Reg), Random Forest (RF) and gradient boosting (XGB).
For the first group of experiments, we evaluated the performance of each of our models on every corpus. We use 10-fold cross-validation and report the mean values of INLINEFORM0 -score, precision, and recall over all the folds. The proportion of the two classes in each fold is equal to the proportion in the whole corpus. Where applicable, we compare our results with existing results in the literature. In addition, we compare with the method presented in Poira et al. cambria2016.
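This in-corpus protocol can be sketched as a stratified 10-fold loop, as below; for brevity the sketch takes pre-computed LSA document vectors, whereas in the full pipeline the semantic space is induced from the training documents of each split. The helper name and the use of scikit-learn are assumptions for illustration.

```python
import numpy as np
from sklearn.base import clone
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import StratifiedKFold

def in_corpus_cv(base_clf, X, y, n_splits=10, seed=0):
    """Stratified 10-fold evaluation returning mean precision, recall and F1."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        clf = clone(base_clf).fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        p, r, f, _ = precision_recall_fscore_support(y[test_idx], pred,
                                                     average="binary")
        scores.append((p, r, f))
    return np.mean(scores, axis=0)
```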
The second group of experiments has been performed on the SemEval 2018 Task 3 dataset (Van Hee et al. 2018). We first find the best LSA dimensionality by 10-fold cross-validation on the training set. Then, we train the models again on the whole training set and evaluate them on the test set for comparison with the participants in the shared task.
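The dimensionality selection for this group of experiments can be sketched as follows; `vectors_by_dim` is a hypothetical pre-computed mapping from each candidate LSA size to the corresponding training document vectors, introduced only for the example.

```python
from sklearn.base import clone
from sklearn.model_selection import cross_val_score

def select_lsa_dim(base_clf, vectors_by_dim, y, cv=10):
    """Return the LSA dimensionality with the best mean F1 in cross-validation."""
    scores = {r: cross_val_score(clone(base_clf), X_r, y, cv=cv,
                                 scoring="f1").mean()
              for r, X_r in vectors_by_dim.items()}
    return max(scores, key=scores.get), scores
```

Once the best size is found (20 in our SemEval experiments), the classifier is refit on all the training vectors and applied to the test tweets.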
The third group of experiments is inter-corpora. For each experiment, we choose one corpus as the training set and another one as the test set. This process is performed for all the models and all corpus pairs. We aim to determine whether sarcasm detection is domain-dependent.
Finally, in the fourth group of experiments (union experiments) we perform another 10-fold cross-validation in which all the corpora are concatenated. Each fold contains samples from every corpus proportionally to the size of that corpus. The goal of this experiment is to understand whether simply adding more data, but from different domains, improves the classification performance.
The hyperparameters of the classifiers were chosen by grid search on SarcasmCorpus with LSA dimensionality 40, and then used for all the reported experiments. We use an SVM with Gaussian kernel and a C value of 100, INLINEFORM0, logistic regression with L1 penalty and C=10, and decision trees with entropy loss. SVM and logistic regression both use balanced class weights to cope with unbalanced datasets.
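As a concrete illustration of this setup, a minimal scikit-learn sketch is given below. The SVM gamma (elided as INLINEFORM0 above), the TF-IDF weighting used as a stand-in for the classic/statistical LSA weighting, the use of GradientBoostingClassifier in place of XGBoost, and the load_corpus helper are our own illustrative assumptions, not the authors' code.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

texts, labels = load_corpus()   # hypothetical loader returning raw documents and 0/1 sarcasm labels

classifiers = {
    "SVM": SVC(kernel="rbf", C=100, gamma="scale", class_weight="balanced"),
    "Log.Reg": LogisticRegression(penalty="l1", C=10, solver="liblinear", class_weight="balanced"),
    "RF": RandomForestClassifier(criterion="entropy", n_estimators=100),
    "XGB": GradientBoostingClassifier(),   # stand-in for xgboost.XGBClassifier
}

for name, clf in classifiers.items():
    # LSA approximated here as TF-IDF followed by truncated SVD with 40 components
    model = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=40), clf)
    scores = cross_validate(model, texts, labels,
                            cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
                            scoring=("f1", "precision", "recall"))
    print(name, {k: np.mean(v) for k, v in scores.items() if k.startswith("test_")})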
In-corpus Experiments
In SarcasmCorpus each sample consists of a review title, a review text, a product name and the number of stars given to the product, ranging from 1 to 5. Buschmeier et al. buschmeier2014impact showed that the star rating is the most discriminative feature; thus we performed the experiment both with and without it. In Table TABREF48 , we refer to "SarcasmCorpus" when the star rating is not used, and "SarcasmCorpus*" when it is used. We use the star rating by simply concatenating it to the document vector produced by LSA. The document vector is computed only from the review texts, because in our preliminary experiments we found that the other parts are not useful for the task. Accuracy and F-score values of all classifiers for SarcasmCorpus and SarcasmCorpus* are plotted in Figures FIGREF72 and FIGREF73 , and the best F-scores, with the corresponding precision and recall, are reported in the two columns SarcasmCorpus and SarcasmCorpus* of Table TABREF48 . The best result from logistic regression on SarcasmCorpus is INLINEFORM0 , which represents a INLINEFORM1 % relative improvement over the INLINEFORM2 reported in the above-mentioned work by Buschmeier et al. buschmeier2014impact. The results from Poira et al. cambria2016 are even higher in terms of F-score, with a relative improvement of INLINEFORM3 , due mostly to a much higher recall.
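The star-rating variant amounts to a simple feature concatenation; a minimal sketch follows, where lsa_pipeline, review_texts and star_ratings are illustrative names rather than quantities defined above.

import numpy as np

# X_lsa: LSA document vectors computed from the review texts only (shape: n_docs x k)
X_lsa = lsa_pipeline.fit_transform(review_texts)          # lsa_pipeline: TF-IDF + TruncatedSVD as above
X_star = np.asarray(star_ratings, dtype=float).reshape(-1, 1)
X_sarcasmcorpus_star = np.hstack([X_lsa, X_star])          # "SarcasmCorpus*" feature matrix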
Note that the method by Poira et al. cambria2016 also uses features extracted from other datasets for sentiment, emotion and personality classification, as these features are considered to be useful for the task of sarcasm detection. Moreover, as our goal is to propose a baseline, the training time on the order of minutes is an advantage of our model. We report their results as an upper bound, considering that our model does not use additional information from external data.
The best results are obtained using the star labels. In this setting, our best-performing classifiers are better than the INLINEFORM0 F-score value reported by Buschmeier, and our best INLINEFORM1 -score of INLINEFORM2 represents a INLINEFORM3 relative improvement. In this single case of SarcasmCorpus*, the results with the Traditional LSA are all higher than their counterparts with Statistical LSA.
For IAC-Sarcastic we do not have any previously published result to compare with. The only related result is reported in Joshi et al. joshi-sharma-bhattacharyya:2015:ACL-IJCNLP, who use a corpus randomly extracted from IAC containing 752 sarcastic and 752 non-sarcastic texts. They report an F-score of INLINEFORM0 (average over a 5-fold cross-validation), but the text sampling procedure is not specified in the paper. Thus, we prefer to use the sarcastic selection provided on the Internet Argument Corpus website, which is also slightly larger (998 sarcastic and 997 non-sarcastic texts).
Accuracies and F-scores of all the classifiers at varying T-SVD sizes are plotted in Figure FIGREF74 , and the best values of F-score, precision and recall are reported in column IAC-Sarcastic of Table TABREF49 . The best result (F = INLINEFORM0 ) is lower than in SarcasmCorpus, despite IAC-Sarcastic being balanced and larger than SarcasmCorpus. With Traditional LSA the INLINEFORM1 -scores are generally slightly lower, but the precision values are higher.
The results from Poira et al. cambria2016 are significantly higher, suggesting that in this dataset the sarcasm can be detected in most cases with the linguistic features used by their network independently from the context.
For the irony-context corpus, we used the same 1949 documents selected for the experiments reported in Wallace et al. wallace2014humans. To allow fair comparisons, we used only the texts of the comments, without any contextual information.
The authors report a mean F-score over the five folds of 0.383, obtained using a bag-of-words representation with 50,000 tokens plus some other binary features that have proven useful in other works, and an SVM classifier with a linear kernel. Our results are plotted in Figure FIGREF78 and reported in column irony-context of Table TABREF49 , which shows that our classifiers clearly outperform the baseline. Our maximum F-score of INLINEFORM0 represents a relative improvement of 20%. Moreover, it is important to highlight the remarkably low values obtained on this corpus when compared with the results from the previous corpora. This is partly due to the high skewness between the classes: the positive samples are just 537 out of 1949 (27.5%). However, considering that in SarcasmCorpus the sarcastic texts are only 33% of the total, we suppose there are other causes as well. Another reason for the poor results can be found in the diversity of topics, as the texts are extracted from six different forums, and the words used for sarcasm can be highly specific to a given context, both cultural and topical. Wallace et al. wallace2014humans explicitly note that annotators often requested the context for the sarcastic texts. As a consequence, classifying the texts correctly without context is difficult even for humans. Moreover, the forums from which the posts were extracted are highly controversial, as they concern politics or religion. As a consequence, it is difficult to grasp the sarcasm of a text without knowing the author's opinions.
The results with Traditional LSA are very similar to those with Statistical LSA, and the real surprise is the remarkably low scores obtained by the random forest and gradient boosting methods.
For IAC-Sarcastic-v2, we wanted to compare our results against those from Oraby et al. oraby2016creating, which deal with the three sub-corpora separately. However, they are not directly comparable because, at the time we report these results, only half of the corpus has been released, consisting of 3260 posts in the generic sub-corpus, 582 in the hyperbole one and 850 for rhetorical questions. The three sub-corpora are all balanced.
Results computed on the three sub-corpora are plotted in Figures FIGREF75 , FIGREF76 , FIGREF77 and reported in the last three columns of Table TABREF50 . Despite the difference in data availability, the results are quite encouraging. In fact, we can see that our method reaches an INLINEFORM0 -score of INLINEFORM1 on the generic sub-corpus, slightly better than the previous study. Moreover, it also improves over Oraby et al. (2016) on the other two sub-corpora, albeit using Traditional LSA.
Nonetheless, these results show that it is possible to achieve very good performance when high-quality labeled corpora are available, even with a limited number of examples.
For the CNN, we have results only in the generic sub-corpus, and this is the only case in which at least one of our models can outperform it in terms of F-score.
SemEval 2018 Task 3A
The last experiment on a single dataset was performed on the setting of SemEval 2018 Task 3A (Van Hee et al. 2018), a shared task on binary classification of irony, which we introduced in Section SECREF47 .
We start by performing 10-fold cross-validation with our classifiers over varying LSA dimensionality to choose the best setting. We used the same set of hyper-parameters used for the previous experiments.
Once we have found the best setting, we retrain the model on all the training data and predict the classes of the test tweets. We found that we obtain the best results in cross-validation with LSA vectors of size 20, and the results are presented in Table TABREF59 . We list results for four different classifiers, namely logistic regression, support vector machine, gradient boosting and random forest. In this case, we get the best results using random forests, followed by gradient boosting. In particular, random forest obtains an F INLINEFORM0 -score of INLINEFORM1 , which is higher than the 6th-ranked submission. It is worth noting that the submissions listed in the table, except for the baseline, all use approaches based on deep learning. Compared to the unigram SVM baseline used for the shared task (row 11 in table 4), our model with random forest is clearly better according to all the metrics, while our model with SVM is better in terms of F INLINEFORM2 -score but not accuracy.
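A sketch of this protocol is shown below; build_model, train_texts, train_labels and test_texts are hypothetical helpers standing for the pipeline and data loading of the earlier sketch, and the dimensionality grid is purely illustrative.

from sklearn.model_selection import StratifiedKFold, cross_validate

# build_model(dim) returns the TF-IDF + TruncatedSVD(dim) + RandomForestClassifier pipeline from the earlier sketch
best_dim = max((10, 20, 30, 40, 50),     # candidate LSA dimensionalities (illustrative grid)
               key=lambda dim: cross_validate(build_model(dim), train_texts, train_labels,
                                              cv=StratifiedKFold(10, shuffle=True, random_state=0),
                                              scoring="f1")["test_score"].mean())
final_model = build_model(best_dim)
final_model.fit(train_texts, train_labels)   # retrain on the full training set
test_predictions = final_model.predict(test_texts)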
Our model is certainly not the best one in terms of accuracy, and showing its superiority over all the others is not the goal of this work; however, the best performers, i.e., deep learning networks, involve a high number of parameters and a high computational training cost. Moreover, there are additional interesting notes. First, the submission by BIBREF50 also makes use of deep neural networks but does not get a higher score than our best. Second, the submission by BIBREF51 uses SVMs over syntactic, semantic, and affective features, but is still not better than our best score. The models that showed a clear superiority use deep networks pre-trained on external data to extract more meaningful features. Thus, while their advantage is real, the number of parameters and the amount of data used are much higher.
Inter-corpora Experiments
The third group of experiments is aimed at finding whether sarcasm is domain-dependent, or whether the knowledge acquired from one dataset can be transferred to another. We evaluate the similarity among the datasets by training a model on all the data of one corpus and using a second corpus as a test set. Our best results for every corpus pair are listed in Tables TABREF62 and TABREF63 , where the rows indicate the training set and the columns the test set. Quite interestingly, unlike the in-corpus experiments where logistic regression works better in some cases, all the top scores that we report for these experiments are obtained by using the SVM classifier.
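The inter-corpora protocol reduces to fitting on one corpus and scoring on another; a sketch follows, where the corpora dictionary and build_model are hypothetical helpers and the LSA dimensionality is fixed only for brevity.

from sklearn.metrics import f1_score

# corpora: dict mapping corpus name -> (texts, labels); loaders and build_model() are hypothetical helpers
for train_name, (X_train, y_train) in corpora.items():
    for test_name, (X_test, y_test) in corpora.items():
        if train_name == test_name:
            continue                      # inter-corpora only: train on one corpus, test on another
        model = build_model(40)           # same pipeline and hyperparameters as the in-corpus runs
        model.fit(X_train, y_train)
        print(f"{train_name} -> {test_name}: F1 = {f1_score(y_test, model.predict(X_test)):.3f}")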
In Table TABREF62 we find the results for SarcasmCorpus and IAC-Sarcastic used as test sets. For SarcasmCorpus, the F-scores are quite low compared to the in-corpus experiments. In fact, here we obtain the best result of only INLINEFORM0 when IAC-Sarcastic is the training set, which is much lower than the scores of about 70 that we get in the in-corpus experiments (column SarcasmCorpus in Table TABREF48 ). The low results suggest that the sarcasm conveyed by the texts in SarcasmCorpus is somehow different from what we can observe in the other corpora.
When we use IAC-Sarcastic as a test set, we can observe higher scores (column IAC-Sarcastic in Table TABREF62 ), and the F-score of INLINEFORM0 that we obtain by training on IAC-Sarcastic-v2 is comparable to INLINEFORM1 , the best result in the in-corpus experiments. Also, the lower result, obtained when training on irony-context, is quite close to the result of the in-corpus experiment, which is unexpected given the poor in-corpus results for irony-context (column Irony-Context in Table TABREF49 ). When irony-context is the test set (first three columns of Table TABREF63 ), we can observe again that the F-score obtained by training on IAC-Sarcastic-v2 is higher than the score obtained in the in-corpus experiment. Nonetheless, all the scores for this test set are lower than INLINEFORM2 , with high recall and low precision.
When using IAC-Sarcastic-v2 as the test set (see last three columns of Table TABREF63 ) we observe F-scores between INLINEFORM0 and INLINEFORM1 , characterized by high recall and lower precision. The top F1 score is obtained when using IAC-Sarcastic as a training set, which also corresponds to the highest precision. This is further evidence of the similarity of the two corpora. The top recall score of INLINEFORM2 is obtained by training on SarcasmCorpus, but the precision is much lower than in the other two cases.
Overall, it is worth noting that, for all the experiments, the top results are obtained by training on either IAC-Sarcastic or IAC-Sarcastic-v2, while SarcasmCorpus is always better than irony-context. Considering that the quality of the features depends on the quality of the data and of the annotation, we suppose that the quality of the first two datasets is higher than the quality of irony-context, while the data contained in SarcasmCorpus are too different from the other corpora. A deeper analysis of the corpora can be found in the discussion (Section SECREF71 ).
Union Experiments
The last group of experiments has the goal of understanding whether combining data from different sources can positively influence the final score. For this purpose, as anticipated in Section SECREF51 , we split each of the four corpora used for the first group of experiments into 10 folds, and used the concatenation of 9 folds of every corpus as the training set and the remaining fold of each corpus as the validation set.
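One way to build such proportional folds is to split each corpus separately and concatenate the per-corpus training folds, as in the sketch below; the per-corpus stratified splitting is our assumption, and corpora is the hypothetical dictionary from the previous sketch.

from sklearn.model_selection import StratifiedKFold

splits = {name: list(StratifiedKFold(10, shuffle=True, random_state=0).split(texts, labels))
          for name, (texts, labels) in corpora.items()}
for k in range(10):
    train_texts, train_labels = [], []
    for name, (texts, labels) in corpora.items():
        train_idx, _ = splits[name][k]
        train_texts += [texts[i] for i in train_idx]      # 9 folds of every corpus go into training
        train_labels += [labels[i] for i in train_idx]
    # a single model is then trained on (train_texts, train_labels) and evaluated separately
    # on the held-out fold of each corpus, i.e. splits[name][k][1]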
From Tables TABREF64 and TABREF65 we can observe that these results are overall not higher than the inter-corpora results. The only exceptions are SarcasmCorpus, where the results are almost 20 F-score points higher than those obtained in the inter-corpora setting, and IAC-v2, where gradient boosting (XGB) obtains 2 F-score points more than the top score in the inter-corpora results.
The results on SarcasmCorpus are still lower than the in-corpus results, and the scores of random forest and gradient boosting are much lower than the other two methods. This is further evidence that adding diverse data is not helpful, or is actually harmful, for classifying SarcasmCorpus.
The general trend of this block of experiments is that our classifiers are not able to leverage data from different domains to improve global results. In-domain data represent the best choice even if the amount of data is smaller.
Discussion
In this section, we discuss our results from a more general point of view. We start by briefly discussing the content of the different corpora. Then we try to relate the results of the different types of experiments. Finally, we detect the limits of our experiments for the type of documents we worked with.
The corpora we used for our experiments are characterized by high internal variability in style, as each corpus consists of texts from thousands of different authors. Despite the number of authors, there are some factors that depend on the type of text and the medium. For instance, the irony-context, IAC Sarcastic, and IAC Sarcastic v2 corpora are made of posts collected from online forums, which are mostly about politics. Most of the texts are extracted from longer arguments, and thus the style is informal and in general with aggressive tones.
In Tables TABREF67 , TABREF68 and TABREF69 we show some randomly selected samples from these corpora. As is apparent from the samples, the posts have a target to attack, which can be another user or the subject of the discussion. Table TABREF67 shows some examples from IAC-Sarcastic. In all the examples the author attacks another user or his opinions. For instance, the first and the third sarcastic examples use sarcasm about the Bible to attack another user's religious ideas, while in the second example the author uses sarcasm to expose a fallacious position of another user without appearing rude. By contrast, the non-sarcastic examples are much more direct about their meaning.
A similar pattern can be found in the examples from IAC-Sarcastic-v2 (Table TABREF69 ). Sarcasm is again used to attack a person (first example) or his/her opinions (second example), possibly religious ones. The third example shows that also in this corpus some sentences are hard to classify. In this case, the information that we get is that the target has ultraconservative ideas, but it is not easy to grasp the sarcasm.
The examples from irony-context (Table TABREF68 ) are much more difficult to grasp without knowing contextual information. For instance, the first sarcastic example can be either sarcastic or regular according to the political opinion of the author: it is sarcastic if the author is a Republican, and it is not sarcastic (though it would be a strange thing to write) if the author is a Democrat. The second and the third examples are hard to classify without knowing the subject of the conversation. The same issue of missing a broader context also appears in the non-sarcastic examples, and the third example can easily be interpreted as sarcastic by humans.
In SarcasmCorpus the situation is different, as there is no ongoing argument and the sarcasm is directed at products that the author did not like. In this case, there are many references to the external world and the writing is more passionate in its negative stance. Some samples are shown in Table TABREF66 . The sarcastic examples in Table TABREF66 all express a negative sentiment and also use negative words. Sarcasm is used within these negative reviews to attack the product in a more creative way and make the text more entertaining than a usual negative review. The non-sarcastic reviews, on the other hand, describe the product and the authors' experience with it, with regular forms of expressing the sentiment ("are also a great feature", "It is a great little camera"). We suppose that this difference in style is the main obstacle to correct classification of SarcasmCorpus instances in the cross-corpora experiments.
We now discuss the relations among the results of the different experiments to gain some further insights into the sarcastic content of our corpora. From the in-corpus experiments, we obtain good results on SarcasmCorpus, which is the only corpus containing Amazon reviews. Unfortunately, when we train our models in a cross-corpora or all-corpora setting, our results drop dramatically, especially in the cross-corpora case. These results mean that the sarcasm in SarcasmCorpus is conveyed through features that are not present in the other corpora. This is especially true when considering that in the inter-corpora experiments, using SarcasmCorpus as a training set in all cases yields results that are only better than the ones obtained when using irony-context as a training set.
The results on irony-context show that this corpus is much more difficult to classify than the others, as was also pointed out in the paper that presented it (Wallace et al. 2014), which highlights how the human annotators needed to read the contexts to be sure about the sarcastic posts. In the inter-corpora experiments, the results when training on irony-context are the worst for all the test sets, but only by a few points of F-score, while at first we might have expected dramatically lower results. These are strong suggestions that the types of texts present in irony-context are similar to the ones present in IAC-Sarcastic-v2, but the quality is lower. As a consequence, this is further evidence that the dataset annotators do not consider sarcasm and irony two different linguistic phenomena.
The two versions of IAC-Sarcastic have proved to be the easiest to classify when using other corpora for training. The best result in IAC-Sarcastic is obtained in the Union experiment (see Tables TABREF64 , TABREF65 ), and thus it benefits from the higher amount of data, especially from the data from IAC-Sarcastic-v2, as can be observed from the cross-corpora results (Table TABREF62 ).
By contrast, the best results on IAC-Sarcastic-v2 are obtained with the in-corpus experiments, while all the results obtained in the inter-corpora experiments are clearly worse. Among the inter-corpora experiments, training the model with IAC-Sarcastic results in an F-score of INLINEFORM0 , which means a relative decrease of INLINEFORM1 with respect to the top score of the in-corpus experiments on IAC-Sarcastic-v2. It is interesting to note that one cause of the decrease may also be the size of the corpora: IAC-Sarcastic contains only 1995 texts, while IAC-Sarcastic-v2 contains 3260.
One final remark is about the absolute scores obtained in the in-corpus experiments. We can notice that on SarcasmCorpus the F-score can go beyond INLINEFORM0 , and up to INLINEFORM1 by adding the star rating as a feature. The high result can be explained by the peculiarity of this corpus, where sarcasm is present mostly in negative reviews, and the star label is the single best indicator of sarcasm BIBREF49 . The other corpora consist of texts that belong to a thread of forum posts. Sometimes it is reasonable to classify such posts as sarcastic or not out of context, but in many cases it is impossible even for humans (see examples in Table TABREF68 ). In fact, the low F-score on irony-context is due to low precision, which is an indicator of high similarity between the positive and negative classes. Moreover, low precision and higher recall is a pattern that is present in most of the experiments, even if with higher absolute numbers. The combination of high recall and lower precision suggests that dubious texts are classified as sarcastic more often than not.
Conclusions
In this work, we have tackled the problem of automatic sarcasm detection from a data-driven point of view. More in details, we have used a set of labeled dataset and applied distributional semantics followed by some machine learning approaches in order to give a baseline for the literature in managing such a problem. We do not differentiate between sarcasm and irony because they are not so easily distinguishable even for human experts. Experiments have been carried out on four different corpora containing texts from online reviews or forums, and the corpus used for the shared task on irony detection on Twitter proposed in SemEval 2018. We have shown experimentally that some basic methods can outperform in all the datasets other methods based on bag of words and linguistic features, thus representing a solid baseline. With our experiments that train the models with one corpus and test them by using the other corpora, we have confirmed experimentally that also the annotators tend to not distinguish the distinction between irony and sarcasm. By contrast, major differences can be found according to the text domains, i.e., review vs. political forum. The domain difference can also prevent the method from taking benefits from more data when they are too diverse from the test data. As a future work, we will try to improve distributional semantics approaches with linguistic features in order to perform more fair comparisons with more recent and advanced methods. Furthermore, we will exploit more classical AI methodologies (e.g., by using ontologies, reasoners, common-sense reasoning techniques, etc.) to deduce the context, understanding the concepts expressed in a sentence, exploiting also features like hashtags and emojis to improve the overall performance of the approach. | Amazon reviews |
d86c7faf5a61d73a19397a4afa2d53206839b8ad | d86c7faf5a61d73a19397a4afa2d53206839b8ad_0 | Q: What modalities are being used in different datasets?
Text: Introduction
Humans communicate using a highly complex structure of multimodal signals. We employ three modalities in a coordinated manner to convey our intentions: language modality (words, phrases and sentences), vision modality (gestures and expressions), and acoustic modality (paralinguistics and changes in vocal tones) BIBREF0 . Understanding this multimodal communication is natural for humans; we do it subconsciously in the cerebrum of our brains everyday. However, giving Artificial Intelligence (AI) the capability to understand this form of communication the same way humans do, by incorporating all involved modalities, is a fundamental research challenge. Giving AI the capability to understand human communication narrows the gap in computers' understanding of humans and opens new horizons for the creation of many intelligent entities.
The coordination between the different modalities in human communication introduces view-specific and cross-view dynamics BIBREF1 . View-specific dynamics refer to dynamics within each modality independent of other modalities. For example, the arrangement of words in a sentence according to the generative grammar of the language (language modality) or the activation of facial muscles for the presentation of a smile (vision modality). Cross-view dynamics refer to dynamics between modalities and are divided into synchronous and asynchronous categories. An example of synchronous cross-view dynamics is the simultaneous co-occurrence of a smile with a positive sentence and an example of asynchronous cross-view dynamics is the delayed occurrence of a laughter after the end of sentence. For machines to understand human communication, they must be able to understand these view-specific and cross-view dynamics.
To model these dual dynamics in human communication, we propose a novel deep recurrent neural model called the Multi-attention Recurrent Network (MARN). MARN is distinguishable from previous approaches in that it explicitly accounts for both view-specific and cross-view dynamics in the network architecture and continuously models both dynamics through time. In MARN, view-specific dynamics within each modality are modeled using a Long-short Term Hybrid Memory (LSTHM) assigned to that modality. The hybrid memory allows each modality's LSTHM to store important cross-view dynamics related to that modality. Cross-view dynamics are discovered at each recurrence time-step using a specific neural component called the Multi-attention Block (MAB). The MAB is capable of simultaneously finding multiple cross-view dynamics in each recurrence timestep. The MARN resembles the mechanism of our brains for understanding communication, where different regions independently process and understand different modalities BIBREF2 , BIBREF3 – our LSTHM – and are connected together using neural links for multimodal information integration BIBREF4 – our MAB. We benchmark MARN by evaluating its understanding of different aspects of human communication covering sentiment of speech, emotions conveyed by the speaker and displayed speaker traits. We perform extensive experiments on 16 different attributes related to human communication on public multimodal datasets. Our approach shows state-of-the-art performance in modeling human communication for all datasets.
Related Work
Modeling multimodal human communication has been studied previously. Past approaches can be categorized as follows:
Non-temporal Models: Studies have focused on simplifying the temporal aspect of cross-view dynamics BIBREF5 , BIBREF6 , BIBREF7 in order to model co-occurrences of information across the modalities. In these models, each modality is summarized in a representation by collapsing the time dimension, such as averaging the modality information through time BIBREF8 . While these models are successful in understanding co-occurrences, the lack of temporal modeling is a major flaw as these models cannot deal with multiple contradictory pieces of evidence, e.g., if a smile and a frown happen together in an utterance. Furthermore, these approaches cannot accurately model long sequences since the representation over long periods of time becomes less informative.
Early Fusion: Approaches have used multimodal input feature concatenation instead of modeling view-specific and cross-view dynamics explicitly. In other words, these approaches rely on generic models (such as Support Vector Machines or deep neural networks) to learn both view-specific and cross-view dynamics without any specific model design. This concatenation technique is known as early fusion BIBREF9 , BIBREF10 . Often, these early fusion approaches remove the time factor as well BIBREF11 , BIBREF0 . We additionally compare to a stronger recurrent baseline that uses early fusion while maintaining the factor of time. A shortcoming of these models is the lack of detailed modeling for view-specific dynamics, which in turn affects the modeling of cross-view dynamics, as well as causing overfitting on input data BIBREF12 .
Late Fusion: Late fusion methods learn different models for each modality and combine the outputs using decision voting BIBREF13 , BIBREF14 . While these methods are generally strong in modeling view-specific dynamics, they have shortcomings for cross-view dynamics since these inter-modality dynamics are normally more complex than a decision vote. As an example of this shortcoming, if a model is trained for sentiment analysis using the vision modality and predicts negative sentiment, late fusion models have no access to whether this negative sentiment was due to a frowning face or a disgusted face.
Multi-view Learning: Extensions of Hidden Markov Models BIBREF15 and Hidden Conditional Random Fields BIBREF16 , BIBREF17 have been proposed for learning from multiple different views (modalities) BIBREF18 , BIBREF19 . Extensions of LSTMs have also been proposed in a multi-view setting BIBREF20 .
MARN is different from the first category since we model both view-specific and cross-view dynamics. It differs from the second and third categories since we explicitly model view-specific dynamics using an LSTHM for each modality as well as cross-view dynamics using the MAB. Finally, MARN is different from the fourth category since it explicitly models view-specific dynamics and proposes more advanced temporal modeling of cross-view dynamics.
MARN Model
In this section we outline our pipeline for human communication comprehension: the Multi-attention Recurrent Network (MARN). MARN has two key components: Long-short Term Hybrid Memory and Multi-attention Block. Long-short Term Hybrid Memory (LSTHM) is an extension of the Long-short Term Memory (LSTM) by reformulating the memory component to carry hybrid information. LSTHM is intrinsically designed for multimodal setups and each modality is assigned a unique LSTHM. LSTHM has a hybrid memory that stores view-specific dynamics of its assigned modality and cross-view dynamics related to its assigned modality. The component that discovers cross-view dynamics across different modalities is called the Multi-attention Block (MAB). The MAB first uses information from hidden states of all LSTHMs at a timestep to regress coefficients to outline the multiple existing cross-view dynamics among them. It then weights the output dimensions based on these coefficients and learns a neural cross-view dynamics code for LSTHMs to update their hybrid memories. Figure 1 shows the overview of the MARN. MARN is differentiable end-to-end which allows the model to be learned efficiently using gradient decent approaches. In the next subsection, we first outline the Long-short Term Hybrid Memory. We then proceed to outline the Multi-attention Block and describe how the two components are integrated in the MARN.
Long-short Term Hybrid Memory
Long-short Term Memory (LSTM) networks have been among the most successful models in learning from sequential data BIBREF21 . The most important component of the LSTM is a memory which stores a representation of its input through time. In the LSTHM model, we seek to build a memory mechanism for each modality which in addition to storing view-specific dynamics, is also able to store the cross-view dynamics that are important for that modality. This allows the memory to function in a hybrid manner.
The Long-short Term Hybrid Memory is formulated in Algorithm 1. Given a set of $M$ modalities in the domain of the data, subsequently $M$ LSTHMs are built in the MARN pipeline. For each modality $m \in M$ , the input to the $m$ th LSTHM is of the form $\mathbf {X}^m=\lbrace {x}_{1}^m, {x}_{2}^m, {x}_{3}^m, \cdots , {x}_{T}^m \ ; {x}_{t}^m \in \mathbb {R}^{d_{in}^m} \rbrace $ , where ${x}^m_{t}$ is the input at time $t$ and $d^m_{in}$ is the dimensionality of the input of modality $m$ . For example if $m=l \textrm {(language)}$ , we can use word vectors with $d^l_{in}=300$ at each time step $t$ . $d^m_{mem}$ is the dimensionality of the memory for modality $m$ . $\sigma $ is the (hard-)sigmoid activation function and $\tanh $ is the tangent hyperbolic activation function. $\oplus $ denotes vector concatenation and $\odot $ denotes element-wise multiplication. Similar to the LSTM, $i^m_t$ is the input gate, $f^m_t$ is the forget gate, and $o^m_t$ is the output gate. $\bar{c}^m_t$ is the proposed update to the hybrid memory $c^m_t$ at time $t$ . $h^m_t$ is the time-distributed output of each modality.
The neural cross-view dynamics code $z_{t}$ is the output of the Multi-attention Block at the previous time-step and is discussed in detail in next subsection. This neural cross-view dynamics code $z_{t}$ is passed to each of the individual LSTHMs and is the hybrid factor, allowing each individual LSTHM to carry cross-view dynamics that it finds related to its modality. The set of weights $W^m_*$ , $U^m_*$ and $V^m_*$ respectively map the input of LSTHM $x^m_t$ , output of LSTHM $h^m_t$ , and neural cross-view dynamics code $z_{t}$ to each LSTHM memory space using affine transformations.
Algorithm 1: Multi-attention Recurrent Network (MARN), Long-short Term Hybrid Memory (LSTHM) and Multi-attention Block (MAB) Formulation

$\textrm {MARN}(\mathbf {X}^m)$:
  $c_0, h_0, z_0 \leftarrow \mathbf {0}$
  for $t = 1, ..., T$:
    $h_t \leftarrow \textrm {LSTHM\_Step} (\bigcup _{m \in M} \lbrace x^m_t\rbrace , z_{t-1})$
    $z_t \leftarrow \textrm {MAB\_Step} (h_t)$
  return $h_T, z_T$

$\textrm {LSTHM\_Step}(\bigcup _{m \in M} \lbrace {x}^m_t\rbrace , z_{t-1})$:
  for $m \in M$:  $\triangleleft $ for all the $M$ modalities
    $i_t^m \leftarrow \sigma (W_i^m\ x^m_t+U^m_i\ h^m_{t-1}+V^m_i\ z_{t-1}+b^m_{i})$
    $f^m_t \leftarrow \sigma (W^m_{f}\ x^m_t + U^m_{f}\ h^m_{t-1} + V^m_f\ z_{t-1}+b^m_{f})$
    $o^m_t \leftarrow \sigma (W^m_{o}\ x^m_t + U^m_{o}\ h^m_{t-1} + V^m_o\ z_{t-1}+b^m_{o})$
    $\bar{c}_t^m \leftarrow W_{\bar{c}}^m\ x^m_t + U_{\bar{c}}^m\ h^m_{t-1} + V_{\bar{c}}^m\ z_{t-1} + b^m_{\bar{c}}$
    $c^m_t \leftarrow f^m_t \odot c^m_{t-1} + i^m_t \odot \tanh (\bar{c}^m_t)$
    $h^m_t \leftarrow o^m_t \odot \tanh (c^m_t)$
  return $h_t = \bigoplus _{m \in M} h^m_t$

$\textrm {MAB\_Step}(h_t)$:
  $a_t \leftarrow \mathcal {A}(h_t; \theta _{\mathcal {A}})$  $\triangleleft $ $K$ output coefficients
  $\widetilde{h}_t \leftarrow a_t \odot \langle \Uparrow _K h_t \rangle $
  for $m \in M$:  $\triangleleft $ calculate cross-view dynamics
    $s^m_{t} \leftarrow \mathcal {C}_m (\widetilde{h}^m_{t}; \theta _{\mathcal {C}_m})$
  $s_t \leftarrow \bigoplus _{m \in M} s^m_{t}$
  $z_t \leftarrow $ output of the deep cross-view network applied to $s_t$
  return $z_t$
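To make the recurrence concrete, a minimal NumPy sketch of one LSTHM step for a single modality follows; the weight containers W, U, V, b and the placement of tanh in the memory update reflect our reading of the algorithm above and are not the authors' released code.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lsthm_step(x_t, h_prev, c_prev, z_prev, W, U, V, b):
    # W, U, V, b are dicts keyed by gate name: 'i', 'f', 'o', 'cbar'
    gates = {g: W[g] @ x_t + U[g] @ h_prev + V[g] @ z_prev + b[g] for g in ('i', 'f', 'o', 'cbar')}
    i = sigmoid(gates['i'])               # input gate
    f = sigmoid(gates['f'])               # forget gate
    o = sigmoid(gates['o'])               # output gate
    c_bar = gates['cbar']                 # proposed update to the hybrid memory
    c = f * c_prev + i * np.tanh(c_bar)   # hybrid memory: carries view-specific and, via z_prev, cross-view information
    h = o * np.tanh(c)                    # time-distributed output of the modality
    return h, c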
Multi-attention Block
At each timestamp $t$ , various cross-view dynamics across the modalities can occur simultaneously. For example, the first instance can be the connection between a smile and positive phrase both happening at time $t$ . A second instance can be the occurrence of the same smile at time $t$ being connected to an excited voice at time $t-4$ , that was carried to time $t$ using the audio LSTHM memory. In both of these examples, cross-view dynamics exist at time $t$ . Therefore, not only do cross-view dynamics span across various modalities, they are scattered across time forming asynchronous cross-view dynamics.
The Multi-attention Block is a network that can capture multiple different, possibly asynchronous, cross-view dynamics and encode all of them in a neural cross-view dynamics code $z_t$ . In the most important step of the Multi-attention Block, different dimensions of LSTHM outputs $h^m_t$ are assigned attention coefficients according to whether or not they form cross-view dynamics. These attention coefficients will be high if the dimension contributes to formation of a cross-view dynamics and low if they are irrelevant. The coefficient assignment is performed multiple times due to the existence of possibly multiple such cross-view dynamics across the outputs of LSTHM. The Multi-attention Block is formulated in Algorithm 1. We assume a maximum of $K$ cross-view dynamics to be present at each timestamp $t$ . To obtain the $K$ attention coefficients, $K$ softmax distributions are assigned to the concatenated LSTHM memories using a deep neural network $\mathcal {A} : \mathbb {R}^{{d_{mem}}} \mapsto \mathbb {R}^{K \times {d_{mem}}}$ with ${d_{mem}} = \sum _{m \in M} {d^{m}_{mem}}$ . At each timestep $t$ , the output of LSTHM is the set $\lbrace h^m_t : m \in M, h^m_t \in \mathbb {R}^{{d^m_{mem}}}\rbrace $ . $\mathcal {A}$ takes the concatenation of LSTHM outputs $h_t = \bigoplus _{m \in M} h^m_t$ as input and outputs a set of $K$ attentions $a_t = \mathcal {A}(h_t; \theta _{\mathcal {A}})$ , $a_t \in \mathbb {R}^{K \times d_{mem}}$ . $\mathcal {A}$ has a softmax layer at the top of the network which takes the softmax activation along each one of the $K$ dimensions of its output $a_t$ . As a result, each attention $a^k_t$ forms a probability distribution over the output dimensions. $h_t$ is then broadcasted (from $\mathbb {R}^{d_{mem}}$ to $\mathbb {R}^{K \times d_{mem}}$ ) and element-wise multiplied by the attentions to produce the attended output $\widetilde{h}_t = a_t \odot \langle \Uparrow _K h_t \rangle $ . $\langle \Uparrow _K \cdot \rangle $ denotes broadcasting by parameter $K$ .
The first dimension of $\widetilde{h}_t$ contains information needed for the first cross-view dynamic highlighted using $a^1_t$ , the second dimension of $\widetilde{h}_t$ contains information for the second cross-view dynamic using $a^2_t$ , and so on until $K$ . $\widetilde{h}_t$ is high dimensional but ideally considered sparse due to presence of dimensions with zero value after element-wise multiplication with attentions. Therefore, $\widetilde{h}_t$ is split into $m$ different parts – one for each modality $m$ – and undergoes dimensionality reduction using $\mathcal {C}_m : \mathbb {R}^{K \times {d^m_{mem}}} \mapsto \mathbb {R}^{{d^m_{local}}}, \forall m \in M$ with $d^m_{local}$ as the target low dimension of each modality split in $\widetilde{h}_t$ . The set of networks $\lbrace \mathcal {C}_m : m \in M \rbrace $ maps the attended outputs of each modality $\widetilde{h}^m_t$ to the same vector space. This dimensionality reduction produces a dense code $s^m_t$ for the $K$ times attended dimensions of each modality. Finally, the set of all dense codes, concatenated as $s_t = \bigoplus _{m \in M} s^m_{t}$ , is passed into a deep neural network to generate the neural cross-view dynamics code $z_t$ at time $t$ .
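A minimal NumPy sketch of one MAB step is given below; attention_net, reduce_nets (one per modality) and fuse_net stand for $\mathcal {A}$ , the $\mathcal {C}_m$ networks and the final fusion network, and are assumed to be provided as plain callables rather than defined here.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mab_step(h_list, attention_net, reduce_nets, fuse_net, K):
    # h_list: per-modality LSTHM outputs h^m_t, each a vector of size d^m_mem
    h_t = np.concatenate(h_list)                               # concatenated outputs, size d_mem
    a_t = softmax(attention_net(h_t).reshape(K, -1), axis=1)   # K attention distributions over d_mem
    h_tilde = a_t * h_t[None, :]                               # broadcast h_t to K copies and weight each dimension
    # split the attended output back into modalities and reduce each to a dense code s^m_t
    sizes = [h.shape[0] for h in h_list]
    parts = np.split(h_tilde, np.cumsum(sizes)[:-1], axis=1)
    s = [net(part.reshape(-1)) for net, part in zip(reduce_nets, parts)]
    return fuse_net(np.concatenate(s))                         # neural cross-view dynamics code z_t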
Experimental Methodology
In this paper we benchmark MARN's understanding of human communication on three tasks: 1) multimodal sentiment analysis, 2) multimodal speaker traits recognition and 3) multimodal emotion recognition. We perform experimentations on six publicly available datasets and compare the performance of MARN with the performance of state-of-the-art approaches on the same datasets. To ensure generalization of the model, all the datasets are split into train, validation and test sets that include no identical speakers between sets, i.e. all the speakers in the test set are different from the train and validation sets. All models are re-trained on the same train/validation/test splits. To train the MARN for different tasks, the final outputs $h_T$ and neural cross-view dynamics code $z_T$ are the inputs to another deep neural network that performs classification (categorical cross-entropy loss function) or regression (mean squared error loss function). The code, hyperparameters and instruction on data splits are publicly available at https://github.com/A2Zadeh/MARN.
The following is a description of the different benchmarks.
Multimodal Sentiment Analysis
CMU-MOSI The CMU-MOSI dataset BIBREF11 is a collection of 2199 opinion video clips. Each opinion video is annotated with sentiment in the range [-3,3]. There are 1284 segments in the train set, 229 in the validation set and 686 in the test set.
ICT-MMMO The ICT-MMMO dataset BIBREF7 consists of online social review videos that encompass a strong diversity in how people express opinions, annotated at the video level for sentiment. The dataset contains 340 multimodal review videos, of which 220 are used for training, 40 for validation and 80 for testing.
YouTube The YouTube dataset BIBREF0 contains videos from the social media web site YouTube that span a wide range of product reviews and opinion videos. Out of 46 videos, 30 are used for training, 5 for validation and 11 for testing.
MOUD To show that MARN is generalizable to other languages, we perform experimentation on the MOUD dataset BIBREF22 which consists of product review videos in Spanish. Each video consists of multiple segments labeled to display positive, negative or neutral sentiment. Out of 79 videos in the dataset, 49 are used for training, 10 for validation and 20 for testing.
Multimodal Speaker Trait Recognition
POM Persuasion Opinion Multimodal (POM) dataset BIBREF23 contains movie review videos annotated for the following speaker traits: confidence, passion, dominance, credibility, entertaining, reserved, trusting, relaxed, nervous, humorous and persuasive. The 903 videos were split into 600 for training, 100 for validation and 203 for testing.
Multimodal Emotion Recognition
IEMOCAP The IEMOCAP dataset BIBREF24 consists of 151 videos of recorded dialogues, with 2 speakers per session for a total of 302 videos across the dataset. Each segment is annotated for the presence of 9 emotions (angry, excited, fear, sad, surprised, frustrated, happy, disappointed and neutral) as well as valence, arousal and dominance. The dataset is recorded across 5 sessions with 5 pairs of speakers. To ensure speaker independent learning, the dataset is split at the level of sessions: training is performed on 3 sessions (6 distinct speakers) while validation and testing are each performed on 1 session (2 distinct speakers).
Multimodal Computational Descriptors
All the datasets consist of videos where only one speaker is in front of the camera. The descriptors we used for each of the modalities are as follows:
Language All the datasets provide manual transcriptions. We use pre-trained word embeddings (glove.840B.300d) BIBREF25 to convert the transcripts of videos into a sequence of word vectors. The dimension of the word vectors is 300.
Vision Facet BIBREF26 is used to extract a set of features including per-frame basic and advanced emotions and facial action units as indicators of facial muscle movement.
Acoustic We use COVAREP BIBREF27 to extract low level acoustic features including 12 Mel-frequency cepstral coefficients (MFCCs), pitch tracking and voiced/unvoiced segmenting features, glottal source parameters, peak slope parameters and maxima dispersion quotients.
Modality Alignment To reach the same time alignment between different modalities we choose the granularity of the input to be at the level of words. The words are aligned with audio using P2FA BIBREF28 to get their exact utterance times. Time step $t$ represents the $t$ th spoken word in the transcript. We treat speech pause as a word with vector values of all zero across dimensions. The visual and acoustic modalities follow the same granularity. We use expected feature values across the entire word for vision and acoustic since they are extracted at a higher frequency (30 Hz for vision and 100 Hz for acoustic).
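For illustration, the expected feature value of the visual and acoustic descriptors over each spoken word can be computed as in the sketch below, where the word time spans are assumed to come from the P2FA alignment.

import numpy as np

def word_level_features(frame_feats, frame_times, word_spans):
    # frame_feats: (n_frames, d) array; frame_times: (n_frames,) timestamps in seconds; word_spans: list of (start, end)
    out = []
    for start, end in word_spans:
        mask = (frame_times >= start) & (frame_times < end)
        out.append(frame_feats[mask].mean(axis=0) if mask.any() else np.zeros(frame_feats.shape[1]))
    return np.stack(out)    # (n_words, d): expected feature value over each spoken word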
Comparison Metrics
Different datasets in our experiments have different labels. For binary classification and multiclass classification we report accuracy A $^C$ where $C$ denotes the number of classes, and F1 score. For regression we report Mean Absolute Error MAE and Pearson's correlation $r$ . For all the metrics, higher values denote better performance, except MAE where lower values denote better performance.
Baseline Models
We compare the performance of our MARN to the following state-of-the-art models in multimodal sentiment analysis, speaker trait recognition, and emotion recognition. All baselines are trained for datasets for complete comparison.
TFN (Tensor Fusion Network) BIBREF1 explicitly models view-specific and cross-view dynamics by creating a multi-dimensional tensor that captures unimodal, bimodal and trimodal interactions across three modalities. It is the current state of the art for CMU-MOSI dataset.
BC-LSTM (Bidirectional Contextual LSTM) BIBREF5 is a model for context-dependent sentiment analysis and emotion recognition, currently state of the art on the IEMOCAP and MOUD datasets.
MV-LSTM (Multi-View LSTM) BIBREF20 is a recurrent model that designates special regions inside one LSTM to different views of the data.
C-MKL (Convolutional Neural Network (CNN) with Multiple Kernel Learning) BIBREF29 is a model which uses a CNN for visual feature extraction and multiple kernel learning for prediction.
THMM (Tri-modal Hidden Markov Model) BIBREF0 performs early fusion of the modalities by concatenation and uses a HMM for classification.
SVM (Support Vector Machine) BIBREF30 is trained on the concatenated multimodal features for classification or regression BIBREF11 , BIBREF22 , BIBREF23 . To compare with another strong non-neural baseline we use RF (Random Forest) BIBREF31 with similar multimodal inputs.
SAL-CNN (Selective Additive Learning CNN) BIBREF9 is a model that attempts to prevent identity-dependent information from being learned by using Gaussian corruption introduced to the neuron outputs.
EF-HCRF: (Hidden Conditional Random Field) BIBREF16 uses a HCRF to learn a set of latent variables conditioned on the concatenated input at each time step. We also implement the following variations: 1) EF-LDHCRF (Latent Discriminative HCRFs) BIBREF17 are a class of models that learn hidden states in a CRF using a latent code between observed concatenated input and hidden output. 2) MV-HCRF: Multi-view HCRF BIBREF18 is an extension of the HCRF for Multi-view data, explicitly capturing view-shared and view specific sub-structures. 3) MV-LDHCRF: is a variation of the MV-HCRF model that uses LDHCRF instead of HCRF. 4) EF-HSSHCRF: (Hierarchical Sequence Summarization HCRF) BIBREF19 is a layered model that uses HCRFs with latent variables to learn hidden spatio-temporal dynamics. 5) MV-HSSHCRF: further extends EF-HSSHCRF by performing Multi-view hierarchical sequence summary representation. The best performing early fusion model is reported as EF-HCRF $_{(\star )}$ while the best multi-view model is reported as MV-HCRF $_{(\star )}$ , where $\star \in \lbrace \textrm {h, l, s}\rbrace $ to represent HCRF, LDCRF and HSSCRF respectively.
DF (Deep Fusion) BIBREF14 is a model that trains one deep model for each modality and performs decision voting on the output of each modality network.
EF-LSTM (Early Fusion LSTM) concatenates the inputs from different modalities at each time-step and uses that as the input to a single LSTM. We also implement the Stacked, (EF-SLSTM) Bidirectional (EF-BLSTM) and Stacked Bidirectional (EF-SBLSTM) LSTMs for stronger baselines. The best performing model is reported as EF-LSTM $_{(\star )}$ , $\star \in \lbrace \textrm {-, s, b, sb}\rbrace $ denoting vanilla, stacked, bidirectional and stacked bidirectional LSTMs respectively.
Majority performs majority voting for classification tasks, and predicts the expected label for regression tasks. This baseline is useful as a lower bound of model performance.
Human performance is calculated for CMU-MOSI dataset which offers per annotator results. This is the accuracy of human performance in a one-vs-rest classification/regression.
Finally, MARN indicates our proposed model. Additionally, the modified baseline MARN (no MAB) removes the MAB and learns no dense cross-view dynamics code $z$ . This model can be seen as three disjoint LSTMs and is used to investigate the importance of modeling temporal cross-view dynamics. The next modified baseline MARN (no $\mathcal {A}$ ) removes the $\mathcal {A}$ deep network and sets all $K$ attention coefficients $a^k_t = 1$ ( $h^k_t = \tilde{h}^k_t$ ). This comparison shows whether explicitly outlining the cross-view dynamics using the attention coefficients is required. For MARN and MARN (no $\mathcal {A}$ ), $K$ is treated as a hyperparameter and the best value of $K$ is indicated in parenthesis next to the best reported result.
Results on CMU-MOSI dataset
We summarize the results on the CMU-MOSI dataset in Table 1 . We are able to achieve new state-of-the-art results for this dataset in all the metrics using the MARN. This highlights our model's capability in understanding sentiment aspect of multimodal communication.
Results on ICT-MMMO, YouTube, MOUD Datasets
We achieve state-of-the-art performance with significant improvement over all the comparison metrics for two English sentiment analysis datasets. Table 2 shows the comparison of our MARN with state-of-the-art approaches for ICT-MMMO dataset as well as the comparison for YouTube dataset. To assess the generalization of the MARN to speakers communicating in different languages, we compare with state-of-the-art approaches for sentiment analysis on MOUD, with opinion utterance video clips in Spanish. The final third of Table 2 shows these results where we also achieve significant improvement over state-of-the-art approaches.
Results on POM Dataset
We experiment on speaker traits recognition based on observed multimodal communicative behaviors. Table 3 shows the performance of the MARN on POM dataset, where it achieves state-of-the-art accuracies on all 11 speaker trait recognition tasks including persuasiveness and credibility.
Results on IEMOCAP Dataset
Our results for multimodal emotion recognition on the IEMOCAP dataset are reported in Table 4 . Our approach achieves state-of-the-art performance in emotion recognition, both emotion classification and continuous emotion regression, except for correlation in dominance, for which our results are competitive but not state of the art.
Discussion
Our experiments indicate outstanding performance of MARN in modeling various attributes related to human communication. In this section, we aim to better understand different characteristics of our model.
Properties of Attentions
To better understand the effects of attentions, we pose four fundamental research questions (RQ) in this section as RQ1: MARN (no MAB): whether the cross-view dynamics are helpful. RQ2: MARN (no $\mathcal {A}$ ): whether the attention coefficients are needed. RQ3: MARN: whether one attention is enough to extract all cross-view dynamics. RQ4: whether different tasks and datasets require different numbers of attentions.
RQ1: MARN (no MAB) model can only learn simple rules among modalities such as decision voting or simple co-occurrence rules such as Tensor Fusion baseline. Across all datasets, MARN (no MAB) is outperformed by MARN. This indicates that continuous modeling of cross-view dynamics is crucial in understanding human communication.
RQ2: Whether or not the presence of the coefficients $a_t$ are crucial is an important research question. From the results tables, we notice that the MARN (no $\mathcal {A}$ ) baseline severely under-performs compared to MARN. This supports the importance of the attentions in the MAB. Without these attentions, MARN is not able to accurately model the cross-view dynamics.
RQ3: In our experiments the MARN with only one attention (like conventional attention models) under-performs compared to the models with multiple attentions. One could argue that the models with more attentions have more parameters, and as a result their better performance may not be due to better modeling of cross-view dynamics, but rather due to more parameters. However we performed extensive grid search on the number of parameters in MARN with one attention. Increasing the number of parameters further (by increasing dense layers, LSTHM cellsizes etc.) did not improve performance. This indicates that the better performance of MARN with multiple attentions is not due to the higher number of parameters but rather due to better modeling of cross-view dynamics.
RQ4: Different tasks and datasets require different number of attentions. This is highly dependent on each dataset's nature and the underlying interconnections between modalities.
Visualization of Attentions
We visually display how each attention is sensitive to different dimensions of LSTHM outputs in Figure 3 . Each column of the figure denoted by $a^k$ shows the behavior of the $k$ th attention on a sample video from CMU-MOSI. The left side of $a^k$ is $t=1$ and the right side is $t=20$ , since the sequence has 20 words. The $y$ axis shows what modality the dimension belongs to. Dark blue means high coefficients and red means low coefficients. Our observations (O) are detailed below:
O1: By comparing each of the attentions together, they show diversity on which dimensions they are sensitive to, indicating that each attention is sensitive to different cross-view dynamics.
O2: Some attention coefficients are not active (always red) throughout time. These dimensions carry only view-specific dynamics needed by that modality and not other modalities. Hence, they are not needed for cross-view dynamics and will carry no weight in their formation.
O3: Attentions change their behaviors across time. For some coefficients, these changes are more drastic than the others. We suspect that the less drastic the change in an attention dimension over time, the higher the chances of that dimension being part of multiple cross-view dynamics. Thus more attentions activate this important dimension.
O4: Some attentions focus on cross-view dynamics that involve only two modalities. For example, in $a^3$ , the audio modality has no dark blue dimensions, while in $a^1$ all the modalities have dark blue dimensions. The attentions seem to have residual effects. $a^1$ shows activations over a broad set of variables while $a^4$ shows activation for fewer sets, indicating that attentions could learn to act in a complementary way.
Conclusion
In this paper we modeled multimodal human communication using a novel neural approach called the Multi-attention Recurrent Network (MARN). Our approach is designed to model both view-specific dynamics as well as cross-view dynamics continuously through time. View-specific dynamics are modeled using a Long-short Term Hybrid Memory (LSTHM) for each modality. Various cross-view dynamics are identified at each time-step using the Multi-attention Block (MAB) which outputs a multimodal neural code for the hybrid memory of LSTHM. MARN achieves state-of-the-art results in 6 publicly available datasets and across 16 different attributes related to understanding human communication.
Acknowledgements
This project was partially supported by Oculus research grant. We thank the reviewers for their valuable feedback. | Language, Vision, Acoustic |
082bc58e1a2a65fc1afec4064a51e4c785674fd7 | 082bc58e1a2a65fc1afec4064a51e4c785674fd7_0 | Q: What is the difference between Long-short Term Hybrid Memory and LSTMs?
Text: Introduction
Humans communicate using a highly complex structure of multimodal signals. We employ three modalities in a coordinated manner to convey our intentions: language modality (words, phrases and sentences), vision modality (gestures and expressions), and acoustic modality (paralinguistics and changes in vocal tones) BIBREF0 . Understanding this multimodal communication is natural for humans; we do it subconsciously in the cerebrum of our brains everyday. However, giving Artificial Intelligence (AI) the capability to understand this form of communication the same way humans do, by incorporating all involved modalities, is a fundamental research challenge. Giving AI the capability to understand human communication narrows the gap in computers' understanding of humans and opens new horizons for the creation of many intelligent entities.
The coordination between the different modalities in human communication introduces view-specific and cross-view dynamics BIBREF1 . View-specific dynamics refer to dynamics within each modality independent of other modalities. For example, the arrangement of words in a sentence according to the generative grammar of the language (language modality) or the activation of facial muscles for the presentation of a smile (vision modality). Cross-view dynamics refer to dynamics between modalities and are divided into synchronous and asynchronous categories. An example of synchronous cross-view dynamics is the simultaneous co-occurrence of a smile with a positive sentence and an example of asynchronous cross-view dynamics is the delayed occurrence of a laughter after the end of sentence. For machines to understand human communication, they must be able to understand these view-specific and cross-view dynamics.
To model these dual dynamics in human communication, we propose a novel deep recurrent neural model called the Multi-attention Recurrent Network (MARN). MARN is distinguishable from previous approaches in that it explicitly accounts for both view-specific and cross-view dynamics in the network architecture and continuously models both dynamics through time. In MARN, view-specific dynamics within each modality are modeled using a Long-short Term Hybrid Memory (LSTHM) assigned to that modality. The hybrid memory allows each modality's LSTHM to store important cross-view dynamics related to that modality. Cross-view dynamics are discovered at each recurrence time-step using a specific neural component called the Multi-attention Block (MAB). The MAB is capable of simultaneously finding multiple cross-view dynamics in each recurrence timestep. The MARN resembles the mechanism of our brains for understanding communication, where different regions independently process and understand different modalities BIBREF2 , BIBREF3 – our LSTHM – and are connected together using neural links for multimodal information integration BIBREF4 – our MAB. We benchmark MARN by evaluating its understanding of different aspects of human communication covering sentiment of speech, emotions conveyed by the speaker and displayed speaker traits. We perform extensive experiments on 16 different attributes related to human communication on public multimodal datasets. Our approach shows state-of-the-art performance in modeling human communication for all datasets.
Related Work
Modeling multimodal human communication has been studied previously. Past approaches can be categorized as follows:
Non-temporal Models: Studies have focused on simplifying the temporal aspect of cross-view dynamics BIBREF5 , BIBREF6 , BIBREF7 in order to model co-occurrences of information across the modalities. In these models, each modality is summarized in a representation by collapsing the time dimension, such as averaging the modality information through time BIBREF8 . While these models are successful in understanding co-occurrences, the lack of temporal modeling is a major flaw, as these models cannot deal with multiple contradictory pieces of evidence, e.g., if a smile and a frown happen together in an utterance. Furthermore, these approaches cannot accurately model long sequences since the representation over long periods of time becomes less informative.
Early Fusion: Approaches have used multimodal input feature concatenation instead of modeling view-specific and cross-view dynamics explicitly. In other words, these approaches rely on generic models (such as Support Vector Machines or deep neural networks) to learn both view-specific and cross-view dynamics without any specific model design. This concatenation technique is known as early fusion BIBREF9 , BIBREF10 . Often, these early fusion approaches remove the time factor as well BIBREF11 , BIBREF0 . We additionally compare to a stronger recurrent baseline that uses early fusion while maintaining the factor of time. A shortcoming of these models is the lack of detailed modeling for view-specific dynamics, which in turn affects the modeling of cross-view dynamics, as well as causing overfitting on input data BIBREF12 .
Late Fusion: Late fusion methods learn different models for each modality and combine the outputs using decision voting BIBREF13 , BIBREF14 . While these methods are generally strong in modeling view-specific dynamics, they have shortcomings for cross-view dynamics since these inter-modality dynamics are normally more complex than a decision vote. As an example of this shortcoming, if a model is trained for sentiment analysis using the vision modality and predicts negative sentiment, late fusion models have no access to whether this negative sentiment was due to a frowning face or a disgusted face.
Multi-view Learning: Extensions of Hidden Markov Models BIBREF15 and Hidden Conditional Random Fields BIBREF16 , BIBREF17 have been proposed for learning from multiple different views (modalities) BIBREF18 , BIBREF19 . Extensions of LSTMs have also been proposed in a multi-view setting BIBREF20 .
MARN is different from the first category since we model both view-specific and cross-view dynamics. It differs from the second and third categories since we explicitly model view-specific dynamics using an LSTHM for each modality as well as cross-view dynamics using the MAB. Finally, MARN is different from the fourth category since it explicitly models view-specific dynamics and proposes more advanced temporal modeling of cross-view dynamics.
MARN Model
In this section we outline our pipeline for human communication comprehension: the Multi-attention Recurrent Network (MARN). MARN has two key components: Long-short Term Hybrid Memory and Multi-attention Block. Long-short Term Hybrid Memory (LSTHM) is an extension of the Long-short Term Memory (LSTM) by reformulating the memory component to carry hybrid information. LSTHM is intrinsically designed for multimodal setups and each modality is assigned a unique LSTHM. LSTHM has a hybrid memory that stores view-specific dynamics of its assigned modality and cross-view dynamics related to its assigned modality. The component that discovers cross-view dynamics across different modalities is called the Multi-attention Block (MAB). The MAB first uses information from hidden states of all LSTHMs at a timestep to regress coefficients to outline the multiple existing cross-view dynamics among them. It then weights the output dimensions based on these coefficients and learns a neural cross-view dynamics code for LSTHMs to update their hybrid memories. Figure 1 shows the overview of the MARN. MARN is differentiable end-to-end which allows the model to be learned efficiently using gradient decent approaches. In the next subsection, we first outline the Long-short Term Hybrid Memory. We then proceed to outline the Multi-attention Block and describe how the two components are integrated in the MARN.
Long-short Term Hybrid Memory
Long-short Term Memory (LSTM) networks have been among the most successful models in learning from sequential data BIBREF21 . The most important component of the LSTM is a memory which stores a representation of its input through time. In the LSTHM model, we seek to build a memory mechanism for each modality which in addition to storing view-specific dynamics, is also able to store the cross-view dynamics that are important for that modality. This allows the memory to function in a hybrid manner.
The Long-short Term Hybrid Memory is formulated in Algorithm 1. Given a set of $M$ modalities in the domain of the data, $M$ LSTHMs are built in the MARN pipeline. For each modality $m \in M$, the input to the $m$th LSTHM is of the form $\mathbf{X}^m=\lbrace {x}_{1}^m, {x}_{2}^m, {x}_{3}^m, \cdots, {x}_{T}^m \ ; {x}_{t}^m \in \mathbb{R}^{d_{in}^m} \rbrace$, where ${x}^m_{t}$ is the input at time $t$ and $d^m_{in}$ is the dimensionality of the input of modality $m$. For example, if $m=l$ (language), the input ${x}^l_t$ at each time step $t$ can be a word vector of dimension $d^l_{in}$. $d^m_{mem}$ is the dimensionality of the memory for modality $m$. $\sigma$ is the (hard-)sigmoid activation function and $\tanh$ is the tangent hyperbolic activation function. $\oplus$ denotes vector concatenation and $\odot$ denotes element-wise multiplication. Similar to the LSTM, $i^m_t$ is the input gate, $f^m_t$ is the forget gate, and $o^m_t$ is the output gate. $\bar{c}^m_t$ is the proposed update to the hybrid memory $c^m_t$ at time $t$, and $h^m_t$ is the time-distributed output of each modality.
The neural cross-view dynamics code $z_{t}$ is the output of the Multi-attention Block at the previous time-step and is discussed in detail in next subsection. This neural cross-view dynamics code $z_{t}$ is passed to each of the individual LSTHMs and is the hybrid factor, allowing each individual LSTHM to carry cross-view dynamics that it finds related to its modality. The set of weights $W^m_*$ , $U^m_*$ and $V^m_*$ respectively map the input of LSTHM $x^m_t$ , output of LSTHM $h^m_t$ , and neural cross-view dynamics code $z_{t}$ to each LSTHM memory space using affine transformations.
Algorithm 1: Multi-attention Recurrent Network (MARN), Long-short Term Hybrid Memory (LSTHM) and Multi-attention Block (MAB) Formulation

$\textrm{MARN}(\mathbf{X}^m)$:
    $c_0, h_0, z_0 \leftarrow \mathbf{0}$
    for $t = 1, ..., T$:
        $h_t \leftarrow \textrm{LSTHM\_Step}(\bigcup_{m \in M} \lbrace x^m_t\rbrace, z_{t-1})$
        $z_t \leftarrow \textrm{MAB\_Step}(h_t)$
    return $h_T, z_T$

$\textrm{LSTHM\_Step}(\bigcup_{m \in M} \lbrace x^m_t\rbrace, z_{t-1})$:
    for $m \in M$:  $\triangleleft$ for all the $M$ modalities
        $i^m_t \leftarrow \sigma (W^m_i\ x^m_t + U^m_i\ h^m_{t-1} + V^m_i\ z_{t-1} + b^m_i)$
        $f^m_t \leftarrow \sigma (W^m_f\ x^m_t + U^m_f\ h^m_{t-1} + V^m_f\ z_{t-1} + b^m_f)$
        $o^m_t \leftarrow \sigma (W^m_o\ x^m_t + U^m_o\ h^m_{t-1} + V^m_o\ z_{t-1} + b^m_o)$
        $\bar{c}^m_t \leftarrow W^m_{\bar{c}}\ x^m_t + U^m_{\bar{c}}\ h^m_{t-1} + V^m_{\bar{c}}\ z_{t-1} + b^m_{\bar{c}}$
        $c^m_t \leftarrow f^m_t \odot c^m_{t-1} + i^m_t \odot \tanh (\bar{c}^m_t)$
        $h^m_t \leftarrow o^m_t \odot \tanh (c^m_t)$
    return $h_t$

$\textrm{MAB\_Step}(h_t)$:
    $a_t \leftarrow \mathcal{A}(h_t; \theta_{\mathcal{A}})$  $\triangleleft$ $K$ output coefficients
    $\widetilde{h}_t \leftarrow a_t \odot \langle \Uparrow_K h_t \rangle$
    for $m \in M$:  $\triangleleft$ calculate cross-view dynamics
        $s^m_t \leftarrow \mathcal{C}_m (\widetilde{h}^m_t; \theta_{\mathcal{C}_m})$
    $s_t \leftarrow \bigoplus_{m \in M} s^m_t$
    $z_t \leftarrow \mathcal{G}(s_t; \theta_{\mathcal{G}})$  $\triangleleft$ deep network producing the cross-view dynamics code
    return $z_t$
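To make the LSTHM update concrete, the following is a minimal NumPy sketch of a single LSTHM step for one modality. The parameter container p, the array shapes, and the function names are illustrative assumptions rather than the released MARN implementation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lsthm_step(x_t, h_prev, c_prev, z_prev, p):
    # p holds the affine maps W_* (input), U_* (recurrent), V_* (cross-view code) and biases b_*
    # for one modality m; z_prev is the cross-view dynamics code from the previous time step.
    i = sigmoid(p["W_i"] @ x_t + p["U_i"] @ h_prev + p["V_i"] @ z_prev + p["b_i"])  # input gate
    f = sigmoid(p["W_f"] @ x_t + p["U_f"] @ h_prev + p["V_f"] @ z_prev + p["b_f"])  # forget gate
    o = sigmoid(p["W_o"] @ x_t + p["U_o"] @ h_prev + p["V_o"] @ z_prev + p["b_o"])  # output gate
    c_bar = p["W_c"] @ x_t + p["U_c"] @ h_prev + p["V_c"] @ z_prev + p["b_c"]       # proposed update
    c = f * c_prev + i * np.tanh(c_bar)   # hybrid memory: view-specific plus cross-view content
    h = o * np.tanh(c)                    # time-distributed output of the modality
    return h, c

The only departure from a standard LSTM cell is the additional V_* z_prev term in every gate and in the proposed memory update, which is how the cross-view dynamics code feeds back into each modality's hybrid memory.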
Multi-attention Block
At each timestamp $t$ , various cross-view dynamics across the modalities can occur simultaneously. For example, the first instance can be the connection between a smile and positive phrase both happening at time $t$ . A second instance can be the occurrence of the same smile at time $t$ being connected to an excited voice at time $t-4$ , that was carried to time $t$ using the audio LSTHM memory. In both of these examples, cross-view dynamics exist at time $t$ . Therefore, not only do cross-view dynamics span across various modalities, they are scattered across time forming asynchronous cross-view dynamics.
The Multi-attention Block is a network that can capture multiple different, possibly asynchronous, cross-view dynamics and encode all of them in a neural cross-view dynamics code $z_t$. In the most important step of the Multi-attention Block, different dimensions of LSTHM outputs $h^m_t$ are assigned attention coefficients according to whether or not they form cross-view dynamics. These attention coefficients will be high if the dimension contributes to the formation of a cross-view dynamic and low if they are irrelevant. The coefficient assignment is performed multiple times due to the existence of possibly multiple such cross-view dynamics across the outputs of the LSTHMs. The Multi-attention Block is formulated in Algorithm 1. We assume a maximum of $K$ cross-view dynamics to be present at each timestamp $t$. To obtain the $K$ attention coefficients, $K$ softmax distributions are assigned to the concatenated LSTHM memories using a deep neural network $\mathcal{A} : \mathbb{R}^{d_{mem}} \mapsto \mathbb{R}^{K \times d_{mem}}$ with $d_{mem} = \sum_{m \in M} d^{m}_{mem}$. At each timestep $t$, the output of the LSTHMs is the set $\lbrace h^m_t : m \in M, h^m_t \in \mathbb{R}^{d^m_{mem}}\rbrace$. $\mathcal{A}$ takes the concatenation of LSTHM outputs $h_t = \bigoplus_{m \in M} h^m_t$ as input and outputs a set of $K$ attentions $a_t$ with $a^k_t \in \mathbb{R}^{d_{mem}}$, $1 \le k \le K$. $\mathcal{A}$ has a softmax layer at the top of the network which takes the softmax activation along each one of the $K$ dimensions of its output $a_t$. As a result, each $a^k_t$ forms a probability distribution over the output dimensions. $h_t$ is then broadcast (from $\mathbb{R}^{d_{mem}}$ to $\mathbb{R}^{K \times d_{mem}}$) and element-wise multiplied by the attentions $a_t$ to produce the attended outputs $\widetilde{h}_t = a_t \odot \langle \Uparrow_K h_t \rangle$, $\widetilde{h}_t \in \mathbb{R}^{K \times d_{mem}}$. $\langle \Uparrow_K \cdot \rangle$ denotes broadcasting by parameter $K$.
The first dimension of $\widetilde{h}_t$ contains information needed for the first cross-view dynamic highlighted using $a^1_t$, the second dimension of $\widetilde{h}_t$ contains information for the second cross-view dynamic using $a^2_t$, and so on until $K$. $\widetilde{h}_t$ is high dimensional but ideally considered sparse due to the presence of dimensions with zero value after element-wise multiplication with attentions. Therefore, $\widetilde{h}_t$ is split into $M$ different parts – one for each modality $m$ – and undergoes dimensionality reduction using $\mathcal{C}_m : \mathbb{R}^{K \times d^m_{mem}} \mapsto \mathbb{R}^{d^m_{local}}, \forall m \in M$ with $d^m_{local}$ as the target low dimension of each modality split in $\widetilde{h}_t$. The set of networks $\lbrace \mathcal{C}_m : m \in M \rbrace$ maps the attended outputs of each modality $\widetilde{h}^m_t$ to the same vector space. This dimensionality reduction produces a dense code $s^m_t$ for the $K$ times attended dimensions of each modality. Finally, the set of all attended modality outputs, $s_t = \bigoplus_{m \in M} s^m_t$, is passed into a deep neural network to generate the neural cross-view dynamics code $z_t$ at time $t$.
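The following NumPy sketch outlines one MAB step. The callables attend, reduce_m and fuse stand in for the deep networks $\mathcal{A}$, $\mathcal{C}_m$ and the final fusion network; their architectures, and the fixed modality ordering, are illustrative assumptions.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mab_step(h_by_modality, attend, reduce_m, fuse, K):
    # h_by_modality: dict mapping each modality to its LSTHM output vector at time t
    order = sorted(h_by_modality)
    h_t = np.concatenate([h_by_modality[m] for m in order])      # shape (d_mem,)
    a_t = softmax(attend(h_t).reshape(K, -1), axis=1)            # K attention distributions
    h_tilde = a_t * h_t[None, :]                                 # broadcast h_t K times and weight it
    s_parts, start = [], 0
    for m in order:                                              # split per modality, reduce each part
        d_m = h_by_modality[m].shape[0]
        s_parts.append(reduce_m[m](h_tilde[:, start:start + d_m].reshape(-1)))
        start += d_m
    s_t = np.concatenate(s_parts)
    return fuse(s_t)                                             # neural cross-view dynamics code z_t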
Experimental Methodology
In this paper we benchmark MARN's understanding of human communication on three tasks: 1) multimodal sentiment analysis, 2) multimodal speaker traits recognition and 3) multimodal emotion recognition. We perform experimentations on six publicly available datasets and compare the performance of MARN with the performance of state-of-the-art approaches on the same datasets. To ensure generalization of the model, all the datasets are split into train, validation and test sets that include no identical speakers between sets, i.e. all the speakers in the test set are different from the train and validation sets. All models are re-trained on the same train/validation/test splits. To train the MARN for different tasks, the final outputs $h_T$ and neural cross-view dynamics code $z_T$ are the inputs to another deep neural network that performs classification (categorical cross-entropy loss function) or regression (mean squared error loss function). The code, hyperparameters and instruction on data splits are publicly available at https://github.com/A2Zadeh/MARN.
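As a small illustration of this training setup, the sketch below shows how the final LSTHM outputs and the final cross-view code could be combined and fed to a prediction head; the head's architecture is an assumption for the example, not the authors' exact configuration.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def predict_from_final_states(h_T, z_T, head, task="classification"):
    # h_T: concatenated final LSTHM outputs of all modalities; z_T: final cross-view dynamics code.
    out = head(np.concatenate([h_T, z_T]))
    if task == "classification":
        return softmax(out)   # trained with a categorical cross-entropy loss
    return out                # scalar output trained with a mean squared error loss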
Following is the description of different benchmarks.
Multimodal Sentiment Analysis
CMU-MOSI The CMU-MOSI dataset BIBREF11 is a collection of 2199 opinion video clips. Each opinion video is annotated with sentiment in the range [-3,3]. There are 1284 segments in the train set, 229 in the validation set and 686 in the test set.
ICT-MMMO The ICT-MMMO dataset BIBREF7 consists of online social review videos that encompass a strong diversity in how people express opinions, annotated at the video level for sentiment. The dataset contains 340 multimodal review videos, of which 220 are used for training, 40 for validation and 80 for testing.
YouTube The YouTube dataset BIBREF0 contains videos from the social media web site YouTube that span a wide range of product reviews and opinion videos. Out of 46 videos, 30 are used for training, 5 for validation and 11 for testing.
MOUD To show that MARN is generalizable to other languages, we perform experimentation on the MOUD dataset BIBREF22 which consists of product review videos in Spanish. Each video consists of multiple segments labeled to display positive, negative or neutral sentiment. Out of 79 videos in the dataset, 49 are used for training, 10 for validation and 20 for testing.
Multimodal Speaker Trait Recognition
POM Persuasion Opinion Multimodal (POM) dataset BIBREF23 contains movie review videos annotated for the following speaker traits: confidence, passion, dominance, credibility, entertaining, reserved, trusting, relaxed, nervous, humorous and persuasive. 903 videos were split into 600 were for training, 100 for validation and 203 for testing.
Multimodal Emotion Recognition
IEMOCAP The IEMOCAP dataset BIBREF24 consists of 151 videos of recorded dialogues, with 2 speakers per session for a total of 302 videos across the dataset. Each segment is annotated for the presence of 9 emotions (angry, excited, fear, sad, surprised, frustrated, happy, disappointed and neutral) as well as valence, arousal and dominance. The dataset is recorded across 5 sessions with 5 pairs of speakers. To ensure speaker independent learning, the dataset is split at the level of sessions: training is performed on 3 sessions (6 distinct speakers) while validation and testing are each performed on 1 session (2 distinct speakers).
Multimodal Computational Descriptors
All the datasets consist of videos where only one speaker is in front of the camera. The descriptors we used for each of the modalities are as follows:
Language All the datasets provide manual transcriptions. We use pre-trained word embeddings (glove.840B.300d) BIBREF25 to convert the transcripts of videos into a sequence of word vectors. The dimension of the word vectors is 300.
Vision Facet BIBREF26 is used to extract a set of features including per-frame basic and advanced emotions and facial action units as indicators of facial muscle movement.
Acoustic We use COVAREP BIBREF27 to extract low level acoustic features including 12 Mel-frequency cepstral coefficients (MFCCs), pitch tracking and voiced/unvoiced segmenting features, glottal source parameters, peak slope parameters and maxima dispersion quotients.
Modality Alignment To reach the same time alignment between different modalities we choose the granularity of the input to be at the level of words. The words are aligned with audio using P2FA BIBREF28 to get their exact utterance times. Time step $t$ represents the $t$ th spoken word in the transcript. We treat speech pause as a word with vector values of all zero across dimensions. The visual and acoustic modalities follow the same granularity. We use expected feature values across the entire word for vision and acoustic since they are extracted at a higher frequency (30 Hz for vision and 100 Hz for acoustic).
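As an illustration of this word-level alignment, the sketch below averages frame-level visual or acoustic features over each word's utterance interval. The zero-vector treatment and the word-level granularity follow the description above, but the function signature and the fallback for empty spans are assumptions for the example.

import numpy as np

def align_to_words(frame_feats, frame_times, word_spans):
    # frame_feats: (n_frames, d) features sampled at 30 Hz (vision) or 100 Hz (acoustic)
    # frame_times: (n_frames,) timestamp of each frame in seconds
    # word_spans: list of (start, end) utterance times per word, e.g. from a P2FA alignment
    aligned = []
    for start, end in word_spans:
        mask = (frame_times >= start) & (frame_times < end)
        if mask.any():
            aligned.append(frame_feats[mask].mean(axis=0))       # expected value over the word
        else:
            aligned.append(np.zeros(frame_feats.shape[1]))       # e.g. a pause mapped to zeros
    return np.stack(aligned)                                     # (n_words, d): one step per word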
Comparison Metrics
Different datasets in our experiments have different labels. For binary classification and multiclass classification we report accuracy A $^C$ where $C$ denotes the number of classes, and F1 score. For regression we report Mean Absolute Error MAE and Pearson's correlation $r$ . For all the metrics, higher values denote better performance, except MAE where lower values denote better performance.
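These metrics can be computed with standard tooling; a brief sketch using scikit-learn and SciPy follows. The weighted F1 averaging is an assumption for the multiclass case, since the text does not specify an averaging scheme.

from sklearn.metrics import accuracy_score, f1_score, mean_absolute_error
from scipy.stats import pearsonr

def classification_metrics(y_true, y_pred):
    # A^C accuracy over C classes, plus F1 score (weighted across classes by assumption)
    return accuracy_score(y_true, y_pred), f1_score(y_true, y_pred, average="weighted")

def regression_metrics(y_true, y_pred):
    # MAE (lower is better) and Pearson's r (higher is better)
    return mean_absolute_error(y_true, y_pred), pearsonr(y_true, y_pred)[0]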
Baseline Models
We compare the performance of our MARN to the following state-of-the-art models in multimodal sentiment analysis, speaker trait recognition, and emotion recognition. All baselines are trained for datasets for complete comparison.
TFN (Tensor Fusion Network) BIBREF1 explicitly models view-specific and cross-view dynamics by creating a multi-dimensional tensor that captures unimodal, bimodal and trimodal interactions across three modalities. It is the current state of the art for CMU-MOSI dataset.
BC-LSTM (Bidirectional Contextual LSTM) BIBREF5 is a model for context-dependent sentiment analysis and emotion recognition, currently state of the art on the IEMOCAP and MOUD datasets.
MV-LSTM (Multi-View LSTM) BIBREF20 is a recurrent model that designates special regions inside one LSTM to different views of the data.
C-MKL (Convolutional Neural Network (CNN) with Multiple Kernel Learning) BIBREF29 is a model which uses a CNN for visual feature extraction and multiple kernel learning for prediction.
THMM (Tri-modal Hidden Markov Model) BIBREF0 performs early fusion of the modalities by concatenation and uses a HMM for classification.
SVM (Support Vector Machine) BIBREF30 is trained on the concatenated multimodal features for classification or regression BIBREF11 , BIBREF22 , BIBREF23 . To compare to another strong non-neural baseline, we use RF (Random Forest) BIBREF31 with similar multimodal inputs.
SAL-CNN (Selective Additive Learning CNN) BIBREF9 is a model that attempts to prevent identity-dependent information from being learned by using Gaussian corruption introduced to the neuron outputs.
EF-HCRF: (Hidden Conditional Random Field) BIBREF16 uses a HCRF to learn a set of latent variables conditioned on the concatenated input at each time step. We also implement the following variations: 1) EF-LDHCRF (Latent Discriminative HCRFs) BIBREF17 are a class of models that learn hidden states in a CRF using a latent code between observed concatenated input and hidden output. 2) MV-HCRF: Multi-view HCRF BIBREF18 is an extension of the HCRF for Multi-view data, explicitly capturing view-shared and view specific sub-structures. 3) MV-LDHCRF: is a variation of the MV-HCRF model that uses LDHCRF instead of HCRF. 4) EF-HSSHCRF: (Hierarchical Sequence Summarization HCRF) BIBREF19 is a layered model that uses HCRFs with latent variables to learn hidden spatio-temporal dynamics. 5) MV-HSSHCRF: further extends EF-HSSHCRF by performing Multi-view hierarchical sequence summary representation. The best performing early fusion model is reported as EF-HCRF $_{(\star )}$ while the best multi-view model is reported as MV-HCRF $_{(\star )}$ , where $\star \in \lbrace \textrm {h, l, s}\rbrace $ to represent HCRF, LDCRF and HSSCRF respectively.
DF (Deep Fusion) BIBREF14 is a model that trains one deep model for each modality and performs decision voting on the output of each modality network.
EF-LSTM (Early Fusion LSTM) concatenates the inputs from different modalities at each time-step and uses that as the input to a single LSTM. We also implement the Stacked, (EF-SLSTM) Bidirectional (EF-BLSTM) and Stacked Bidirectional (EF-SBLSTM) LSTMs for stronger baselines. The best performing model is reported as EF-LSTM $_{(\star )}$ , $\star \in \lbrace \textrm {-, s, b, sb}\rbrace $ denoting vanilla, stacked, bidirectional and stacked bidirectional LSTMs respectively.
Majority performs majority voting for classification tasks, and predicts the expected label for regression tasks. This baseline is useful as a lower bound of model performance.
Human performance is calculated for CMU-MOSI dataset which offers per annotator results. This is the accuracy of human performance in a one-vs-rest classification/regression.
Finally, MARN indicates our proposed model. Additionally, the modified baseline MARN (no MAB) removes the MAB and learns no dense cross-view dynamics code $z$ . This model can be seen as three disjoint LSTMs and is used to investigate the importance of modeling temporal cross-view dynamics. The next modified baseline MARN (no $\mathcal {A}$ ) removes the $\mathcal {A}$ deep network and sets all $K$ attention coefficients $a^k_t = 1$ ( $h^k_t = \tilde{h}^k_t$ ). This comparison shows whether explicitly outlining the cross-view dynamics using the attention coefficients is required. For MARN and MARN (no $\mathcal {A}$ ), $K$ is treated as a hyperparameter and the best value of $K$ is indicated in parentheses next to the best reported result.
Results on CMU-MOSI dataset
We summarize the results on the CMU-MOSI dataset in Table 1 . We are able to achieve new state-of-the-art results for this dataset in all the metrics using the MARN. This highlights our model's capability in understanding sentiment aspect of multimodal communication.
Results on ICT-MMMO, YouTube, MOUD Datasets
We achieve state-of-the-art performance with significant improvement over all the comparison metrics for two English sentiment analysis datasets. Table 2 shows the comparison of our MARN with state-of-the-art approaches for ICT-MMMO dataset as well as the comparison for YouTube dataset. To assess the generalization of the MARN to speakers communicating in different languages, we compare with state-of-the-art approaches for sentiment analysis on MOUD, with opinion utterance video clips in Spanish. The final third of Table 2 shows these results where we also achieve significant improvement over state-of-the-art approaches.
Results on POM Dataset
We experiment on speaker traits recognition based on observed multimodal communicative behaviors. Table 3 shows the performance of the MARN on POM dataset, where it achieves state-of-the-art accuracies on all 11 speaker trait recognition tasks including persuasiveness and credibility.
Results on IEMOCAP Dataset
Our results for multimodal emotion recognition on the IEMOCAP dataset are reported in Table 4 . Our approach achieves state-of-the-art performance in emotion recognition, both for emotion classification and for continuous emotion regression, except for correlation in dominance, where our results are competitive but not state of the art.
Discussion
Our experiments indicate outstanding performance of MARN in modeling various attributes related to human communication. In this section, we aim to better understand different characteristics of our model.
Properties of Attentions
To better understand the effects of attentions, we pose four fundamental research questions (RQ) in this section as RQ1: MARN (no MAB): whether the cross-view dynamics are helpful. RQ2: MARN (no $\mathcal {A}$ ): whether the attention coefficients are needed. RQ3: MARN: whether one attention is enough to extract all cross-view dynamics. RQ4: whether different tasks and datasets require different numbers of attentions.
RQ1: MARN (no MAB) model can only learn simple rules among modalities such as decision voting or simple co-occurrence rules such as Tensor Fusion baseline. Across all datasets, MARN (no MAB) is outperformed by MARN. This indicates that continuous modeling of cross-view dynamics is crucial in understanding human communication.
RQ2: Whether or not the presence of the coefficients $a_t$ is crucial is an important research question. From the results tables, we notice that the MARN (no $\mathcal {A}$ ) baseline severely under-performs compared to MARN. This supports the importance of the attentions in the MAB. Without these attentions, MARN is not able to accurately model the cross-view dynamics.
RQ3: In our experiments the MARN with only one attention (like conventional attention models) under-performs compared to the models with multiple attentions. One could argue that the models with more attentions have more parameters, and as a result their better performance may not be due to better modeling of cross-view dynamics, but rather due to more parameters. However we performed extensive grid search on the number of parameters in MARN with one attention. Increasing the number of parameters further (by increasing dense layers, LSTHM cellsizes etc.) did not improve performance. This indicates that the better performance of MARN with multiple attentions is not due to the higher number of parameters but rather due to better modeling of cross-view dynamics.
RQ4: Different tasks and datasets require different number of attentions. This is highly dependent on each dataset's nature and the underlying interconnections between modalities.
Visualization of Attentions
We visually display how each attention is sensitive to different dimensions of LSTHM outputs in Figure 3 . Each column of the figure denoted by $a^k$ shows the behavior of the $k$ th attention on a sample video from CMU-MOSI. The left side of $a^k$ is $t=1$ and the right side is $t=20$ , since the sequence has 20 words. The $y$ axis shows what modality the dimension belongs to. Dark blue means high coefficients and red means low coefficients. Our observations (O) are detailed below:
O1: By comparing each of the attentions together, they show diversity on which dimensions they are sensitive to, indicating that each attention is sensitive to different cross-view dynamics.
O2: Some attention coefficients are not active (always red) throughout time. These dimensions carry only view-specific dynamics needed by that modality and not other modalities. Hence, they are not needed for cross-view dynamics and will carry no weight in their formation.
O3: Attentions change their behaviors across time. For some coefficients, these changes are more drastic than the others. We suspect that the less drastic the change in an attention dimension over time, the higher the chances of that dimension being part of multiple cross-view dynamics. Thus more attentions activate this important dimension.
O4: Some attentions focus on cross-view dynamics that involve only two modalities. For example, in $a^3$ , the audio modality has no dark blue dimensions, while in $a^1$ all the modalities have dark blue dimensions. The attentions seem to have residual effects. $a^1$ shows activations over a broad set of variables while $a^4$ shows activation for fewer sets, indicating that attentions could learn to act in a complementary way.
Conclusion
In this paper we modeled multimodal human communication using a novel neural approach called the Multi-attention Recurrent Network (MARN). Our approach is designed to model both view-specific dynamics as well as cross-view dynamics continuously through time. View-specific dynamics are modeled using a Long-short Term Hybrid Memory (LSTHM) for each modality. Various cross-view dynamics are identified at each time-step using the Multi-attention Block (MAB) which outputs a multimodal neural code for the hybrid memory of LSTHM. MARN achieves state-of-the-art results in 6 publicly available datasets and across 16 different attributes related to understanding human communication.
Acknowledgements
This project was partially supported by Oculus research grant. We thank the reviewers for their valuable feedback. | Long-short Term Hybrid Memory (LSTHM) is an extension of the Long-short Term Memory (LSTM) |
46563a1fb2c3e1b39a185e4cbb3ee1c80c8012b7 | 46563a1fb2c3e1b39a185e4cbb3ee1c80c8012b7_0 | Q: Do they report results only on English data?
Text: Introduction
A metaphor is a way of forcing the normal boundaries of a word's meaning in order to better express an experience, a concept or an idea. To a native speaker's ear some metaphors sound more conventional (like the usage of the words ear and sound in this sentence), others more original. This is not the only dimension along which to judge a metaphor. One of the most important qualities of a metaphor is its appropriateness, its aptness: how good is a metaphor for conveying a given experience or concept. While a metaphor's degree of conventionality can be measured through probabilistic methods, like language models, it is harder to represent its aptness. BIBREF0 define aptness as “the extent to which a comparison captures important features of the topic".
It is possible to express an opinion about some metaphors' and similes' aptness (at least to a degree) without previously knowing what they are trying to convey, or the context in which they appear. For example, we don't need a particular context or frame of reference to construe the simile She was screaming like a turtle as strange, and less apt for expressing the quality of a scream than She was screaming like a banshee. In this case, the reason why the simile in the second sentence works best is intuitive. A salient characteristic of a banshee is a powerful scream. Turtles are not known for screaming, and so it is harder to define the quality of a scream through such a comparison, except as a form of irony. Other cases are more complicated to decide upon. The simile crying like a fire in the sun (It's All Over Now, Baby Blue, Bob Dylan) is powerfully apt for many readers, but simply odd for others. Fire and sun are not known to cry in any way. But at the same time the simile can capture the association we draw between something strong and intense in other senses - vision, touch, etc. - and a loud cry.
Nonetheless, most metaphors and similes need some kind of context, or external reference point to be interpreted. The sentence The old lady had a heart of stone is apt if the old lady is cruel or indifferent, but it is inappropriate as a description of a situation in which the old lady is kind and caring. We assume that, to an average reader's sensibility, the sentence models the situation in a satisfactory way only in the first case.
This is the approach to metaphor aptness that we assume in this paper. Following BIBREF3 , we treat a metaphor as apt in relation to a literal expression that it paraphrases. If the metaphor is judged to be a good paraphrase, then it closely expresses the core information of the literal sentence through its metaphorical shift. We refer to the prediction of readers' judgments on the aptness candidates for the literal paraphrase of a metaphor as the metaphor paraphrase aptness task (MPAT). BIBREF3 address the MPAT by using Amazon Mechanical Turk (AMT) to obtain crowd sourced annotations of metaphor-paraphrase candidate pairs. They train a composite Deep Neural Network (DNN) on a portion of their annotated corpus, and test it on the remaining part. Testing involves using the DNN as a binary classifier on paraphrase candidates. They derive predictions of gradient paraphrase aptness for their test set, and assess them by Pearson coefficient correlation to the mean judgments of their crowd sourced annotation of this set. Both training and testing are done independently of any document context for the metaphorical sentence and its literal paraphrase candidates.
In this paper we study the role of context on readers' judgments concerning the aptness of metaphor paraphrase candidates. We look at the accuracy of BIBREF3 's DNN when trained and tested on contextually embedded metaphor-paraphrase pairs for the MPAT. In Section SECREF2 we describe an AMT experiment in which annotators judge metaphors and paraphrases embodied in small document contexts, and in Section SECREF3 we discuss the results of this experiment. In Section SECREF4 we describe our MPAT modeling experiment, and in Section SECREF5 we discuss the results of this experiment. Section SECREF6 briefly surveys some related work. In Section SECREF7 we draw conclusions from our study, and we indicate directions for future work in this area.
Annotating Metaphor-Paraphrase Pairs in Contexts
BIBREF3 have recently produced a dataset of paraphrases containing metaphors designed to allow both supervised binary classification and gradient ranking. This dataset contains several pairs of sentences, where in each pair the first sentence contains a metaphor, and the second is a literal paraphrase candidate.
This corpus was constructed with a view to representing a large variety of syntactic structures and semantic phenomena in metaphorical sentences. Many of these structures and phenomena do not occur as metaphorical expressions, with any frequency, in natural text and were therefore introduced through hand crafted examples.
Each pair of sentences in the corpus has been rated by AMT annotators for paraphrase aptness on a scale of 1-4, with 4 being the highest degree of aptness. In BIBREF3 's dataset, sentences come in groups of five, where the first element is the “reference element" with a metaphorical expression, and the remaining four sentences are “candidates" that stand in a degree of paraphrasehood to the reference. Here is an example of a metaphor-paraphrase candidate pair.
The average AMT paraphrase score for this pair is 4.0, indicating a high degree of aptness.
We extracted 200 sentence pairs from BIBREF3 's dataset and provided each pair with a document context consisting of a preceding and a following sentence, as in the following example.
One of the authors constructed most of these contexts by hand. In some cases, it was possible to locate the original metaphor in an existing document. This was the case for
For these cases, a variant of the existing context was added to both the metaphorical and the literal sentences. We introduced small modifications to keep the context short and clear, and to avoid copyright issues. We lightly modified the contexts of metaphors extracted from corpora when the original context was too long, i.e., when the contextual sentences of the selected metaphor were longer than the maximum length we specified for our corpus. In such cases we reduced the length of the sentence while preserving its meaning.
The context was designed to sound as natural as possible. Since the same context is used for metaphors and their literal candidate paraphrases, we tried to design short contexts that make sense for both the figurative and the literal sentences, even when the pair had been judged as non-paraphrases. We kept the context as neutral as possible in order to avoid a distortion in crowd source ratings.
For example, in the following pair of sentences, the literal sentence is not a good paraphrase of the figurative one (a simile).
We opted for a context that is natural for both sentences.
We sought to avoid, whenever possible, an incongruous context for one of the sentences that could influence our annotators' ratings.
We collected a sub-corpus of 200 contextually embedded pairs of sentences. We tried to keep our data as balanced as possible, drawing from all four rating classes of paraphrase aptness ratings (between 1 to 4) that BIBREF3 obtained. We selected 44 pairs of 1 ratings, 51 pairs of 2, 43 pairs of 3 and 62 pairs of 4.
We then used AMT crowd sourcing to rate the contextualized paraphrase pairs, so that we could observe the effect of document context on assessments of metaphor paraphrase aptness.
To test the reproducibility of BIBREF3 's ratings, we launched a pilot study for 10 original non-contextually embedded pairs, selected from all four classes of aptness. We observed that the annotators provided mean ratings very similar to those reported in BIBREF3 . The Pearson coefficient correlation between the mean judgments of our out-of-context pilot annotations and BIBREF3 's annotations for the same pairs was over 0.9. We then conducted an AMT annotation task for the 200 contextualized pairs. On average, 20 different annotators rated each pair. We considered as “rogue" those annotators who rated the large majority of pairs with very high or very low scores, and those who responded inconsistently to two “trap" pairs. After filtering out the rogues, we had an average of 14 annotators per pair.
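A possible implementation of this rogue-annotator filter is sketched below. The exact thresholds, e.g. what counts as a "large majority" of extreme scores, are assumptions, since the text does not fix them numerically.

import numpy as np

def filter_rogues(ratings, trap_scores, trap_ranges, extreme_share=0.8):
    # ratings: dict annotator -> array of 1-4 scores over the pairs they rated
    # trap_scores: dict annotator -> scores given to the two "trap" pairs
    # trap_ranges: list of (low, high) acceptable scores for each trap pair
    kept = {}
    for ann, scores in ratings.items():
        scores = np.asarray(scores)
        extreme = np.mean((scores == 1) | (scores == 4))          # share of extreme ratings
        consistent = all(lo <= s <= hi for s, (lo, hi) in zip(trap_scores[ann], trap_ranges))
        if extreme < extreme_share and consistent:
            kept[ann] = scores
    return kept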
Annotation Results
We found a Pearson correlation of 0.81 between the in-context and out-of-context mean human paraphrase ratings for our two corpora. This correlation is virtually identical to the one that BIBREF5 report for mean acceptability ratings of out-of-context to in-context sentences in their crowd source experiment. It is interesting that a relatively high level of ranking correspondence should occur in mean judgments for sentences presented out of and within document contexts, for two entirely distinct tasks.
Our main result concerns the effect of context on mean paraphrase judgments. We observed that context tends to flatten aptness ratings towards the center of the rating scale. 71.1% of the metaphors that had been considered highly apt (average rounded score of 4) in the context-less pairs received a more moderate judgment (average rounded score of 3), but the reverse movement was rare. Only 5% of pairs rated 3 out of context (2 pairs) were boosted to a mean rating of 4 in context. At the other end of the scale, 68.2% of the metaphors rated in the lowest aptness category (1) out of context were raised to a mean of 2 in context, while only 3.9% of pairs rated 2 out of context were lowered to 1 in context.
Ratings at the middle of the scale - 2 (defined as semantically related non-paraphrases) and 3 (imperfect or loose paraphrases) - remained largely stable, with little movement in either direction. 9.8% of pairs rated 2 were re-ranked as 3 when presented in context, and 10% of pairs ranked at 3 changed to 2. The division between 2 and 3 separates paraphrases from non-paraphrases. Our results suggest that this binary rating of paraphrase aptness was not strongly affected by context. Context operates at the extremes of our scale, raising low aptness ratings and lowering high aptness ratings. This effect is clearly indicated in the regression chart in Fig FIGREF15 .
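The compression pattern shown in the regression chart can be summarised by regressing the in-context mean ratings on the out-of-context ones: a slope below 1 together with a positive intercept indicates that low ratings are raised and high ratings are lowered. The sketch below assumes the two rating vectors are already paired by sentence pair.

import numpy as np
from scipy.stats import pearsonr, linregress

def compression_summary(out_of_context, in_context):
    # out_of_context, in_context: mean human ratings (1-4) for the same 200 pairs
    r = pearsonr(out_of_context, in_context)[0]     # ~0.81 for the two corpora reported above
    fit = linregress(out_of_context, in_context)
    return {"pearson_r": r, "slope": fit.slope, "intercept": fit.intercept}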
This effect of context on human ratings is very similar to the one reported in BIBREF5 . They find that sentences rated as ill formed out of context are improved when they are presented in their document contexts. However the mean ratings for sentences judged to be highly acceptable out of context declined when assessed in context. BIBREF5 's linear regression chart for the correlation between out-of-context and in-context acceptability judgments looks remarkably like our Fig FIGREF15 . There is, then, a striking parallel in the compression pattern that context appears to exert on human judgments for two entirely different linguistic properties.
This pattern requires an explanation. BIBREF5 suggest that adding context causes speakers to focus on broader semantic and pragmatic issues of discourse coherence, rather than simply judging syntactic well formedness (measured as naturalness) when a sentence is considered in isolation. On this view, compression of rating results from a pressure to construct a plausible interpretation for any sentence within its context.
If this is the case, an analogous process may generate the same compression effect for metaphor aptness assessment of sentence pairs in context. Speakers may attempt to achieve broader discourse coherence when assessing the metaphor-paraphrase aptness relation in a document context. Out of context they focus more narrowly on the semantic relations between a metaphorical sentence and its paraphrase candidate. Therefore, this relation is at the centre of a speaker's concern, and it receives more fine-grained assessment when considered out of context than in context. This issue clearly requires further research.
Modelling Paraphrase Judgments in Context
We use the DNN model described in BIBREF3 to predict aptness judgments for in-context paraphrase pairs. It has three main components:
The encoder for each pair of sentences taken as input is composed of two parallel "Atrous" Convolutional Neural Networks (CNNs) and LSTM RNNs, feeding two sequenced fully connected layers.
The encoder is preloaded with the lexical embeddings from Word2vec BIBREF6 . The sequences of word embeddings that we use as input provides the model with dense word-level information, while the model tries to generalize over these embedding patterns.
The combination of a CNN and an LSTM allows us to capture both long-distance syntactic and semantic relations, best identified by a CNN, and the sequential nature of the input, most efficiently identified by an LSTM. Several existing studies, cited in BIBREF4 , demonstrate the advantages of combining CNNs and LSTMs to process texts.
The model produces a single classifier value between 0 and 1. We transform this score into a binary output of 0 or 1 by applying a threshold of 0.5 for assigning 1.
The architecture of the model is given in Fig FIGREF19 .
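A rough Keras sketch of an encoder with this general shape is given below. The layer widths, sequence length, frozen embeddings, and the dense head are illustrative assumptions; the exact architecture is the one given in Fig FIGREF19 and BIBREF3 .

import numpy as np
from tensorflow.keras import layers, Model, initializers

def build_paraphrase_model(vocab_size, emb_matrix, max_len=50):
    # Each sentence gets its own encoder branch: a dilated ("atrous") Conv1D followed by an LSTM.
    def encoder(inp):
        x = layers.Embedding(vocab_size, emb_matrix.shape[1],
                             embeddings_initializer=initializers.Constant(emb_matrix),
                             trainable=False)(inp)                # preloaded word embeddings
        x = layers.Conv1D(64, 3, dilation_rate=2, padding="same", activation="relu")(x)
        return layers.LSTM(64)(x)

    metaphor = layers.Input(shape=(max_len,), dtype="int32")
    candidate = layers.Input(shape=(max_len,), dtype="int32")
    merged = layers.concatenate([encoder(metaphor), encoder(candidate)])
    h = layers.Dense(64, activation="relu")(merged)
    h = layers.Dense(32, activation="relu")(h)
    score = layers.Dense(1, activation="sigmoid")(h)              # single classifier value in [0, 1]
    model = Model([metaphor, candidate], score)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# Predictions at or above 0.5 are mapped to the positive (apt paraphrase) class.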
We use the same general protocol as BIBREF3 for training with supervised learning, and testing the model.
Using BIBREF3 's out-of- context metaphor dataset and our contextualized extension of this set, we apply four variants of the training and testing protocol.
When we train or test the model on the out-of-context dataset, we use BIBREF3 's original annotated corpus of 800 metaphor-paraphrase pairs. The in-context dataset contains 200 annotated pairs.
MPAT Modelling Results
We use the model both to predict binary classification of a metaphor paraphrase candidate, and to generate gradient aptness ratings on the 4 category scale (see BIBREF3 for details). A positive binary classification is counted as accurate if the pair's mean human rating is at or above 2.5. The gradient predictions are derived from the softmax distribution of the output layer of the model. The results of our modelling experiments are given in Table TABREF24 .
The main result that we obtain from these experiments is that the model learns binary classification to a reasonable extent on the in-context dataset, both when trained on the same kind of data (in-context pairs), and when trained on BIBREF3 's original dataset (out-of-context pairs). However, the model does not perform well in predicting gradient in-context judgments when trained on in-context pairs. It improves slightly for this task when trained on out-of-context pairs.
By contrast, it does well in predicting both binary and gradient ratings when trained and tested on out-of-context data sets.
BIBREF5 also note a decline in Pearson correlation for their DNN models on the task of predicting human in-context acceptability judgments, but it is less drastic. They attribute this decline to the fact that the compression effect renders the gradient judgments less separable, and so harder to predict. A similar, but more pronounced version of this effect may account for the difficulty that our model encounters in predicting gradient in-context ratings. The binary classifier achieves greater success for these cases because its training tends to polarise the data in one direction or the other.
We also observe that the best combination seems to consist in training our model on the original out-of-context dataset and testing it on the in-context pairs. In this configuration we reach an F-score (0.72) only slightly lower than the one reported in BIBREF3 (0.74), and we record the highest Pearson correlation, 0.3 (which is still not strong, compared to BIBREF3 's best run, 0.75). This result may partly be an artifact of the larger amount of training data provided by the out-of-context pairs.
We can use this variant (out-of-context training and in-context testing) to perform a fine-grained comparison of the model's predicted ratings for the same sentences in and out of context. When we do this, we observe that out of 200 sentence pairs, our model scores the majority (130 pairs) higher when processed in context than out of context. A smaller but significant group (70 pairs) receives a lower score when processed in context. The first group's average score before adding context (0.48) is consistently lower than that of the second group (0.68). Also, as Table TABREF26 indicates, the pairs that our model rated, out of context, with a score lower than 0.5 (on the model's softmax distribution), received on average a higher rating in context, while the opposite is true for the pairs rated with a score higher than 0.5. In general, sentence pairs that were rated highly out of context receive a lower score in context, and vice versa. When we did linear regression on the DNNs in and out of context predicted scores, we observed substantially the same compression pattern exhibited by our AMT mean human judgments. Figure FIGREF27 plots this regression graph.
Related Cognitive Work on Metaphor Aptness
BIBREF7 present ratings of aptness and comprehensibility for 64 metaphors from two groups of subjects. They note that metaphors were perceived as more apt and more comprehensible to the extent that their terms occupied similar positions within dissimilar domains. Interestingly, BIBREF8 also present experimental results to claim that imagery does not clearly correlate with metaphor aptness. Aptness judgments are also subject to individual differences.
BIBREF9 points to such individual differences in metaphor processing. She asked 27 participants to rate 37 metaphors for difficulty, aptness and familiarity, and to write one or more interpretations of the metaphor. Subjects with higher working memory span were able to give more detailed and elaborate interpretations of metaphors. Familiarity and aptness correlated with both high and low span subjects. For high span subjects aptness of metaphor positively correlated with number of interpretations, while for low span subjects the opposite was true.
BIBREF10 analyses the aptness of metaphors with and without extended context. She finds that domain similarity correlates with aptness judgments in isolated metaphors, but not in contextualized metaphors. She also reports that there is no clear correlation between metaphor aptness ratings in isolated and in contextualized examples. BIBREF0 study the relation between aptness and comprehensibility in metaphors and similes. They provide experimental results indicating that aptness is a better predictor than comprehensibility for the “transformation" of a simile into a metaphor. Subjects tended to remember similes as metaphors (i.e. remember the dancer's arms moved like startled rattlesnakes as the dancer's arms were startled rattlesnakes) if they were judged to be particularly apt, rather than particularly comprehensible. They claim that context might play an important role in this process. They suggest that context should ease the transparency and increase the aptness of both metaphors and similes.
BIBREF11 present a series of experiments indicating that metaphors tend to be interpreted through emergent features that were not rated as particularly relevant, either for the tenor or for the vehicle of the metaphor. The number of emergent features that subjects were able to draw from a metaphor seems to correlate with their aptness judgments.
BIBREF12 use Event-Related Brain Potentials (ERPs) to study the temporal dynamics of metaphor processing in reading literary texts. They emphasize the influence of context on the ability of a reader to smoothly interpret an unusual metaphor.
BIBREF13 use electrophysiological experiments to try to disentangle the effect of a metaphor from that of its context. They find that de-contextualized metaphors elicited two different brain responses, N400 and P600, while contextualized metaphors only produced the P600 effect. They attribute the N400 effect, often observed in neurological studies of metaphors, to expectations about upcoming words in the absence of a predictive context that “prepares" the reader for the metaphor. They suggest that the P600 effect reflects the actual interpretative processing of the metaphor.
This view is supported by several neurological studies showing that the P600 effect arises with unexpected elements, like new presuppositions introduced into a text in a way not implied by the context BIBREF14 , or unexpected associations with a noun-verb combination, not indicated by previous context (for example preceded by neutral context, as in BIBREF15 ).
Conclusions and Future Work
We have observed that embedding metaphorical sentences and their paraphrase candidates in a document context generates a compression effect in human metaphor aptness ratings. Context seems to mitigate the perceived aptness of metaphors in two ways. Those metaphor-paraphrase pairs given very low scores out of context receive increased scores in context, while those with very high scores out of context decline in rating when presented in context. At the same time, the demarcation line between paraphrase and non-paraphrase is not particularly affected by the introduction of extended context.
As previously observed by BIBREF10 , we found that context has an influence on human aptness ratings for metaphors, although, unlike her results, we did find a correlation between the two sets of ratings. BIBREF0 's expectation that context should facilitate a metaphor's aptness was supported only in one sense. Aptness increases for low-rated pairs. But it decreases for high-rated pairs.
We applied BIBREF3 's DNN for the MPAT to an in-context test set, experimenting with both out-of-context and in-context training corpora. We obtained reasonable results for binary classification of paraphrase candidates for aptness, but the performance of the model declined sharply for the prediction of human gradient aptness judgments, relative to its performance on a corresponding out-of-context test set. This appears to be the result of the increased difficulty in separating rating categories introduced by the compression effect.
Strikingly, the linear regression analyses of human aptness judgments for in- and out-of-context paraphrase pairs, and of our DNN's predictions for these pairs reveal similar compression patterns. These patterns produce ratings that cannot be clearly separated along a linear ranking scale.
To the best of our knowledge ours is the first study of the effect of context on metaphor aptness on a corpus of this dimension, using crowd sourced human judgments as the gold standard for assessing the predictions of a computational model of paraphrase. We also present the first comparative study of both human and model judgments of metaphor paraphrase for in-context and out-of-context variants of metaphorical sentences.
Finally, the compression effect that context induces on paraphrase judgments corresponds closely to the one observed independently in another task, which is reported in BIBREF5 . We regard this effect as a significant discovery that increases the plausibility and the interest of our results. The fact that it appears clearly with two tasks involving different sorts of DNNs and distinct learning regimes (unsupervised learning with neural network language models for the acceptability prediction task discussed, as opposed to supervised learning with our composite DNN for paraphrase prediction) reduces the likelihood that this effect is an artefact of our experimental design.
While our dataset is still small, we are presenting an initial investigation of a phenomenon which is, to date, little studied. We are working to enlarge our dataset and in future work we will expand both our in- and out-of-context annotated metaphor-paraphrase corpora.
While the corpus we used contains a number of hand crafted examples, it would be preferable to find these example types in natural corpora, and we are currently working on this. We will be extracting a dataset of completely natural (corpus-driven) examples. We are seeking to expand the size of the data set to improve the reliability of our modelling experiments.
We will also experiment with alternative DNN architectures for the MPAT. We will conduct qualitative analyses on the kinds of metaphors and similes that are more prone to a context-induced rating switch.
One of our main concerns in future research will be to achieve a better understanding of the compression effect of context on human judgments and DNN models. | Unanswerable |
6b7d76c1e1a2490beb69609ba5652476dde8831b | 6b7d76c1e1a2490beb69609ba5652476dde8831b_0 | Q: What provisional explanation do the authors give for the impact of document context?
Text: Introduction
A metaphor is a way of forcing the normal boundaries of a word's meaning in order to better express an experience, a concept or an idea. To a native speaker's ear some metaphors sound more conventional (like the usage of the words ear and sound in this sentence), others more original. This is not the only dimension along which to judge a metaphor. One of the most important qualities of a metaphor is its appropriateness, its aptness: how good is a metaphor for conveying a given experience or concept. While a metaphor's degree of conventionality can be measured through probabilistic methods, like language models, it is harder to represent its aptness. BIBREF0 define aptness as “the extent to which a comparison captures important features of the topic".
It is possible to express an opinion about some metaphors' and similes' aptness (at least to a degree) without previously knowing what they are trying to convey, or the context in which they appear. For example, we don't need a particular context or frame of reference to construe the simile She was screaming like a turtle as strange, and less apt for expressing the quality of a scream than She was screaming like a banshee. In this case, the reason why the simile in the second sentence works best is intuitive. A salient characteristic of a banshee is a powerful scream. Turtles are not known for screaming, and so it is harder to define the quality of a scream through such a comparison, except as a form of irony. Other cases are more complicated to decide upon. The simile crying like a fire in the sun (It's All Over Now, Baby Blue, Bob Dylan) is powerfully apt for many readers, but simply odd for others. Fire and sun are not known to cry in any way. But at the same time the simile can capture the association we draw between something strong and intense in other senses - vision, touch, etc. - and a loud cry.
Nonetheless, most metaphors and similes need some kind of context, or external reference point to be interpreted. The sentence The old lady had a heart of stone is apt if the old lady is cruel or indifferent, but it is inappropriate as a description of a situation in which the old lady is kind and caring. We assume that, to an average reader's sensibility, the sentence models the situation in a satisfactory way only in the first case.
This is the approach to metaphor aptness that we assume in this paper. Following BIBREF3 , we treat a metaphor as apt in relation to a literal expression that it paraphrases. If the metaphor is judged to be a good paraphrase, then it closely expresses the core information of the literal sentence through its metaphorical shift. We refer to the prediction of readers' judgments on the aptness candidates for the literal paraphrase of a metaphor as the metaphor paraphrase aptness task (MPAT). BIBREF3 address the MPAT by using Amazon Mechanical Turk (AMT) to obtain crowd sourced annotations of metaphor-paraphrase candidate pairs. They train a composite Deep Neural Network (DNN) on a portion of their annotated corpus, and test it on the remaining part. Testing involves using the DNN as a binary classifier on paraphrase candidates. They derive predictions of gradient paraphrase aptness for their test set, and assess them by Pearson coefficient correlation to the mean judgments of their crowd sourced annotation of this set. Both training and testing are done independently of any document context for the metaphorical sentence and its literal paraphrase candidates.
In this paper we study the role of context on readers' judgments concerning the aptness of metaphor paraphrase candidates. We look at the accuracy of BIBREF3 's DNN when trained and tested on contextually embedded metaphor-paraphrase pairs for the MPAT. In Section SECREF2 we describe an AMT experiment in which annotators judge metaphors and paraphrases embodied in small document contexts, and in Section SECREF3 we discuss the results of this experiment. In Section SECREF4 we describe our MPAT modeling experiment, and in Section SECREF5 we discuss the results of this experiment. Section SECREF6 briefly surveys some related work. In Section SECREF7 we draw conclusions from our study, and we indicate directions for future work in this area.
Annotating Metaphor-Paraphrase Pairs in Contexts
BIBREF3 have recently produced a dataset of paraphrases containing metaphors designed to allow both supervised binary classification and gradient ranking. This dataset contains several pairs of sentences, where in each pair the first sentence contains a metaphor, and the second is a literal paraphrase candidate.
This corpus was constructed with a view to representing a large variety of syntactic structures and semantic phenomena in metaphorical sentences. Many of these structures and phenomena do not occur as metaphorical expressions, with any frequency, in natural text and were therefore introduced through hand crafted examples.
Each pair of sentences in the corpus has been rated by AMT annotators for paraphrase aptness on a scale of 1-4, with 4 being the highest degree of aptness. In BIBREF3 's dataset, sentences come in groups of five, where the first element is the “reference element" with a metaphorical expression, and the remaining four sentences are “candidates" that stand in a degree of paraphrasehood to the reference. Here is an example of a metaphor-paraphrase candidate pair.
The average AMT paraphrase score for this pair is 4.0, indicating a high degree of aptness.
We extracted 200 sentence pairs from BIBREF3 's dataset and provided each pair with a document context consisting of a preceding and a following sentence, as in the following example.
One of the authors constructed most of these contexts by hand. In some cases, it was possible to locate the original metaphor in an existing document. This was the case for
For these cases, a variant of the existing context was added to both the metaphorical and the literal sentences. We introduced small modifications to keep the context short and clear, and to avoid copyright issues. We lightly modified the contexts of metaphors extracted from corpora when the original context was too long, i.e. when the contextual sentences of the selected metaphor were longer than the maximum length we specified for our corpus. In such cases we reduced the length of the sentence, while sustaining its meaning.
The context was designed to sound as natural as possible. Since the same context is used for metaphors and their literal candidate paraphrases, we tried to design short contexts that make sense for both the figurative and the literal sentences, even when the pair had been judged as non-paraphrases. We kept the context as neutral as possible in order to avoid a distortion in crowd source ratings.
For example, in the following pair of sentences, the literal sentence is not a good paraphrase of the figurative one (a simile).
We opted for a context that is natural for both sentences.
We sought to avoid, whenever possible, an incongruous context for one of the sentences that could influence our annotators' ratings.
We collected a sub-corpus of 200 contextually embedded pairs of sentences. We tried to keep our data as balanced as possible, drawing from all four rating classes of paraphrase aptness ratings (between 1 to 4) that BIBREF3 obtained. We selected 44 pairs of 1 ratings, 51 pairs of 2, 43 pairs of 3 and 62 pairs of 4.
We then used AMT crowd sourcing to rate the contextualized paraphrase pairs, so that we could observe the effect of document context on assessments of metaphor paraphrase aptness.
To test the reproducibility of BIBREF3 's ratings, we launched a pilot study for 10 original non-contextually embedded pairs, selected from all four classes of aptness. We observed that the annotators provided mean ratings very similar to those reported in BIBREF3 . The Pearson correlation coefficient between the mean judgments of our out-of-context pilot annotations and BIBREF3 's annotations for the same pairs was over 0.9. We then conducted an AMT annotation task for the 200 contextualised pairs. On average, 20 different annotators rated each pair. We considered as “rogue" those annotators who rated the large majority of pairs with very high or very low scores, and those who responded inconsistently to two “trap" pairs. After filtering out the rogues, we had an average of 14 annotators per pair.
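This filtering step can be sketched in a few lines. The sketch below is an illustration rather than the pipeline actually used for the study: the file name, the column layout, the 90% cut-off for mostly-extreme raters, and the identities, gold answers and tolerance for the trap pairs are all assumptions introduced for the example.

```python
import pandas as pd

# Hypothetical layout: one row per (annotator, pair) judgment on the 1-4 scale.
ratings = pd.read_csv("amt_in_context_ratings.csv")   # columns: annotator, pair_id, rating
TRAP_GOLD = {"trap_high": 4, "trap_low": 1}            # assumed gold ratings for the two trap pairs
TRAP_IDS = list(TRAP_GOLD)

def is_rogue(judgments, extreme_share=0.9, trap_tolerance=1):
    """Flag annotators who rate almost everything 1 or 4, or who miss the trap pairs badly."""
    regular = judgments[~judgments.pair_id.isin(TRAP_IDS)]
    mostly_extreme = regular.rating.isin([1, 4]).mean() >= extreme_share
    traps = judgments[judgments.pair_id.isin(TRAP_IDS)]
    missed_traps = any(abs(row.rating - TRAP_GOLD[row.pair_id]) > trap_tolerance
                       for row in traps.itertuples())
    return mostly_extreme or missed_traps

rogues = {name for name, group in ratings.groupby("annotator") if is_rogue(group)}
clean = ratings[~ratings.annotator.isin(rogues) & ~ratings.pair_id.isin(TRAP_IDS)]

# Mean in-context aptness rating per pair, used in the analyses that follow.
mean_in_context = clean.groupby("pair_id").rating.mean()
```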
Annotation Results
We found a Pearson correlation of 0.81 between the in-context and out-of-context mean human paraphrase ratings for our two corpora. This correlation is virtually identical to the one that BIBREF5 report for mean acceptability ratings of out-of-context to in-context sentences in their crowd source experiment. It is interesting that a relatively high level of ranking correspondence should occur in mean judgments for sentences presented out of and within document contexts, for two entirely distinct tasks.
Our main result concerns the effect of context on mean paraphrase judgment. We observed that it tends to flatten aptness ratings towards the center of the rating scale. 71.1% of the metaphors that had been considered highly apt (average rounded score of 4) in the context-less pairs received a more moderate judgment (average rounded score of 3), but the reverse movement was rare. Only 5% of pairs rated 3 out of context (2 pairs) were boosted to a mean rating of 4 in context. At the other end of the scale, 68.2% of the metaphors judged to be in the lowest aptness category (1) out of context were raised to a mean of 2 in context, while only 3.9% of pairs rated 2 out of context were lowered to 1 in context.
Ratings at the middle of the scale - 2 (defined as semantically related non-paraphrases) and 3 (imperfect or loose paraphrases) - remained largely stable, with little movement in either direction. 9.8% of pairs rated 2 were re-ranked as 3 when presented in context, and 10% of pairs ranked at 3 changed to 2. The division between 2 and 3 separates paraphrases from non-paraphrases. Our results suggest that this binary rating of paraphrase aptness was not strongly affected by context. Context operates at the extremes of our scale, raising low aptness ratings and lowering high aptness ratings. This effect is clearly indicated in the regression chart in Fig FIGREF15 .
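These figures can be recomputed from two aligned vectors of mean ratings for the 200 pairs. The sketch below assumes the means have been exported to plain-text files (the file names are hypothetical); it reports the overall Pearson correlation and the share of pairs moving from each rounded out-of-context category to each rounded in-context category.

```python
import numpy as np
from scipy.stats import pearsonr

# Assumed inputs: mean ratings for the same 200 pairs, in the same order.
out_ctx = np.loadtxt("mean_ratings_out_of_context.txt")   # hypothetical file names
in_ctx = np.loadtxt("mean_ratings_in_context.txt")

r, _ = pearsonr(out_ctx, in_ctx)
print(f"Pearson r (out-of-context vs. in-context means): {r:.2f}")

# Movement between rounded rating categories (1-4).
out_cat = np.clip(np.rint(out_ctx), 1, 4).astype(int)
in_cat = np.clip(np.rint(in_ctx), 1, 4).astype(int)
for source in range(1, 5):
    targets = in_cat[out_cat == source]
    for target in range(1, 5):
        if len(targets) and (targets == target).any():
            print(f"{source} -> {target}: {100 * np.mean(targets == target):.1f}% of pairs")
```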
This effect of context on human ratings is very similar to the one reported in BIBREF5 . They find that sentences rated as ill formed out of context are improved when they are presented in their document contexts. However the mean ratings for sentences judged to be highly acceptable out of context declined when assessed in context. BIBREF5 's linear regression chart for the correlation between out-of-context and in-context acceptability judgments looks remarkably like our Fig FIGREF15 . There is, then, a striking parallel in the compression pattern that context appears to exert on human judgments for two entirely different linguistic properties.
This pattern requires an explanation. BIBREF5 suggest that adding context causes speakers to focus on broader semantic and pragmatic issues of discourse coherence, rather than simply judging syntactic well formedness (measured as naturalness) when a sentence is considered in isolation. On this view, compression of rating results from a pressure to construct a plausible interpretation for any sentence within its context.
If this is the case, an analogous process may generate the same compression effect for metaphor aptness assessment of sentence pairs in context. Speakers may attempt to achieve broader discourse coherence when assessing the metaphor-paraphrase aptness relation in a document context. Out of context they focus more narrowly on the semantic relations between a metaphorical sentence and its paraphrase candidate. Therefore, this relation is at the centre of a speaker's concern, and it receives more fine-grained assessment when considered out of context than in context. This issue clearly requires further research.
Modelling Paraphrase Judgments in Context
We use the DNN model described in BIBREF3 to predict aptness judgments for in-context paraphrase pairs. It has three main components:
The encoder for each pair of sentences taken as input is composed of two parallel "Atrous" Convolutional Neural Networks (CNNs) and LSTM RNNs, feeding two sequenced fully connected layers.
The encoder is preloaded with the lexical embeddings from Word2vec BIBREF6 . The sequences of word embeddings that we use as input provide the model with dense word-level information, while the model tries to generalize over these embedding patterns.
The combination of a CNN and an LSTM allows us to capture both long-distance syntactic and semantic relations, best identified by a CNN, and the sequential nature of the input, most efficiently identified by an LSTM. Several existing studies, cited in BIBREF4 , demonstrate the advantages of combining CNNs and LSTMs to process texts.
The model produces a single classifier value between 0 and 1. We transform this score into a binary output of 0 or 1 by applying a threshold of 0.5 for assigning 1.
The architecture of the model is given in Fig FIGREF19 .
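For concreteness, a minimal sketch of this kind of network in the Keras functional API is given below. It is not the authors' implementation: the layer sizes, the single dilated convolution standing in for the atrous CNN, the decision to share one encoder between the two sentences, and the randomly initialised embedding layer (rather than preloaded Word2vec vectors) are all simplifying assumptions.

```python
from tensorflow.keras import Model, layers

MAX_LEN, VOCAB_SIZE, EMB_DIM = 50, 50_000, 300            # assumed sizes

def sentence_encoder():
    """Parallel dilated-CNN and LSTM branches over word embeddings."""
    tokens = layers.Input(shape=(MAX_LEN,), dtype="int32")
    # The paper preloads Word2vec vectors here; random initialisation keeps the sketch short.
    emb = layers.Embedding(VOCAB_SIZE, EMB_DIM)(tokens)
    conv = layers.Conv1D(128, 3, dilation_rate=2, padding="same", activation="relu")(emb)
    conv = layers.GlobalMaxPooling1D()(conv)
    lstm = layers.LSTM(128)(emb)
    return Model(tokens, layers.concatenate([conv, lstm]))

encoder = sentence_encoder()               # assumption: one encoder shared by both sentences
metaphor = layers.Input(shape=(MAX_LEN,), dtype="int32", name="metaphor")
candidate = layers.Input(shape=(MAX_LEN,), dtype="int32", name="paraphrase_candidate")

pair = layers.concatenate([encoder(metaphor), encoder(candidate)])
hidden = layers.Dense(128, activation="relu")(pair)        # two sequenced fully connected layers
hidden = layers.Dense(64, activation="relu")(hidden)
score = layers.Dense(1, activation="sigmoid")(hidden)      # single classifier value in [0, 1]

model = Model([metaphor, candidate], score)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```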
We use the same general protocol as BIBREF3 for training with supervised learning, and testing the model.
Using BIBREF3 's out-of-context metaphor dataset and our contextualized extension of this set, we apply four variants of the training and testing protocol.
When we train or test the model on the out-of-context dataset, we use BIBREF3 's original annotated corpus of 800 metaphor-paraphrase pairs. The in-context dataset contains 200 annotated pairs.
MPAT Modelling Results
We use the model both to predict binary classification of a metaphor paraphrase candidate, and to generate gradient aptness ratings on the 4 category scale (see BIBREF3 for details). A positive binary classification is accurate if the pair's mean human rating is at least 2.5. The gradient predictions are derived from the softmax distribution of the output layer of the model. The results of our modelling experiments are given in Table TABREF24 .
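This evaluation can be sketched as follows. The array files are hypothetical placeholders, and the raw classifier score is used here as a stand-in for the gradient prediction that the paper derives from the output distribution.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import f1_score

scores = np.load("model_scores.npy")              # model output in [0, 1], one value per test pair
human_means = np.load("mean_human_ratings.npy")   # mean AMT ratings on the 1-4 scale

# Binary evaluation: 0.5 threshold on the model side, 2.5 on the human side.
predicted = (scores >= 0.5).astype(int)
gold = (human_means >= 2.5).astype(int)
print("F-score:", round(f1_score(gold, predicted), 2))

# Gradient evaluation: correlation between model scores and mean human ratings.
correlation, _ = pearsonr(scores, human_means)
print("Pearson r:", round(correlation, 2))
```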
The main result that we obtain from these experiments is that the model learns binary classification to a reasonable extent on the in-context dataset, both when trained on the same kind of data (in-context pairs), and when trained on BIBREF3 's original dataset (out-of-context pairs). However, the model does not perform well in predicting gradient in-context judgments when trained on in-context pairs. It improves slightly for this task when trained on out-of-context pairs.
By contrast, it does well in predicting both binary and gradient ratings when trained and tested on out-of-context data sets.
BIBREF5 also note a decline in Pearson correlation for their DNN models on the task of predicting human in-context acceptability judgments, but it is less drastic. They attribute this decline to the fact that the compression effect renders the gradient judgments less separable, and so harder to predict. A similar, but more pronounced version of this effect may account for the difficulty that our model encounters in predicting gradient in-context ratings. The binary classifier achieves greater success for these cases because its training tends to polarise the data in one direction or the other.
We also observe that the best combination seems to consist in training our model on the original out-of-context dataset and testing it on the in-context pairs. In this configuration we reach an F-score (0.72) only slightly lower than the one reported in BIBREF3 (0.74), and we record the highest Pearson correlation, 0.3 (which is still not strong, compared to BIBREF3 's best run, 0.75). This result may partly be an artifact of the larger amount of training data provided by the out-of-context pairs.
We can use this variant (out-of-context training and in-context testing) to perform a fine-grained comparison of the model's predicted ratings for the same sentences in and out of context. When we do this, we observe that out of 200 sentence pairs, our model scores the majority (130 pairs) higher when processed in context than out of context. A smaller but significant group (70 pairs) receives a lower score when processed in context. The first group's average score before adding context (0.48) is consistently lower than that of the second group (0.68). Also, as Table TABREF26 indicates, the pairs that our model rated, out of context, with a score lower than 0.5 (on the model's softmax distribution), received on average a higher rating in context, while the opposite is true for the pairs rated with a score higher than 0.5. In general, sentence pairs that were rated highly out of context receive a lower score in context, and vice versa. When we did linear regression on the DNN's in-context and out-of-context predicted scores, we observed substantially the same compression pattern exhibited by our AMT mean human judgments. Figure FIGREF27 plots this regression graph.
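The regression behind Figure FIGREF27 takes only a few lines to reproduce; again, the score files are hypothetical placeholders for the model's per-pair outputs.

```python
import numpy as np
from scipy.stats import linregress

out_ctx_scores = np.load("dnn_scores_out_of_context.npy")   # hypothetical files, aligned by pair
in_ctx_scores = np.load("dnn_scores_in_context.npy")

fit = linregress(out_ctx_scores, in_ctx_scores)
print(f"slope={fit.slope:.2f}, intercept={fit.intercept:.2f}, r={fit.rvalue:.2f}")

# A slope below 1 together with a positive intercept is the compression pattern:
# low out-of-context scores are pulled up, high ones are pulled down.
raised = int(np.sum(in_ctx_scores > out_ctx_scores))
print(f"{raised} of {len(in_ctx_scores)} pairs receive a higher score in context")
```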
Related Cognitive Work on Metaphor Aptness
BIBREF7 present ratings of aptness and comprehensibility for 64 metaphors from two groups of subjects. They note that metaphors were perceived as more apt and more comprehensible to the extent that their terms occupied similar positions within dissimilar domains. Interestingly, BIBREF8 also present experimental results to claim that imagery does not clearly correlate with metaphor aptness. Aptness judgments are also subject to individual differences.
BIBREF9 points to such individual differences in metaphor processing. She asked 27 participants to rate 37 metaphors for difficulty, aptness and familiarity, and to write one or more interpretations of the metaphor. Subjects with higher working memory span were able to give more detailed and elaborate interpretations of metaphors. Familiarity and aptness correlated for both high and low span subjects. For high span subjects, aptness of metaphor positively correlated with the number of interpretations, while for low span subjects the opposite was true.
BIBREF10 analyses the aptness of metaphors with and without extended context. She finds that domain similarity correlates with aptness judgments in isolated metaphors, but not in contextualized metaphors. She also reports that there is no clear correlation between metaphor aptness ratings in isolated and in contextualized examples. BIBREF0 study the relation between aptness and comprehensibility in metaphors and similes. They provide experimental results indicating that aptness is a better predictor than comprehensibility for the “transformation" of a simile into a metaphor. Subjects tended to remember similes as metaphors (i.e. remember the dancer's arms moved like startled rattlesnakes as the dancer's arms were startled rattlesnakes) if they were judged to be particularly apt, rather than particularly comprehensible. They claim that context might play an important role in this process. They suggest that context should ease the transparency and increase the aptness of both metaphors and similes.
BIBREF11 present a series of experiments indicating that metaphors tend to be interpreted through emergent features that were not rated as particularly relevant, either for the tenor or for the vehicle of the metaphor. The number of emergent features that subjects were able to draw from a metaphor seems to correlate with their aptness judgments.
BIBREF12 use Event-Related Brain Potentials (ERPs) to study the temporal dynamics of metaphor processing in reading literary texts. They emphasize the influence of context on the ability of a reader to smoothly interpret an unusual metaphor.
BIBREF13 use electrophysiological experiments to try to disentangle the effect of a metaphor from that of its context. They find that de-contextualized metaphors elicited two different brain responses, INLINEFORM0 and INLINEFORM1 , while contextualized metaphors only produced the INLINEFORM2 effect. They attribute the INLINEFORM3 effect, often observed in neurological studies of metaphors, to expectations about upcoming words in the absence of a predictive context that “prepares" the reader for the metaphor. They suggest that the INLINEFORM4 effect reflects the actual interpretative processing of the metaphor.
This view is supported by several neurological studies showing that the INLINEFORM0 effect arises with unexpected elements, like new presuppositions introduced into a text in a way not implied by the context BIBREF14 , or unexpected associations with a noun-verb combination, not indicated by previous context (for example preceded by neutral context, as in BIBREF15 ).
Conclusions and Future Work
We have observed that embedding metaphorical sentences and their paraphrase candidates in a document context generates a compression effect in human metaphor aptness ratings. Context seems to mitigate the perceived aptness of metaphors in two ways. Those metaphor-paraphrase pairs given very low scores out of context receive increased scores in context, while those with very high scores out of context decline in rating when presented in context. At the same time, the demarcation line between paraphrase and non-paraphrase is not particularly affected by the introduction of extended context.
As previously observed by BIBREF10 , we found that context has an influence on human aptness ratings for metaphors, although, unlike her results, we did find a correlation between the two sets of ratings. BIBREF0 's expectation that context should facilitate a metaphor's aptness was supported only in one sense. Aptness increases for low-rated pairs. But it decreases for high-rated pairs.
We applied BIBREF3 's DNN for the MPAT to an in-context test set, experimenting with both out-of-context and in-context training corpora. We obtained reasonable results for binary classification of paraphrase candidates for aptness, but the performance of the model declined sharply for the prediction of human gradient aptness judgments, relative to its performance on a corresponding out-of-context test set. This appears to be the result of the increased difficulty in separating rating categories introduced by the compression effect.
Strikingly, the linear regression analyses of human aptness judgments for in- and out-of-context paraphrase pairs, and of our DNN's predictions for these pairs reveal similar compression patterns. These patterns produce ratings that cannot be clearly separated along a linear ranking scale.
To the best of our knowledge ours is the first study of the effect of context on metaphor aptness on a corpus of this dimension, using crowd sourced human judgments as the gold standard for assessing the predictions of a computational model of paraphrase. We also present the first comparative study of both human and model judgments of metaphor paraphrase for in-context and out-of-context variants of metaphorical sentences.
Finally, the compression effect that context induces on paraphrase judgments corresponds closely to the one observed independently in another task, which is reported in BIBREF5 . We regard this effect as a significant discovery that increases the plausibility and the interest of our results. The fact that it appears clearly with two tasks involving different sorts of DNNs and distinct learning regimes (unsupervised learning with neural network language models for the acceptability prediction task discussed, as opposed to supervised learning with our composite DNN for paraphrase prediction) reduces the likelihood that this effect is an artefact of our experimental design.
While our dataset is still small, we are presenting an initial investigation of a phenomenon which is, to date, little studied. We are working to enlarge our dataset and in future work we will expand both our in- and out-of-context annotated metaphor-paraphrase corpora.
While the corpus we used contains a number of hand crafted examples, it would be preferable to find these example types in natural corpora, and we are currently working on this. We will be extracting a dataset of completely natural (corpus-driven) examples. We are seeking to expand the size of the data set to improve the reliability of our modelling experiments.
We will also experiment with alternative DNN architectures for the MPAT. We will conduct qualitative analyses on the kinds of metaphors and similes that are more prone to a context-induced rating switch.
One of our main concerns in future research will be to achieve a better understanding of the compression effect of context on human judgments and DNN models. | adding context causes speakers to focus on broader semantic and pragmatic issues of discourse coherence |
37753fbffc06ce7de6ada80c89f1bf5f190bbd88 | 37753fbffc06ce7de6ada80c89f1bf5f190bbd88_0 | Q: What document context was added?
Text: Introduction
A metaphor is a way of forcing the normal boundaries of a word's meaning in order to better express an experience, a concept or an idea. To a native speaker's ear some metaphors sound more conventional (like the usage of the words ear and sound in this sentence), others more original. This is not the only dimension along which to judge a metaphor. One of the most important qualities of a metaphor is its appropriateness, its aptness: how good is a metaphor for conveying a given experience or concept. While a metaphor's degree of conventionality can be measured through probabilistic methods, like language models, it is harder to represent its aptness. BIBREF0 define aptness as “the extent to which a comparison captures important features of the topic".
It is possible to express an opinion about some metaphors' and similes' aptness (at least to a degree) without previously knowing what they are trying to convey, or the context in which they appear. For example, we don't need a particular context or frame of reference to construe the simile She was screaming like a turtle as strange, and less apt for expressing the quality of a scream than She was screaming like a banshee. In this case, the reason why the simile in the second sentence works best is intuitive. A salient characteristic of a banshee is a powerful scream. Turtles are not known for screaming, and so it is harder to define the quality of a scream through such a comparison, except as a form of irony. Other cases are more complicated to decide upon. The simile crying like a fire in the sun (It's All Over Now, Baby Blue, Bob Dylan) is powerfully apt for many readers, but simply odd for others. Fire and sun are not known to cry in any way. But at the same time the simile can capture the association we draw between something strong and intense in other senses - vision, touch, etc. - and a loud cry.
Nonetheless, most metaphors and similes need some kind of context, or external reference point to be interpreted. The sentence The old lady had a heart of stone is apt if the old lady is cruel or indifferent, but it is inappropriate as a description of a situation in which the old lady is kind and caring. We assume that, to an average reader's sensibility, the sentence models the situation in a satisfactory way only in the first case.
This is the approach to metaphor aptness that we assume in this paper. Following BIBREF3 , we treat a metaphor as apt in relation to a literal expression that it paraphrases. If the metaphor is judged to be a good paraphrase, then it closely expresses the core information of the literal sentence through its metaphorical shift. We refer to the prediction of readers' judgments on the aptness candidates for the literal paraphrase of a metaphor as the metaphor paraphrase aptness task (MPAT). BIBREF3 address the MPAT by using Amazon Mechanical Turk (AMT) to obtain crowd sourced annotations of metaphor-paraphrase candidate pairs. They train a composite Deep Neural Network (DNN) on a portion of their annotated corpus, and test it on the remaining part. Testing involves using the DNN as a binary classifier on paraphrase candidates. They derive predictions of gradient paraphrase aptness for their test set, and assess them by Pearson coefficient correlation to the mean judgments of their crowd sourced annotation of this set. Both training and testing are done independently of any document context for the metaphorical sentence and its literal paraphrase candidates.
In this paper we study the role of context on readers' judgments concerning the aptness of metaphor paraphrase candidates. We look at the accuracy of BIBREF3 's DNN when trained and tested on contextually embedded metaphor-paraphrase pairs for the MPAT. In Section SECREF2 we describe an AMT experiment in which annotators judge metaphors and paraphrases embodied in small document contexts, and in Section SECREF3 we discuss the results of this experiment. In Section SECREF4 we describe our MPAT modeling experiment, and in Section SECREF5 we discuss the results of this experiment. Section SECREF6 briefly surveys some related work. In Section SECREF7 we draw conclusions from our study, and we indicate directions for future work in this area.
Annotating Metaphor-Paraphrase Pairs in Contexts
BIBREF3 have recently produced a dataset of paraphrases containing metaphors designed to allow both supervised binary classification and gradient ranking. This dataset contains several pairs of sentences, where in each pair the first sentence contains a metaphor, and the second is a literal paraphrase candidate.
This corpus was constructed with a view to representing a large variety of syntactic structures and semantic phenomena in metaphorical sentences. Many of these structures and phenomena do not occur as metaphorical expressions, with any frequency, in natural text and were therefore introduced through hand crafted examples.
Each pair of sentences in the corpus has been rated by AMT annotators for paraphrase aptness on a scale of 1-4, with 4 being the highest degree of aptness. In BIBREF3 's dataset, sentences come in groups of five, where the first element is the “reference element" with a metaphorical expression, and the remaining four sentences are “candidates" that stand in a degree of paraphrasehood to the reference. Here is an example of a metaphor-paraphrase candidate pair.
The average AMT paraphrase score for this pair is 4.0, indicating a high degree of aptness.
We extracted 200 sentence pairs from BIBREF3 's dataset and provided each pair with a document context consisting of a preceding and a following sentence, as in the following example.
One of the authors constructed most of these contexts by hand. In some cases, it was possible to locate the original metaphor in an existing document. This was the case for
For these cases, a variant of the existing context was added to both the metaphorical and the literal sentences. We introduced small modifications to keep the context short and clear, and to avoid copyright issues. We lightly modified the contexts of metaphors extracted from corpora when the original context was too long, i.e. when the contextual sentences of the selected metaphor were longer than the maximum length we specified for our corpus. In such cases we reduced the length of the sentence, while sustaining its meaning.
The context was designed to sound as natural as possible. Since the same context is used for metaphors and their literal candidate paraphrases, we tried to design short contexts that make sense for both the figurative and the literal sentences, even when the pair had been judged as non-paraphrases. We kept the context as neutral as possible in order to avoid a distortion in crowd source ratings.
For example, in the following pair of sentences, the literal sentence is not a good paraphrase of the figurative one (a simile).
We opted for a context that is natural for both sentences.
We sought to avoid, whenever possible, an incongruous context for one of the sentences that could influence our annotators' ratings.
We collected a sub-corpus of 200 contextually embedded pairs of sentences. We tried to keep our data as balanced as possible, drawing from all four rating classes of paraphrase aptness ratings (between 1 to 4) that BIBREF3 obtained. We selected 44 pairs of 1 ratings, 51 pairs of 2, 43 pairs of 3 and 62 pairs of 4.
We then used AMT crowd sourcing to rate the contextualized paraphrase pairs, so that we could observe the effect of document context on assessments of metaphor paraphrase aptness.
To test the reproducibility of BIBREF3 's ratings, we launched a pilot study for 10 original non-contextually embedded pairs, selected from all four classes of aptness. We observed that the annotators provided mean ratings very similar to those reported in BIBREF3 . The Pearson correlation coefficient between the mean judgments of our out-of-context pilot annotations and BIBREF3 's annotations for the same pairs was over 0.9. We then conducted an AMT annotation task for the 200 contextualised pairs. On average, 20 different annotators rated each pair. We considered as “rogue" those annotators who rated the large majority of pairs with very high or very low scores, and those who responded inconsistently to two “trap" pairs. After filtering out the rogues, we had an average of 14 annotators per pair.
Annotation Results
We found a Pearson correlation of 0.81 between the in-context and out-of-context mean human paraphrase ratings for our two corpora. This correlation is virtually identical to the one that BIBREF5 report for mean acceptability ratings of out-of-context to in-context sentences in their crowd source experiment. It is interesting that a relatively high level of ranking correspondence should occur in mean judgments for sentences presented out of and within document contexts, for two entirely distinct tasks.
Our main result concerns the effect of context on mean paraphrase judgment. We observed that it tends to flatten aptness ratings towards the center of the rating scale. 71.1% of the metaphors that had been considered highly apt (average rounded score of 4) in the context-less pairs received a more moderate judgment (average rounded score of 3), but the reverse movement was rare. Only 5% of pairs rated 3 out of context (2 pairs) were boosted to a mean rating of 4 in context. At the other end of the scale, 68.2% of the metaphors judged to be in the lowest aptness category (1) out of context were raised to a mean of 2 in context, while only 3.9% of pairs rated 2 out of context were lowered to 1 in context.
Ratings at the middle of the scale - 2 (defined as semantically related non-paraphrases) and 3 (imperfect or loose paraphrases) - remained largely stable, with little movement in either direction. 9.8% of pairs rated 2 were re-ranked as 3 when presented in context, and 10% of pairs ranked at 3 changed to 2. The division between 2 and 3 separates paraphrases from non-paraphrases. Our results suggest that this binary rating of paraphrase aptness was not strongly affected by context. Context operates at the extremes of our scale, raising low aptness ratings and lowering high aptness ratings. This effect is clearly indicated in the regression chart in Fig FIGREF15 .
This effect of context on human ratings is very similar to the one reported in BIBREF5 . They find that sentences rated as ill formed out of context are improved when they are presented in their document contexts. However the mean ratings for sentences judged to be highly acceptable out of context declined when assessed in context. BIBREF5 's linear regression chart for the correlation between out-of-context and in-context acceptability judgments looks remarkably like our Fig FIGREF15 . There is, then, a striking parallel in the compression pattern that context appears to exert on human judgments for two entirely different linguistic properties.
This pattern requires an explanation. BIBREF5 suggest that adding context causes speakers to focus on broader semantic and pragmatic issues of discourse coherence, rather than simply judging syntactic well formedness (measured as naturalness) when a sentence is considered in isolation. On this view, compression of rating results from a pressure to construct a plausible interpretation for any sentence within its context.
If this is the case, an analogous process may generate the same compression effect for metaphor aptness assessment of sentence pairs in context. Speakers may attempt to achieve broader discourse coherence when assessing the metaphor-paraphrase aptness relation in a document context. Out of context they focus more narrowly on the semantic relations between a metaphorical sentence and its paraphrase candidate. Therefore, this relation is at the centre of a speaker's concern, and it receives more fine-grained assessment when considered out of context than in context. This issue clearly requires further research.
Modelling Paraphrase Judgments in Context
We use the DNN model described in BIBREF3 to predict aptness judgments for in-context paraphrase pairs. It has three main components:
The encoder for each pair of sentences taken as input is composed of two parallel "Atrous" Convolutional Neural Networks (CNNs) and LSTM RNNs, feeding two sequenced fully connected layers.
The encoder is preloaded with the lexical embeddings from Word2vec BIBREF6 . The sequences of word embeddings that we use as input provide the model with dense word-level information, while the model tries to generalize over these embedding patterns.
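Preloading the embeddings amounts to building an embedding matrix from pretrained vectors and handing it to the encoder's embedding layer. A typical way to do this with gensim is sketched below; the vector file, the toy vocabulary, and the dimensionality are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from gensim.models import KeyedVectors

EMB_DIM = 300
word2vec = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

# `vocab` is a hypothetical mapping from word to integer index (0 reserved for padding).
vocab = {"heart": 1, "stone": 2, "scream": 3}

embedding_matrix = np.zeros((len(vocab) + 1, EMB_DIM))
for word, index in vocab.items():
    if word in word2vec:                     # out-of-vocabulary words keep the zero vector
        embedding_matrix[index] = word2vec[word]

# The matrix can then initialise the embedding layer of the encoder,
# e.g. as the initial weights of a (frozen or trainable) Keras Embedding layer.
```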
The combination of a CNN and an LSTM allows us to capture both long-distance syntactic and semantic relations, best identified by a CNN, and the sequential nature of the input, most efficiently identified by an LSTM. Several existing studies, cited in BIBREF4 , demonstrate the advantages of combining CNNs and LSTMs to process texts.
The model produces a single classifier value between 0 and 1. We transform this score into a binary output of 0 or 1 by applying a threshold of 0.5 for assigning 1.
The architecture of the model is given in Fig FIGREF19 .
We use the same general protocol as BIBREF3 for training with supervised learning, and testing the model.
Using BIBREF3 's out-of-context metaphor dataset and our contextualized extension of this set, we apply four variants of the training and testing protocol.
When we train or test the model on the out-of-context dataset, we use BIBREF3 's original annotated corpus of 800 metaphor-paraphrase pairs. The in-context dataset contains 200 annotated pairs.
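The four variants form a simple grid over training and test corpora. The loop below only enumerates that protocol; `train_and_evaluate` is a hypothetical stand-in for the actual model fitting and scoring code.

```python
from itertools import product

corpora = {"out-of-context": 800, "in-context": 200}     # annotated pair counts

def train_and_evaluate(train_corpus, test_corpus):
    """Placeholder: fit the DNN on train_corpus and report F-score / Pearson r on test_corpus."""
    print(f"train on {train_corpus} ({corpora[train_corpus]} pairs), "
          f"test on {test_corpus} ({corpora[test_corpus]} pairs)")

for train_corpus, test_corpus in product(corpora, corpora):
    train_and_evaluate(train_corpus, test_corpus)
```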
MPAT Modelling Results
We use the model both to predict binary classification of a metaphor paraphrase candidate, and to generate gradient aptness ratings on the 4 category scale (see BIBREF3 for details). A positive binary classification is accurate if the pair's mean human rating is at least 2.5. The gradient predictions are derived from the softmax distribution of the output layer of the model. The results of our modelling experiments are given in Table TABREF24 .
The main result that we obtain from these experiments is that the model learns binary classification to a reasonable extent on the in-context dataset, both when trained on the same kind of data (in-context pairs), and when trained on BIBREF3 's original dataset (out-of-context pairs). However, the model does not perform well in predicting gradient in-context judgments when trained on in-context pairs. It improves slightly for this task when trained on out-of-context pairs.
By contrast, it does well in predicting both binary and gradient ratings when trained and tested on out-of-context data sets.
BIBREF5 also note a decline in Pearson correlation for their DNN models on the task of predicting human in-context acceptability judgments, but it is less drastic. They attribute this decline to the fact that the compression effect renders the gradient judgments less separable, and so harder to predict. A similar, but more pronounced version of this effect may account for the difficulty that our model encounters in predicting gradient in-context ratings. The binary classifier achieves greater success for these cases because its training tends to polarise the data in one direction or the other.
We also observe that the best combination seems to consist in training our model on the original out-of-context dataset and testing it on the in-context pairs. In this configuration we reach an F-score (0.72) only slightly lower than the one reported in BIBREF3 (0.74), and we record the highest Pearson correlation, 0.3 (which is still not strong, compared to BIBREF3 's best run, 0.75). This result may partly be an artifact of the larger amount of training data provided by the out-of-context pairs.
We can use this variant (out-of-context training and in-context testing) to perform a fine-grained comparison of the model's predicted ratings for the same sentences in and out of context. When we do this, we observe that out of 200 sentence pairs, our model scores the majority (130 pairs) higher when processed in context than out of context. A smaller but significant group (70 pairs) receives a lower score when processed in context. The first group's average score before adding context (0.48) is consistently lower than that of the second group (0.68). Also, as Table TABREF26 indicates, the pairs that our model rated, out of context, with a score lower than 0.5 (on the model's softmax distribution), received on average a higher rating in context, while the opposite is true for the pairs rated with a score higher than 0.5. In general, sentence pairs that were rated highly out of context receive a lower score in context, and vice versa. When we did linear regression on the DNN's in-context and out-of-context predicted scores, we observed substantially the same compression pattern exhibited by our AMT mean human judgments. Figure FIGREF27 plots this regression graph.
Related Cognitive Work on Metaphor Aptness
BIBREF7 present ratings of aptness and comprehensibility for 64 metaphors from two groups of subjects. They note that metaphors were perceived as more apt and more comprehensible to the extent that their terms occupied similar positions within dissimilar domains. Interestingly, BIBREF8 also present experimental results to claim that imagery does not clearly correlate with metaphor aptness. Aptness judgments are also subject to individual differences.
BIBREF9 points to such individual differences in metaphor processing. She asked 27 participants to rate 37 metaphors for difficulty, aptness and familiarity, and to write one or more interpretations of the metaphor. Subjects with higher working memory span were able to give more detailed and elaborate interpretations of metaphors. Familiarity and aptness correlated for both high and low span subjects. For high span subjects, aptness of metaphor positively correlated with the number of interpretations, while for low span subjects the opposite was true.
BIBREF10 analyses the aptness of metaphors with and without extended context. She finds that domain similarity correlates with aptness judgments in isolated metaphors, but not in contextualized metaphors. She also reports that there is no clear correlation between metaphor aptness ratings in isolated and in contextualized examples. BIBREF0 study the relation between aptness and comprehensibility in metaphors and similes. They provide experimental results indicating that aptness is a better predictor than comprehensibility for the “transformation" of a simile into a metaphor. Subjects tended to remember similes as metaphors (i.e. remember the dancer's arms moved like startled rattlesnakes as the dancer's arms were startled rattlesnakes) if they were judged to be particularly apt, rather than particularly comprehensible. They claim that context might play an important role in this process. They suggest that context should ease the transparency and increase the aptness of both metaphors and similes.
BIBREF11 present a series of experiments indicating that metaphors tend to be interpreted through emergent features that were not rated as particularly relevant, either for the tenor or for the vehicle of the metaphor. The number of emergent features that subjects were able to draw from a metaphor seems to correlate with their aptness judgments.
BIBREF12 use Event-Related Brain Potentials (ERPs) to study the temporal dynamics of metaphor processing in reading literary texts. They emphasize the influence of context on the ability of a reader to smoothly interpret an unusual metaphor.
BIBREF13 use electrophysiological experiments to try to disentangle the effect of a metaphor from that of its context. They find that de-contextualized metaphors elicited two different brain responses, INLINEFORM0 and INLINEFORM1 , while contextualized metaphors only produced the INLINEFORM2 effect. They attribute the INLINEFORM3 effect, often observed in neurological studies of metaphors, to expectations about upcoming words in the absence of a predictive context that “prepares" the reader for the metaphor. They suggest that the INLINEFORM4 effect reflects the actual interpretative processing of the metaphor.
This view is supported by several neurological studies showing that the INLINEFORM0 effect arises with unexpected elements, like new presuppositions introduced into a text in a way not implied by the context BIBREF14 , or unexpected associations with a noun-verb combination, not indicated by previous context (for example preceded by neutral context, as in BIBREF15 ).
Conclusions and Future Work
We have observed that embedding metaphorical sentences and their paraphrase candidates in a document context generates a compression effect in human metaphor aptness ratings. Context seems to mitigate the perceived aptness of metaphors in two ways. Those metaphor-paraphrase pairs given very low scores out of context receive increased scores in context, while those with very high scores out of context decline in rating when presented in context. At the same time, the demarcation line between paraphrase and non-paraphrase is not particularly affected by the introduction of extended context.
As previously observed by BIBREF10 , we found that context has an influence on human aptness ratings for metaphors, although, unlike her results, we did find a correlation between the two sets of ratings. BIBREF0 's expectation that context should facilitate a metaphor's aptness was supported only in one sense. Aptness increases for low-rated pairs. But it decreases for high-rated pairs.
We applied BIBREF3 's DNN for the MPAT to an in-context test set, experimenting with both out-of-context and in-context training corpora. We obtained reasonable results for binary classification of paraphrase candidates for aptness, but the performance of the model declined sharply for the prediction of human gradient aptness judgments, relative to its performance on a corresponding out-of-context test set. This appears to be the result of the increased difficulty in separating rating categories introduced by the compression effect.
Strikingly, the linear regression analyses of human aptness judgments for in- and out-of-context paraphrase pairs, and of our DNN's predictions for these pairs reveal similar compression patterns. These patterns produce ratings that cannot be clearly separated along a linear ranking scale.
To the best of our knowledge ours is the first study of the effect of context on metaphor aptness on a corpus of this dimension, using crowd sourced human judgments as the gold standard for assessing the predictions of a computational model of paraphrase. We also present the first comparative study of both human and model judgments of metaphor paraphrase for in-context and out-of-context variants of metaphorical sentences.
Finally, the compression effect that context induces on paraphrase judgments corresponds closely to the one observed independently in another task, which is reported in BIBREF5 . We regard this effect as a significant discovery that increases the plausibility and the interest of our results. The fact that it appears clearly with two tasks involving different sorts of DNNs and distinct learning regimes (unsupervised learning with neural network language models for the acceptability prediction task discussed, as opposed to supervised learning with our composite DNN for paraphrase prediction) reduces the likelihood that this effect is an artefact of our experimental design.
While our dataset is still small, we are presenting an initial investigation of a phenomenon which is, to date, little studied. We are working to enlarge our dataset and in future work we will expand both our in- and out-of-context annotated metaphor-paraphrase corpora.
While the corpus we used contains a number of hand crafted examples, it would be preferable to find these example types in natural corpora, and we are currently working on this. We will be extracting a dataset of completely natural (corpus-driven) examples. We are seeking to expand the size of the data set to improve the reliability of our modelling experiments.
We will also experiment with alternative DNN architectures for the MPAT. We will conduct qualitative analyses on the kinds of metaphors and similes that are more prone to a context-induced rating switch.
One of our main concerns in future research will be to achieve a better understanding of the compression effect of context on human judgments and DNN models. | Preceding and following sentence of each metaphor and paraphrase are added as document context |
7ee29d657ccb8eb9d5ec64d4afc3ca8b5f3bcc9f | 7ee29d657ccb8eb9d5ec64d4afc3ca8b5f3bcc9f_0 | Q: What were the results of the first experiment?
Text: Introduction
A metaphor is a way of forcing the normal boundaries of a word's meaning in order to better express an experience, a concept or an idea. To a native speaker's ear some metaphors sound more conventional (like the usage of the words ear and sound in this sentence), others more original. This is not the only dimension along which to judge a metaphor. One of the most important qualities of a metaphor is its appropriateness, its aptness: how good is a metaphor for conveying a given experience or concept. While a metaphor's degree of conventionality can be measured through probabilistic methods, like language models, it is harder to represent its aptness. BIBREF0 define aptness as “the extent to which a comparison captures important features of the topic".
It is possible to express an opinion about some metaphors' and similes' aptness (at least to a degree) without previously knowing what they are trying to convey, or the context in which they appear. For example, we don't need a particular context or frame of reference to construe the simile She was screaming like a turtle as strange, and less apt for expressing the quality of a scream than She was screaming like a banshee. In this case, the reason why the simile in the second sentence works best is intuitive. A salient characteristic of a banshee is a powerful scream. Turtles are not known for screaming, and so it is harder to define the quality of a scream through such a comparison, except as a form of irony. Other cases are more complicated to decide upon. The simile crying like a fire in the sun (It's All Over Now, Baby Blue, Bob Dylan) is powerfully apt for many readers, but simply odd for others. Fire and sun are not known to cry in any way. But at the same time the simile can capture the association we draw between something strong and intense in other senses - vision, touch, etc. - and a loud cry.
Nonetheless, most metaphors and similes need some kind of context, or external reference point to be interpreted. The sentence The old lady had a heart of stone is apt if the old lady is cruel or indifferent, but it is inappropriate as a description of a situation in which the old lady is kind and caring. We assume that, to an average reader's sensibility, the sentence models the situation in a satisfactory way only in the first case.
This is the approach to metaphor aptness that we assume in this paper. Following BIBREF3 , we treat a metaphor as apt in relation to a literal expression that it paraphrases. If the metaphor is judged to be a good paraphrase, then it closely expresses the core information of the literal sentence through its metaphorical shift. We refer to the prediction of readers' judgments on the aptness candidates for the literal paraphrase of a metaphor as the metaphor paraphrase aptness task (MPAT). BIBREF3 address the MPAT by using Amazon Mechanical Turk (AMT) to obtain crowd sourced annotations of metaphor-paraphrase candidate pairs. They train a composite Deep Neural Network (DNN) on a portion of their annotated corpus, and test it on the remaining part. Testing involves using the DNN as a binary classifier on paraphrase candidates. They derive predictions of gradient paraphrase aptness for their test set, and assess them by Pearson coefficient correlation to the mean judgments of their crowd sourced annotation of this set. Both training and testing are done independently of any document context for the metaphorical sentence and its literal paraphrase candidates.
In this paper we study the role of context on readers' judgments concerning the aptness of metaphor paraphrase candidates. We look at the accuracy of BIBREF3 's DNN when trained and tested on contextually embedded metaphor-paraphrase pairs for the MPAT. In Section SECREF2 we describe an AMT experiment in which annotators judge metaphors and paraphrases embodied in small document contexts, and in Section SECREF3 we discuss the results of this experiment. In Section SECREF4 we describe our MPAT modeling experiment, and in Section SECREF5 we discuss the results of this experiment. Section SECREF6 briefly surveys some related work. In Section SECREF7 we draw conclusions from our study, and we indicate directions for future work in this area.
Annotating Metaphor-Paraphrase Pairs in Contexts
BIBREF3 have recently produced a dataset of paraphrases containing metaphors designed to allow both supervised binary classification and gradient ranking. This dataset contains several pairs of sentences, where in each pair the first sentence contains a metaphor, and the second is a literal paraphrase candidate.
This corpus was constructed with a view to representing a large variety of syntactic structures and semantic phenomena in metaphorical sentences. Many of these structures and phenomena do not occur as metaphorical expressions, with any frequency, in natural text and were therefore introduced through hand crafted examples.
Each pair of sentences in the corpus has been rated by AMT annotators for paraphrase aptness on a scale of 1-4, with 4 being the highest degree of aptness. In BIBREF3 's dataset, sentences come in groups of five, where the first element is the “reference element" with a metaphorical expression, and the remaining four sentences are “candidates" that stand in a degree of paraphrasehood to the reference. Here is an example of a metaphor-paraphrase candidate pair.
The average AMT paraphrase score for this pair is 4.0, indicating a high degree of aptness.
We extracted 200 sentence pairs from BIBREF3 's dataset and provided each pair with a document context consisting of a preceding and a following sentence, as in the following example.
One of the authors constructed most of these contexts by hand. In some cases, it was possible to locate the original metaphor in an existing document. This was the case for
For these cases, a variant of the existing context was added to both the metaphorical and the literal sentences. We introduced small modifications to keep the context short and clear, and to avoid copyright issues. We lightly modified the contexts of metaphors extracted from corpora when the original context was too long, i.e. when the contextual sentences of the selected metaphor were longer than the maximum length we specified for our corpus. In such cases we reduced the length of the sentence, while sustaining its meaning.
The context was designed to sound as natural as possible. Since the same context is used for metaphors and their literal candidate paraphrases, we tried to design short contexts that make sense for both the figurative and the literal sentences, even when the pair had been judged as non-paraphrases. We kept the context as neutral as possible in order to avoid a distortion in crowd source ratings.
For example, in the following pair of sentences, the literal sentence is not a good paraphrase of the figurative one (a simile).
We opted for a context that is natural for both sentences.
We sought to avoid, whenever possible, an incongruous context for one of the sentences that could influence our annotators' ratings.
We collected a sub-corpus of 200 contextually embedded pairs of sentences. We tried to keep our data as balanced as possible, drawing from all four rating classes of paraphrase aptness ratings (between 1 to 4) that BIBREF3 obtained. We selected 44 pairs of 1 ratings, 51 pairs of 2, 43 pairs of 3 and 62 pairs of 4.
We then used AMT crowd sourcing to rate the contextualized paraphrase pairs, so that we could observe the effect of document context on assessments of metaphor paraphrase aptness.
To test the reproducibility of BIBREF3 's ratings, we launched a pilot study for 10 original non-contextually embedded pairs, selected from all four classes of aptness. We observed that the annotators provided mean ratings very similar to those reported in BIBREF3 . The Pearson correlation coefficient between the mean judgments of our out-of-context pilot annotations and BIBREF3 's annotations for the same pairs was over 0.9. We then conducted an AMT annotation task for the 200 contextualised pairs. On average, 20 different annotators rated each pair. We considered as “rogue" those annotators who rated the large majority of pairs with very high or very low scores, and those who responded inconsistently to two “trap" pairs. After filtering out the rogues, we had an average of 14 annotators per pair.
Annotation Results
We found a Pearson correlation of 0.81 between the in-context and out-of-context mean human paraphrase ratings for our two corpora. This correlation is virtually identical to the one that BIBREF5 report for mean acceptability ratings of out-of-context to in-context sentences in their crowd source experiment. It is interesting that a relatively high level of ranking correspondence should occur in mean judgments for sentences presented out of and within document contexts, for two entirely distinct tasks.
Our main result concerns the effect of context on mean paraphrase judgment. We observed that it tends to flatten aptness ratings towards the center of the rating scale. 71.1% of the metaphors that had been considered highly apt (average rounded score of 4) in the context-less pairs received a more moderate judgment (average rounded score of 3), but the reverse movement was rare. Only 5% of pairs rated 3 out of context (2 pairs) were boosted to a mean rating of 4 in context. At the other end of the scale, 68.2% of the metaphors judged to be in the lowest aptness category (1) out of context were raised to a mean of 2 in context, while only 3.9% of pairs rated 2 out of context were lowered to 1 in context.
Ratings at the middle of the scale - 2 (defined as semantically related non-paraphrases) and 3 (imperfect or loose paraphrases) - remained largely stable, with little movement in either direction. 9.8% of pairs rated 2 were re-ranked as 3 when presented in context, and 10% of pairs ranked at 3 changed to 2. The division between 2 and 3 separates paraphrases from non-paraphrases. Our results suggest that this binary rating of paraphrase aptness was not strongly affected by context. Context operates at the extremes of our scale, raising low aptness ratings and lowering high aptness ratings. This effect is clearly indicated in the regression chart in Fig FIGREF15 .
This effect of context on human ratings is very similar to the one reported in BIBREF5 . They find that sentences rated as ill formed out of context are improved when they are presented in their document contexts. However the mean ratings for sentences judged to be highly acceptable out of context declined when assessed in context. BIBREF5 's linear regression chart for the correlation between out-of-context and in-context acceptability judgments looks remarkably like our Fig FIGREF15 . There is, then, a striking parallel in the compression pattern that context appears to exert on human judgments for two entirely different linguistic properties.
This pattern requires an explanation. BIBREF5 suggest that adding context causes speakers to focus on broader semantic and pragmatic issues of discourse coherence, rather than simply judging syntactic well formedness (measured as naturalness) when a sentence is considered in isolation. On this view, compression of rating results from a pressure to construct a plausible interpretation for any sentence within its context.
If this is the case, an analogous process may generate the same compression effect for metaphor aptness assessment of sentence pairs in context. Speakers may attempt to achieve broader discourse coherence when assessing the metaphor-paraphrase aptness relation in a document context. Out of context they focus more narrowly on the semantic relations between a metaphorical sentence and its paraphrase candidate. Therefore, this relation is at the centre of a speaker's concern, and it receives more fine-grained assessment when considered out of context than in context. This issue clearly requires further research.
Modelling Paraphrase Judgments in Context
We use the DNN model described in BIBREF3 to predict aptness judgments for in-context paraphrase pairs. It has three main components:
The encoder for each pair of sentences taken as input is composed of two parallel "Atrous" Convolutional Neural Networks (CNNs) and LSTM RNNs, feeding two sequenced fully connected layers.
The encoder is preloaded with the lexical embeddings from Word2vec BIBREF6 . The sequences of word embeddings that we use as input provide the model with dense word-level information, while the model tries to generalize over these embedding patterns.
The combination of a CNN and an LSTM allows us to capture both long-distance syntactic and semantic relations, best identified by a CNN, and the sequential nature of the input, most efficiently identified by an LSTM. Several existing studies, cited in BIBREF4 , demonstrate the advantages of combining CNNs and LSTMs to process texts.
The model produces a single classifier value between 0 and 1. We transform this score into a binary output of 0 or 1 by applying a threshold of 0.5 for assigning 1.
The architecture of the model is given in Fig FIGREF19 .
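For illustration only, a minimal sketch of this kind of encoder is given below in Keras; the layer sizes, the dilation rate, and the use of Conv1D with a dilation_rate to stand in for the atrous convolutions are our assumptions rather than the exact configuration of BIBREF3 's model.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_LEN, EMB_DIM = 50, 300  # illustrative sizes

def sentence_branch():
    """One branch: a dilated ('atrous') 1-D convolution in parallel with an LSTM."""
    inp = layers.Input(shape=(MAX_LEN, EMB_DIM))   # pre-embedded sentence (e.g. Word2vec)
    conv = layers.Conv1D(64, 3, dilation_rate=2, activation="relu")(inp)
    conv = layers.GlobalMaxPooling1D()(conv)
    lstm = layers.LSTM(64)(inp)
    return inp, layers.concatenate([conv, lstm])

in1, enc1 = sentence_branch()   # metaphorical sentence
in2, enc2 = sentence_branch()   # candidate paraphrase

hidden = layers.Dense(64, activation="relu")(layers.concatenate([enc1, enc2]))
hidden = layers.Dense(32, activation="relu")(hidden)    # two sequential dense layers
score = layers.Dense(1, activation="sigmoid")(hidden)   # single value in [0, 1]

model = Model([in1, in2], score)
model.compile(optimizer="adam", loss="binary_crossentropy")
# Binary decision: scores of 0.5 or above count as a positive paraphrase judgment.
# predictions = (model.predict([x1, x2]) >= 0.5).astype(int)
```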
We use the same general protocol as BIBREF3 for training with supervised learning, and testing the model.
Using BIBREF3 's out-of-context metaphor dataset and our contextualized extension of this set, we apply four variants of the training and testing protocol.
When we train or test the model on the out-of-context dataset, we use BIBREF3 's original annotated corpus of 800 metaphor-paraphrase pairs. The in-context dataset contains 200 annotated pairs.
MPAT Modelling Results
We use the model both to predict binary classification of a metaphor paraphrase candidate, and to generate gradient aptness ratings on the 4-category scale (see BIBREF3 for details). A positive binary classification is counted as accurate if the pair's mean human rating is INLINEFORM0 2.5. The gradient predictions are derived from the softmax distribution of the output layer of the model. The results of our modelling experiments are given in Table TABREF24 .
The main result that we obtain from these experiments is that the model learns binary classification to a reasonable extent on the in-context dataset, both when trained on the same kind of data (in-context pairs), and when trained on BIBREF3 's original dataset (out-of-context pairs). However, the model does not perform well in predicting gradient in-context judgments when trained on in-context pairs. It improves slightly for this task when trained on out-of-context pairs.
By contrast, it does well in predicting both binary and gradient ratings when trained and tested on out-of-context data sets.
BIBREF5 also note a decline in Pearson correlation for their DNN models on the task of predicting human in-context acceptability judgments, but it is less drastic. They attribute this decline to the fact that the compression effect renders the gradient judgments less separable, and so harder to predict. A similar, but more pronounced version of this effect may account for the difficulty that our model encounters in predicting gradient in-context ratings. The binary classifier achieves greater success for these cases because its training tends to polarise the data in one direction or the other.
We also observe that the best combination seems to consist in training our model on the original out-of-context dataset and testing it on the in-context pairs. In this configuration we reach an F-score (0.72) only slightly lower than the one reported in BIBREF3 (0.74), and we record the highest Pearson correlation, 0.3 (which is still not strong, compared to BIBREF3 's best run, 0.75). This result may partly be an artifact of the larger amount of training data provided by the out-of-context pairs.
We can use this variant (out-of-context training and in-context testing) to perform a fine-grained comparison of the model's predicted ratings for the same sentences in and out of context. When we do this, we observe that out of 200 sentence pairs, our model scores the majority (130 pairs) higher when processed in context than out of context. A smaller but significant group (70 pairs) receives a lower score when processed in context. The first group's average score before adding context (0.48) is consistently lower than that of the second group (0.68). Also, as Table TABREF26 indicates, the pairs that our model rated, out of context, with a score lower than 0.5 (on the model's softmax distribution) received on average a higher rating in context, while the opposite is true for the pairs rated with a score higher than 0.5. In general, sentence pairs that were rated highly out of context receive a lower score in context, and vice versa. When we ran linear regression on the DNN's in-context and out-of-context predicted scores, we observed substantially the same compression pattern exhibited by our AMT mean human judgments. Figure FIGREF27 plots this regression graph.
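The regression itself can be reproduced with a few lines of NumPy; the score arrays below are synthetic placeholders for the model's softmax-derived ratings, so the numbers only illustrate how the compression shows up as a slope below 1 with a positive intercept.

```python
import numpy as np

out_scores = np.random.rand(200)                                       # out-of-context scores (placeholder)
in_scores = 0.25 + 0.5 * out_scores + np.random.normal(0, 0.05, 200)   # in-context scores (placeholder)

slope, intercept = np.polyfit(out_scores, in_scores, deg=1)
print(f"in_score ~ {slope:.2f} * out_score + {intercept:.2f}")

# Compression: low out-of-context scores rise in context, high ones fall.
low = out_scores < 0.5
print("mean change for low-scored pairs :", np.mean(in_scores[low] - out_scores[low]))
print("mean change for high-scored pairs:", np.mean(in_scores[~low] - out_scores[~low]))
```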
Related Cognitive Work on Metaphor Aptness
BIBREF7 present ratings of aptness and comprehensibility for 64 metaphors from two groups of subjects. They note that metaphors were perceived as more apt and more comprehensible to the extent that their terms occupied similar positions within dissimilar domains. Interestingly, BIBREF8 also present experimental results to claim that imagery does not clearly correlate with metaphor aptness. Aptness judgments are also subject to individual differences.
BIBREF9 points to such individual differences in metaphor processing. She asked 27 participants to rate 37 metaphors for difficulty, aptness and familiarity, and to write one or more interpretations of the metaphor. Subjects with higher working memory span were able to give more detailed and elaborate interpretations of metaphors. Familiarity and aptness correlated with both high and low span subjects. For high span subjects aptness of metaphor positively correlated with number of interpretations, while for low span subjects the opposite was true.
BIBREF10 analyses the aptness of metaphors with and without extended context. She finds that domain similarity correlates with aptness judgments in isolated metaphors, but not in contextualized metaphors. She also reports that there is no clear correlation between metaphor aptness ratings in isolated and in contextualized examples. BIBREF0 study the relation between aptness and comprehensibility in metaphors and similes. They provide experimental results indicating that aptness is a better predictor than comprehensibility for the “transformation” of a simile into a metaphor. Subjects tended to remember similes as metaphors (i.e. remember the dancer's arms moved like startled rattlesnakes as the dancer's arms were startled rattlesnakes) if they were judged to be particularly apt, rather than particularly comprehensible. They claim that context might play an important role in this process. They suggest that context should facilitate the transparency and increase the aptness of both metaphors and similes.
BIBREF11 present a series of experiments indicating that metaphors tend to be interpreted through emergent features that were not rated as particularly relevant, either for the tenor or for the vehicle of the metaphor. The number of emergent features that subjects were able to draw from a metaphor seems to correlate with their aptness judgments.
BIBREF12 use Event-Related Brain Potentials (ERPs) to study the temporal dynamics of metaphor processing in reading literary texts. They emphasize the influence of context on the ability of a reader to smoothly interpret an unusual metaphor.
BIBREF13 use electrophysiological experiments to try to disentangle the effect of a metaphor from that of its context. They find that de-contextualized metaphors elicited two different brain responses, INLINEFORM0 and INLINEFORM1 , while contextualized metaphors only produced the INLINEFORM2 effect. They attribute the INLINEFORM3 effect, often observed in neurological studies of metaphors, to expectations about upcoming words in the absence of a predictive context that “prepares" the reader for the metaphor. They suggest that the INLINEFORM4 effect reflects the actual interpretative processing of the metaphor.
This view is supported by several neurological studies showing that the INLINEFORM0 effect arises with unexpected elements, like new presuppositions introduced into a text in a way not implied by the context BIBREF14 , or unexpected associations with a noun-verb combination, not indicated by previous context (for example preceded by neutral context, as in BIBREF15 ).
Conclusions and Future Work
We have observed that embedding metaphorical sentences and their paraphrase candidates in a document context generates a compression effect in human metaphor aptness ratings. Context seems to mitigate the perceived aptness of metaphors in two ways. Those metaphor-paraphrase pairs given very low scores out of context receive increased scores in context, while those with very high scores out of context decline in rating when presented in context. At the same time, the demarcation line between paraphrase and non-paraphrase is not particularly affected by the introduction of extended context.
As previously observed by BIBREF10 , we found that context has an influence on human aptness ratings for metaphors, although, unlike her results, we did find a correlation between the two sets of ratings. BIBREF0 's expectation that context should facilitate a metaphor's aptness was supported only in one sense. Aptness increases for low-rated pairs. But it decreases for high-rated pairs.
We applied BIBREF3 's DNN for the MAPT to an in-context test set, experimenting with both out-of-context and in-context training corpora. We obtained reasonable results for binary classification of paraphrase candidates for aptness, but the performance of the model declined sharply for the prediction of human gradient aptness judgments, relative to its performance on a corresponding out-of-context test set. This appears to be the result of the increased difficulty in separating rating categories introduced by the compression effect.
Strikingly, the linear regression analyses of human aptness judgments for in- and out-of-context paraphrase pairs, and of our DNN's predictions for these pairs reveal similar compression patterns. These patterns produce ratings that cannot be clearly separated along a linear ranking scale.
To the best of our knowledge ours is the first study of the effect of context on metaphor aptness on a corpus of this dimension, using crowd sourced human judgments as the gold standard for assessing the predictions of a computational model of paraphrase. We also present the first comparative study of both human and model judgments of metaphor paraphrase for in-context and out-of-context variants of metaphorical sentences.
Finally, the compression effect that context induces on paraphrase judgments corresponds closely to the one observed independently in another task, which is reported in BIBREF5 . We regard this effect as a significant discovery that increases the plausibility and the interest of our results. The fact that it appears clearly with two tasks involving different sorts of DNNs and distinct learning regimes (unsupervised learning with neural network language models for the acceptability prediction task discussed, as opposed to supervised learning with our composite DNN for paraphrase prediction) reduces the likelihood that this effect is an artefact of our experimental design.
While our dataset is still small, we are presenting an initial investigation of a phenomenon which is, to date, little studied. We are working to enlarge our dataset and in future work we will expand both our in- and out-of-context annotated metaphor-paraphrase corpora.
While the corpus we used contains a number of hand crafted examples, it would be preferable to find these example types in natural corpora, and we are currently working on this. We will be extracting a dataset of completely natural (corpus-driven) examples. We are seeking to expand the size of the data set to improve the reliability of our modelling experiments.
We will also experiment with alternative DNN architectures for the MAPT. We will conduct qualitative analyses on the kinds of metaphors and similes that are more prone to a context-induced rating switch.
One of our main concerns in future research will be to achieve a better understanding of the compression effect of context on human judgments and DNN models. | Best performance achieved is 0.72 F1 score |
b42323d60827ecf0d9e478c9a31f90940cfae975 | b42323d60827ecf0d9e478c9a31f90940cfae975_0 | Q: How big is the evaluated dataset?
Text: Introduction
Drug-drug interaction (DDI) is a situation when one drug increases or decreases the effect of another drug BIBREF0 . Adverse drug reactions may cause severe side effects if two or more medicines are taken and their DDIs have not been investigated in detail. DDI is a common cause of illness, and even a cause of death BIBREF1 . Thus, DDI databases for clinical medication decisions have been proposed by some researchers. These databases, such as SFINX BIBREF2 , KEGG BIBREF3 , and CredibleMeds BIBREF4 , help physicians and pharmacists avoid most adverse drug reactions.
Traditional DDI databases are manually constructed according to clinical records, scientific research and drug specifications. For instance, the sentence “With combined use, clinicians should be aware, when phenytoin is added, of the potential for reexacerbation of pulmonary symptomatology due to lowered serum theophylline concentrations BIBREF5 ”, which is from a pharmacotherapy report, describes the side effect of the combined use of phenytoin and theophylline. Such information on specific medicines is then added to DDI databases. As more drug-drug interactions are discovered, manually constructing DDI databases would consume a great deal of manpower and resources.
There have been many efforts to automatically extract DDIs from natural language BIBREF0 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , mainly from medical literature and clinical records. These works can be divided into the following categories:
To avoid complex feature engineering and the use of NLP toolkits, we employ deep learning approaches for sentence comprehension as a whole. Our model takes in a sentence from the biomedical literature which contains a drug pair and outputs the kind of DDI this drug pair belongs to. This helps physicians refrain from improper combined use of drugs. In addition, word and sentence level attentions are introduced to our model for better DDI predictions.
We train our language comprehension model with labeled instances. Figure FIGREF5 shows partial records in the DDI corpus BIBREF16 . We extract the sentence and drug pairs in the records. There are 3 drug pairs in this example, so we have 3 instances. The DDI corpus annotates each drug pair in the sentence with a DDI type. The DDI type, which is the information of most concern, is described in table TABREF4 . The details about how we train our model and extract the DDI type from text are described in the remaining sections.
Related Work
In the DDI extraction task, most existing work proposes NLP methods or machine learning approaches. Chowdhury BIBREF14 and Thomas et al. BIBREF11 proposed methods that use linguistic phenomena and a two-stage SVM to classify DDIs. FBK-irst BIBREF10 is a follow-on work which applies a kernel method to the existing model and outperforms it.
Neural network based approaches have been proposed in several works. Liu et al. BIBREF9 were the first to employ a CNN for DDI extraction, which outperforms the traditional machine learning based methods. Limited by the convolutional kernel size, the CNN can only extract features from contiguous windows of 3 to 5 words rather than from distant words. Liu et al. BIBREF8 proposed a dependency-based CNN to handle distant but relevant words. Sahu et al. BIBREF12 proposed an LSTM based DDI extraction approach that outperforms the CNN based approach, since the LSTM handles the sentence as a sequence instead of sliding windows. To conclude, neural network based approaches have the advantages of 1) less reliance on extra NLP toolkits, 2) a simpler preprocessing procedure, and 3) better performance than text analysis and machine learning methods.
Drug-drug interaction extraction is a relation extraction task of natural language processing. Relation extraction aims to determine the relation between two given entities in a sentence. In recent years, attention mechanisms and various neural networks have been applied to relation extraction BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 . A convolutional deep neural network is utilized for extracting sentence level features in BIBREF19 . The sentence level features are then concatenated with lexical level features, which are obtained from the lexical resource WordNet BIBREF22 , followed by a multilayer perceptron (MLP) to classify the entities' relation. A refined model is proposed by Nguyen et al. BIBREF21 . The convolutional kernels are set to various sizes to capture more n-gram features. In addition, the word and position embeddings are trained automatically instead of being kept constant as in BIBREF19 . Wang et al. BIBREF20 introduce a multi-level attention mechanism to the CNN in order to emphasize the keywords and ignore the non-critical words during relation detection. The attention CNN model outperforms previous state-of-the-art methods.
Besides CNNs, recurrent neural networks (RNNs) have been applied to relation extraction as well. Zhang et al. BIBREF18 utilize the long short-term memory network (LSTM), a typical RNN model, to represent the sentence. The bidirectional LSTM chronologically captures the previous and future information, after which a pooling layer and an MLP are used to extract features and classify the relation. An attention mechanism is added to the bidirectional LSTM in BIBREF17 for relation extraction. An attention layer gives each memory cell a weight so that the classifier can capture the principal features for relation detection. The attention based bidirectional LSTM has been shown to perform better than previous work.
Proposed Model
In this section, we present our bidirectional recurrent neural network with multiple attention layer model. The overview of our architecture is shown in figure FIGREF15 . For a given instance, which describes the details about two or more drugs, the model represents each word as a vector in embedding layer. Then the bidirectional RNN layer generates a sentence matrix, each column vector in which is the semantic representation of the corresponding word. The word level attention layer transforms the sentence matrix to vector representation. Then sentence level attention layer generates final representation for the instance by combining several relevant sentences in view of the fact that these sentences have the same drug pair. Followed by a softmax classifier, the model classifies the drug pair in the given instance as specific DDI.
Preprocessing
The DDI corpus contains thousands of XML files, each of which is constructed from several records. For a sentence containing INLINEFORM0 drugs, there are INLINEFORM1 drug pairs. We replace the two drugs of interest with “drug1” and “drug2” while the other drugs are replaced by “drug0”, as BIBREF9 did. This step is called drug blinding. For example, the sentence in figure FIGREF5 generates 3 instances after drug blinding: “drug1: an increased risk of hepatitis has been reported to result from combined use of drug2 and drug0”, “drug1: an increased risk of hepatitis has been reported to result from combined use of drug0 and drug2”, “drug0: an increased risk of hepatitis has been reported to result from combined use of drug1 and drug2”. The drug blinded sentences are the instances that are fed to our model.
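A minimal sketch of this drug blinding step in plain Python is shown below; the sentence representation (raw text plus the list of annotated drug mentions) and the example drug names are illustrative assumptions rather than the actual corpus record.

```python
from itertools import combinations

def blind(sentence, drugs):
    """Generate one blinded instance per unordered drug pair in the sentence."""
    instances = []
    for d1, d2 in combinations(drugs, 2):
        text = sentence
        for name in drugs:
            placeholder = "drug1" if name == d1 else "drug2" if name == d2 else "drug0"
            text = text.replace(name, placeholder)
        instances.append(((d1, d2), text))
    return instances

sent = ("an increased risk of hepatitis has been reported to result from "
        "combined use of isoniazid, rifampin and pyrazinamide")
for pair, inst in blind(sent, ["isoniazid", "rifampin", "pyrazinamide"]):
    print(pair, "->", inst)   # 3 drugs give 3 blinded instances
```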
We put the sentences with the same drug pairs together as a set, since the sentence level attention layer (will be described in Section SECREF21 ) will use the sentences which contain the same drugs.
Embedding Layer
Given an instance INLINEFORM0 which contains specified two drugs INLINEFORM1 , INLINEFORM2 , each word is embedded in a INLINEFORM3 dimensional space ( INLINEFORM4 , INLINEFORM5 are the dimension of word embedding and position embedding). The look up table function INLINEFORM6 maps a word or a relative position to a column vector. After embedding layer the sentence is represented by INLINEFORM7 , where DISPLAYFORM0
The INLINEFORM0 function is usually implemented with matrix-vector product. Let INLINEFORM1 , INLINEFORM2 denote the one-hot representation (column vector) of word and relative distance. INLINEFORM3 , INLINEFORM4 are word and position embedding query matrix. The look up functions are implemented by DISPLAYFORM0
Then the word sequence INLINEFORM0 is fed to the RNN layer. Note that the sentence will be filled with INLINEFORM1 if its length is less than INLINEFORM2 .
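The look-up and padding step can be sketched as follows (NumPy); the embedding dimensions, the number of position buckets, and the padding convention are illustrative assumptions.

```python
import numpy as np

V, P, MAX_LEN = 10000, 201, 100     # vocab size, position buckets, fixed sentence length
D_WORD, D_POS = 100, 10             # embedding dimensions (illustrative)

W_word = np.random.randn(V, D_WORD)   # word embedding table
W_pos1 = np.random.randn(P, D_POS)    # relative distance to drug1
W_pos2 = np.random.randn(P, D_POS)    # relative distance to drug2

def embed(token_ids, dist1, dist2, pad_id=0):
    """Map one sentence (token ids plus two relative-distance indices per token)
    to a (MAX_LEN, D_WORD + 2 * D_POS) matrix, padding short sentences."""
    rows = [np.concatenate([W_word[t], W_pos1[d1], W_pos2[d2]])
            for t, d1, d2 in zip(token_ids, dist1, dist2)]
    pad_row = np.concatenate([W_word[pad_id], W_pos1[P // 2], W_pos2[P // 2]])
    rows += [pad_row] * (MAX_LEN - len(rows))
    return np.stack(rows)

X = embed([5, 42, 7], [99, 100, 101], [98, 99, 100])
print(X.shape)   # (100, 120)
```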
Bidirectional RNN Encoding Layer
The words in the sequence are read by the RNN's gated recurrent units (GRUs) one by one. The GRU takes the current word INLINEFORM0 and the previous GRU's hidden state INLINEFORM1 as input. The current GRU encodes INLINEFORM2 and INLINEFORM3 into a new hidden state INLINEFORM4 (its dimension is INLINEFORM5 , a hyperparameter), which can be regarded as the information the GRU has remembered.
Figure FIGREF25 shows the details of the GRU. The reset gate INLINEFORM0 selectively forgets information delivered by the previous GRU. Then the hidden state becomes INLINEFORM1 . The update gate INLINEFORM2 updates the information according to INLINEFORM3 and INLINEFORM4 . The equations below describe these procedures. Note that INLINEFORM5 stands for element-wise multiplication. DISPLAYFORM0 DISPLAYFORM1
The bidirectional RNN contains forward RNN and backward RNN. Forward RNN reads sentence from INLINEFORM0 to INLINEFORM1 , generating INLINEFORM2 , INLINEFORM3 , ..., INLINEFORM4 . Backward RNN reads sentence from INLINEFORM5 to INLINEFORM6 , generating INLINEFORM7 , INLINEFORM8 , ..., INLINEFORM9 . Then the encode result of this layer is DISPLAYFORM0
We apply the dropout technique in the RNN layer to avoid overfitting. Each GRU has a probability (denoted by INLINEFORM0 , also a hyperparameter) of being dropped. A dropped GRU has no output and will not affect the subsequent GRUs. With the bidirectional RNN and the dropout technique, the input INLINEFORM1 is encoded into the sentence matrix INLINEFORM2 .
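A sketch of this encoder in Keras is given below; note that Keras applies dropout to GRU connections rather than dropping whole units as described above, so this is only an approximation, and the sizes are placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_LEN, INPUT_DIM, HIDDEN = 100, 120, 64   # illustrative sizes

inputs = layers.Input(shape=(MAX_LEN, INPUT_DIM))            # embedded sentence
encoded = layers.Bidirectional(
    layers.GRU(HIDDEN, return_sequences=True, dropout=0.5)   # forward + backward GRUs
)(inputs)                                                    # shape: (MAX_LEN, 2 * HIDDEN)

encoder = Model(inputs, encoded)
encoder.summary()
```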
Word Level Attention
The purpose of the word level attention layer is to extract a sentence representation (also known as a feature vector) from the encoded matrix. We use word level attention instead of max pooling, since the attention mechanism can determine the importance of each individual encoded word in each row of INLINEFORM0 . Let INLINEFORM1 denote the attention vector (a column vector) and INLINEFORM2 denote the filter that gives each element in the row of INLINEFORM3 a weight. The following equations show the attention operation, which is also illustrated in figure FIGREF15 . DISPLAYFORM0 DISPLAYFORM1
The softmax function takes a vector INLINEFORM0 as input and outputs a vector, DISPLAYFORM0
INLINEFORM0 denotes the feature vector captured by this layer. Several approaches BIBREF12 , BIBREF17 use this vector and a softmax classifier for classification. Inspired by BIBREF23 , we propose sentence level attention to combine the information of other sentences for improved DDI classification.
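A word level attention layer of this general form can be sketched as follows (NumPy); the tanh scoring used here is an assumption, since the exact parameterisation is given by the formulas above.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def word_attention(H, w, b=0.0):
    """H: (T, d) matrix of encoded words; w: (d,) attention filter.
    Returns a single d-dimensional sentence representation."""
    scores = np.tanh(H @ w + b)   # one relevance score per word
    alpha = softmax(scores)       # attention weights over the T words
    return alpha @ H              # weighted sum of the word encodings

H = np.random.randn(100, 128)
w = np.random.randn(128)
print(word_attention(H, w).shape)   # (128,)
```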
Sentence Level Attention
The previous layers capture features only from the given sentence. However, other sentences may contain information that contributes to the understanding of this sentence. It is reasonable to look at other relevant instances when determining two drugs' interaction from the given sentence. In our implementation, the instances that have the same drug pair are considered relevant. The relevant instance set is denoted by INLINEFORM0 , where INLINEFORM1 is the sentence feature vector. INLINEFORM2 stands for how well the instance INLINEFORM3 matches its DDI INLINEFORM4 (the vector representation of a specific DDI). INLINEFORM5 is a diagonal attention matrix, multiplied by which the feature vector INLINEFORM6 can concentrate on the most representative features. DISPLAYFORM0 DISPLAYFORM1
INLINEFORM0 is the softmax result of INLINEFORM1 . The final sentence representation is decided by all of the relevant sentences' feature vector, as Equation EQREF24 shows. DISPLAYFORM0
Note that the set INLINEFORM0 grows gradually as new sentences with the same drug pair are found during training. An instance INLINEFORM1 is represented by INLINEFORM2 before sentence level attention. The sentence level attention layer finds the set INLINEFORM3 , whose instances have the same drug pair as INLINEFORM4 , and puts INLINEFORM5 in INLINEFORM6 . Then the final sentence representation INLINEFORM7 is calculated in this layer.
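A sketch of the sentence level step follows (NumPy); the diagonal bilinear scoring mirrors the description above, while the initialisation of the attention matrix and of the DDI-type vector is illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def sentence_attention(S, A_diag, r):
    """S: (k, d) feature vectors of the k instances sharing a drug pair;
    A_diag: (d,) diagonal of the attention matrix; r: (d,) DDI-type vector.
    Returns the final d-dimensional representation."""
    e = S @ (A_diag * r)   # e_i = s_i^T diag(A_diag) r, one score per instance
    alpha = softmax(e)     # weights over the relevant instances
    return alpha @ S       # weighted combination of the instance vectors

S = np.random.randn(5, 256)
final = sentence_attention(S, np.ones(256), np.random.randn(256))
print(final.shape)   # (256,)
```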
Classification and Training
A given sentence INLINEFORM0 is finally represented by the feature vector INLINEFORM1 . We then feed it to a softmax classifier. Let INLINEFORM2 denote the set of all DDI types. The output INLINEFORM3 gives the probability that the instance belongs to each class INLINEFORM4 . DISPLAYFORM0
We use the cross entropy cost function and INLINEFORM0 regularization as the optimization objective. For the INLINEFORM1 -th instance, INLINEFORM2 denotes the one-hot representation of its label, while the model outputs INLINEFORM3 . The cross entropy cost is: DISPLAYFORM0
For a mini-batch INLINEFORM0 , the optimization objective is: DISPLAYFORM0
All parameters in this model are: DISPLAYFORM0
We optimize the parameters of the objective function INLINEFORM0 with Adam BIBREF24 , which is a variant of mini-batch stochastic gradient descent. During each training step, the gradient of INLINEFORM1 is calculated, and INLINEFORM2 is then adjusted according to the gradient. At the end of training, we have a model that is able to predict two drugs' interactions when a sentence about these drugs is given.
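For illustration, a minimal training step with this objective can be written in modern TensorFlow as below; the L2 coefficient, learning rate, and dimensions are placeholder assumptions (the original implementation used TensorFlow r0.11).

```python
import tensorflow as tf

NUM_CLASSES, FEATURE_DIM, L2 = 5, 256, 1e-4
W = tf.Variable(tf.random.normal([FEATURE_DIM, NUM_CLASSES]))
b = tf.Variable(tf.zeros([NUM_CLASSES]))
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

def train_step(features, one_hot_labels):
    """features: (batch, FEATURE_DIM) final sentence representations."""
    with tf.GradientTape() as tape:
        logits = features @ W + b
        cross_entropy = tf.reduce_mean(
            tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_labels, logits=logits))
        loss = cross_entropy + L2 * (tf.nn.l2_loss(W) + tf.nn.l2_loss(b))
    grads = tape.gradient(loss, [W, b])
    optimizer.apply_gradients(zip(grads, [W, b]))
    return loss

batch = tf.random.normal([32, FEATURE_DIM])
labels = tf.one_hot(tf.random.uniform([32], maxval=NUM_CLASSES, dtype=tf.int32), NUM_CLASSES)
print(float(train_step(batch, labels)))
```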
DDI Prediction
The model is trained for DDI classification. The parameters in list INLINEFORM0 are tuned during the training process. Given a new sentence with two drugs, we can use this model to classify the DDI type.
The DDI prediction follows the procedure described in Section SECREF6 - SECREF26 . The given sentence is eventually represented by feature vector INLINEFORM0 . Then INLINEFORM1 is classified to a specific DDI type with a softmax classifier. In next section, we will evaluate our model's DDI prediction performance and see the advantages and shortcomings of our model.
Datasets and Evaluation Metrics
We use the DDI corpus of the 2013 DDIExtraction challenge BIBREF16 to train and test our model. The DDIs in this corpus are classified as five types. We give the definitions of these types and their example sentences, as shown in table TABREF4 . This standard dataset is made up of training set and testing set. We use the same metrics as in other drug-drug interaction extraction literature BIBREF11 , BIBREF10 , BIBREF25 , BIBREF9 , BIBREF8 , BIBREF12 : the overall precision, recall, and F1 score on testing set. INLINEFORM0 denotes the set of {False, Mechanism, Effect, Advise, Int}. The precision and recall of each INLINEFORM1 are calculated by DISPLAYFORM0 DISPLAYFORM1
Then the overall precision, recall, and F1 score are calculated by DISPLAYFORM0
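A sketch of the overall metric follows (plain Python), micro-averaging over the four interaction types and treating “false” as the negative class, which is the usual convention for this task; whether the formula hidden behind the placeholder follows exactly that convention is our assumption.

```python
POSITIVE = {"mechanism", "effect", "advise", "int"}   # "false" treated as negative

def overall_prf(gold, pred):
    """Micro-averaged precision, recall, and F1 over the four DDI types."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p and g in POSITIVE)
    fp = sum(1 for g, p in zip(gold, pred) if p in POSITIVE and g != p)
    fn = sum(1 for g, p in zip(gold, pred) if g in POSITIVE and g != p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = ["false", "effect", "mechanism", "advise", "int", "false"]
pred = ["false", "effect", "effect", "advise", "false", "effect"]
print(overall_prf(gold, pred))   # (0.5, 0.5, 0.5)
```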
In addition, we evaluate the captured feature vectors with t-SNE BIBREF26 , an intuitive way to visualize high dimensional vectors in a 2- or 3-dimensional space. If the points are easy to separate in the low dimensional space, the feature vectors are considered more distinguishable.
Hyperparameter Settings and Training
We use TensorFlow BIBREF27 r0.11 to implement the proposed model. The input of each word is an ordered triple (word, relative distance from drug1, relative distance from drug2). The sentence, which is represented as a matrix, is fed to the model. The output of the model is a INLINEFORM0 -dimensional vector representing the probabilities of being corresponding DDI. It is the network, parameters, and hyperparameters which decides the output vector. The network's parameters are adjusted during training, where the hyperparameters are tuned by hand. The hyperparameters after tuning are as follows. The word embedding's dimension INLINEFORM1 , the position embedding's dimension INLINEFORM2 , the hidden state's dimension INLINEFORM3 , the probability of dropout INLINEFORM4 , other hyperparameters which are not shown here are set to TensorFlow's default values.
The word embedding is initialized by pre-trained word vectors using GloVe BIBREF28 , while other parameters are initialized randomly. During each training step, a mini-batch (the mini-batch size INLINEFORM0 in our implementation) of sentences is selected from training set. The gradient of objective function is calculated for parameters updating (See Section SECREF26 ).
Figure FIGREF32 shows the training process. The objective function INLINEFORM0 declines as the training mini-batches are continuously fed to the model. For the testing mini-batches, the INLINEFORM1 function fluctuates while its overall trend is descending. The instances in the testing set do not participate in training, so the INLINEFORM2 function does not descend as fast. However, training and testing instances have similar distributions in the sample space, so the testing instances' INLINEFORM3 tends to become smaller along with the training process. INLINEFORM4 has an inverse relationship with the performance measurement. The F1 score fluctuates around a specific value after enough training steps. The fluctuation range is considerable because the F1 score is calculated on only a tiny part of the whole training or testing set at each step. Testing the whole set at every step is time consuming and not necessary. We will evaluate the model on the whole testing set in Section SECREF47 .
Experimental Results
We save our model every 100 steps and predict all the DDIs of the instances in the testing set. These predictions' F1 scores are shown in figure FIGREF40 . To demonstrate that the sentence level attention layer is effective, we drop this layer and then directly use INLINEFORM0 for softmax classification (see figure FIGREF15 ). The result is shown with the “RNN + dynamic word embedding + ATT” curve, which illustrates that the sentence level attention layer contributes to a more accurate model.
Whether a dynamic or a static word embedding is better for a DDI extraction task is worth considering. Nguyen et al. BIBREF21 show that updating the word embedding while the other parameters are being trained yields better performance on the relation extraction task. We keep the embedding static during training, while all other conditions remain the same. The “RNN + static word embedding + 2ATT” curve shows this case. We can draw the conclusion that updating the initialized word embedding trains word vectors that are more suitable for the task, which improves the performance.
We compare our best F1 score with other state-of-the-art approaches in table TABREF39 , which shows our model has a competitive advantage in dealing with drug-drug interaction extraction. The prediction confusion matrix is shown in table TABREF46 . Most of the classification errors come from DDIs other than false being classified as false. The model may perform better if a classifier that can tell true and false DDIs apart is trained first. We leave this two-stage classifier to our future work. Another phenomenon is that the “Int” type is often classified as “Effect”. An “Int” sentence states that an interaction exists between two drugs, and this information implies that the two drugs' combination will have a good or bad effect. That is the reason why “Int” and “Effect” are often confused.
To evaluate the features our model captured, we employ scikit-learn BIBREF29 's t-SNE class to map high dimensional feature vectors to 2-dimensional vectors, which can be depicted on a plane. We depict all the features of the instances in the testing set, as shown in figure FIGREF41 . The RNN model using dynamic word embedding and 2 layers of attention is the most distinguishable one. Unfortunately, the classifier cannot classify all the instances into the correct classes. Comparing table TABREF46 with figure UID44 , both of which are from the best performing model, we can draw some conclusions. The “Int” DDIs are often misclassified as “Effect”, because some of the “Int” points lie in the “Effect” cluster. The “Effect” points are scattered, so many “Effect” DDIs are classified as other types. The “Mechanism” points gather around two clusters, so most of the “Mechanism” DDIs are classified into two types: “False” and “Mechanism”. In short, visualizing the feature mapping gives a better explanation of the prediction results and of the quality of the captured features.
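This visualisation step can be reproduced with a few lines of scikit-learn and matplotlib; the feature array and labels below are placeholders for the vectors extracted from the trained model.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

features = np.random.randn(500, 256)          # captured feature vectors (placeholder)
labels = np.random.randint(0, 5, size=500)    # the five DDI types (placeholder)

points = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)

for ddi_type in range(5):
    mask = labels == ddi_type
    plt.scatter(points[mask, 0], points[mask, 1], s=5, label=f"type {ddi_type}")
plt.legend()
plt.title("t-SNE of captured feature vectors")
plt.savefig("tsne_features.png")
```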
Conclusion and Future Work
To conclude, we propose a recurrent neural network with multiple attention layers to extract DDIs from biomedical text. The sentence level attention layer, which combines other sentences containing the same drugs, has been added to our model. The experiments show that our model outperforms the state-of-the-art DDI extraction systems. Task relevant word embeddings and the two attention layers improved the performance to some extent.
The imbalance of the classes and the ambiguity of semantics cause most of the misclassifications. We consider that instance generation using generative adversarial networks could address the instance shortage in specific categories. It is also reasonable to use distant supervision learning (which utilizes other relevant material) to supplement knowledge and obtain a better performing DDI extraction system.
Acknowledgment
This work is supported by the NSFC under Grant 61303191, 61303190, 61402504, 61103015. | contains thousands of XML files, each of which are constructed by several records |
1a69696034f70fb76cd7bb30494b2f5ab97e134d | 1a69696034f70fb76cd7bb30494b2f5ab97e134d_0 | Q: By how much does their model outperform existing methods?
Text: Introduction
Drug-drug interaction (DDI) is a situation when one drug increases or decreases the effect of another drug BIBREF0 . Adverse drug reactions may cause severe side effects if two or more medicines are taken and their DDIs have not been investigated in detail. DDI is a common cause of illness, and even a cause of death BIBREF1 . Thus, DDI databases for clinical medication decisions have been proposed by some researchers. These databases, such as SFINX BIBREF2 , KEGG BIBREF3 , and CredibleMeds BIBREF4 , help physicians and pharmacists avoid most adverse drug reactions.
Traditional DDI databases are manually constructed according to clinical records, scientific research and drug specifications. For instance, the sentence “With combined use, clinicians should be aware, when phenytoin is added, of the potential for reexacerbation of pulmonary symptomatology due to lowered serum theophylline concentrations BIBREF5 ”, which is from a pharmacotherapy report, describes the side effect of the combined use of phenytoin and theophylline. Such information on specific medicines is then added to DDI databases. As more drug-drug interactions are discovered, manually constructing DDI databases would consume a great deal of manpower and resources.
There have been many efforts to automatically extract DDIs from natural language BIBREF0 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , mainly from medical literature and clinical records. These works can be divided into the following categories:
To avoid complex feature engineering and the use of NLP toolkits, we employ deep learning approaches for sentence comprehension as a whole. Our model takes in a sentence from the biomedical literature which contains a drug pair and outputs the kind of DDI this drug pair belongs to. This helps physicians refrain from improper combined use of drugs. In addition, word and sentence level attentions are introduced to our model for better DDI predictions.
We train our language comprehension model with labeled instances. Figure FIGREF5 shows partial records in the DDI corpus BIBREF16 . We extract the sentence and drug pairs in the records. There are 3 drug pairs in this example, so we have 3 instances. The DDI corpus annotates each drug pair in the sentence with a DDI type. The DDI type, which is the information of most concern, is described in table TABREF4 . The details about how we train our model and extract the DDI type from text are described in the remaining sections.
Related Work
In the DDI extraction task, most existing work proposes NLP methods or machine learning approaches. Chowdhury BIBREF14 and Thomas et al. BIBREF11 proposed methods that use linguistic phenomena and a two-stage SVM to classify DDIs. FBK-irst BIBREF10 is a follow-on work which applies a kernel method to the existing model and outperforms it.
Neural network based approaches have been proposed by several works. Liu et al. BIBREF9 employ CNN for DDI extraction for the first time which outperforms the traditional machine learning based methods. Limited by the convolutional kernel size, the CNN can only extracted features of continuous 3 to 5 words rather than distant words. Liu et al. BIBREF8 proposed dependency-based CNN to handle distant but relevant words. Sahu et al. BIBREF12 proposed LSTM based DDI extraction approach and outperforms CNN based approach, since LSTM handles sentence as a sequence instead of slide windows. To conclude, Neural network based approaches have advantages of 1) less reliance on extra NLP toolkits, 2) simpler preprocessing procedure, 3) better performance than text analysis and machine learning methods.
Drug-drug interaction extraction is a relation extraction task of natural language processing. Relation extraction aims to determine the relation between two given entities in a sentence. In recent years, attention mechanism and various neural networks are applied to relation extraction BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 . Convolutional deep neural network are utilized for extracting sentence level features in BIBREF19 . Then the sentence level features are concatenated with lexical level features, which are obtained by NLP toolkit WordNet BIBREF22 , followed by a multilayer perceptron (MLP) to classify the entities' relation. A fixed work is proposed by Nguyen et al. BIBREF21 . The convolutional kernel is set various size to capture more n-gram features. In addition, the word and position embedding are trained automatically instead of keeping constant as in BIBREF19 . Wang et al. BIBREF20 introduce multi-level attention mechanism to CNN in order to emphasize the keywords and ignore the non-critical words during relation detection. The attention CNN model outperforms previous state-of-the-art methods.
Besides CNN, Recurrent neural network (RNN) has been applied to relation extraction as well. Zhang et al. BIBREF18 utilize long short-term memory network (LSTM), a typical RNN model, to represent sentence. The bidirectional LSTM chronologically captures the previous and future information, after which a pooling layer and MLP have been set to extract feature and classify the relation. Attention mechanism is added to bidirectional LSTM in BIBREF17 for relation extraction. An attention layer gives each memory cell a weight so that classifier can catch the principal feature for the relation detection. The Attention based bidirectional LSTM has been proven better than previous work.
Proposed Model
In this section, we present our bidirectional recurrent neural network with multiple attention layer model. The overview of our architecture is shown in figure FIGREF15 . For a given instance, which describes the details about two or more drugs, the model represents each word as a vector in embedding layer. Then the bidirectional RNN layer generates a sentence matrix, each column vector in which is the semantic representation of the corresponding word. The word level attention layer transforms the sentence matrix to vector representation. Then sentence level attention layer generates final representation for the instance by combining several relevant sentences in view of the fact that these sentences have the same drug pair. Followed by a softmax classifier, the model classifies the drug pair in the given instance as specific DDI.
Preprocessing
The DDI corpus contains thousands of XML files, each of which is constructed from several records. For a sentence containing INLINEFORM0 drugs, there are INLINEFORM1 drug pairs. We replace the two drugs of interest with “drug1” and “drug2” while the other drugs are replaced by “drug0”, as BIBREF9 did. This step is called drug blinding. For example, the sentence in figure FIGREF5 generates 3 instances after drug blinding: “drug1: an increased risk of hepatitis has been reported to result from combined use of drug2 and drug0”, “drug1: an increased risk of hepatitis has been reported to result from combined use of drug0 and drug2”, “drug0: an increased risk of hepatitis has been reported to result from combined use of drug1 and drug2”. The drug blinded sentences are the instances that are fed to our model.
We put the sentences with the same drug pairs together as a set, since the sentence level attention layer (will be described in Section SECREF21 ) will use the sentences which contain the same drugs.
Embedding Layer
Given an instance INLINEFORM0 which contains specified two drugs INLINEFORM1 , INLINEFORM2 , each word is embedded in a INLINEFORM3 dimensional space ( INLINEFORM4 , INLINEFORM5 are the dimension of word embedding and position embedding). The look up table function INLINEFORM6 maps a word or a relative position to a column vector. After embedding layer the sentence is represented by INLINEFORM7 , where DISPLAYFORM0
The INLINEFORM0 function is usually implemented with matrix-vector product. Let INLINEFORM1 , INLINEFORM2 denote the one-hot representation (column vector) of word and relative distance. INLINEFORM3 , INLINEFORM4 are word and position embedding query matrix. The look up functions are implemented by DISPLAYFORM0
Then the word sequence INLINEFORM0 is fed to the RNN layer. Note that the sentence will be filled with INLINEFORM1 if its length is less than INLINEFORM2 .
Bidirectional RNN Encoding Layer
The words in the sequence are read by RNN's gated recurrent unit (GRU) one by one. The GRU takes the current word INLINEFORM0 and the previous GRU's hidden state INLINEFORM1 as input. The current GRU encodes INLINEFORM2 and INLINEFORM3 into a new hidden state INLINEFORM4 (its dimension is INLINEFORM5 , a hyperparameter), which can be regarded as informations the GRU remembered.
Figure FIGREF25 shows the details in GRU. The reset gate INLINEFORM0 selectively forgets informations delivered by previous GRU. Then the hidden state becomes INLINEFORM1 . The update gate INLINEFORM2 updates the informations according to INLINEFORM3 and INLINEFORM4 . The equations below describe these procedures. Note that INLINEFORM5 stands for element wise multiplication. DISPLAYFORM0 DISPLAYFORM1
The bidirectional RNN contains forward RNN and backward RNN. Forward RNN reads sentence from INLINEFORM0 to INLINEFORM1 , generating INLINEFORM2 , INLINEFORM3 , ..., INLINEFORM4 . Backward RNN reads sentence from INLINEFORM5 to INLINEFORM6 , generating INLINEFORM7 , INLINEFORM8 , ..., INLINEFORM9 . Then the encode result of this layer is DISPLAYFORM0
We apply the dropout technique in the RNN layer to avoid overfitting. Each GRU has a probability (denoted by INLINEFORM0 , also a hyperparameter) of being dropped. A dropped GRU has no output and will not affect the subsequent GRUs. With the bidirectional RNN and the dropout technique, the input INLINEFORM1 is encoded into the sentence matrix INLINEFORM2 .
Word Level Attention
The purpose of word level attention layer is to extract sentence representation (also known as feature vector) from encoded matrix. We use word level attention instead of max pooling, since attention mechanism can determine the importance of individual encoded word in each row of INLINEFORM0 . Let INLINEFORM1 denotes the attention vector (column vector), INLINEFORM2 denotes the filter that gives each element in the row of INLINEFORM3 a weight. The following equations shows the attention operation, which is also illustrated in figure FIGREF15 . DISPLAYFORM0 DISPLAYFORM1
The softmax function takes a vector INLINEFORM0 as input and outputs a vector, DISPLAYFORM0
INLINEFORM0 denotes the feature vector captured by this layer. Several approaches BIBREF12 , BIBREF17 use this vector and softmax classifier for classification. Inspired by BIBREF23 we propose the sentence level attention to combine the information of other sentences for a improved DDI classification.
Sentence Level Attention
The previous layers capture features only from the given sentence. However, other sentences may contain information that contributes to the understanding of this sentence. It is reasonable to look at other relevant instances when determining two drugs' interaction from the given sentence. In our implementation, the instances that have the same drug pair are considered relevant. The relevant instance set is denoted by INLINEFORM0 , where INLINEFORM1 is the sentence feature vector. INLINEFORM2 stands for how well the instance INLINEFORM3 matches its DDI INLINEFORM4 (the vector representation of a specific DDI). INLINEFORM5 is a diagonal attention matrix, multiplied by which the feature vector INLINEFORM6 can concentrate on the most representative features. DISPLAYFORM0 DISPLAYFORM1
INLINEFORM0 is the softmax result of INLINEFORM1 . The final sentence representation is decided by all of the relevant sentences' feature vector, as Equation EQREF24 shows. DISPLAYFORM0
Note that the set INLINEFORM0 is gradually growing as new sentence with the same drugs pairs is found when training. An instance INLINEFORM1 is represented by INLINEFORM2 before sentence level attention. The sentence level attention layer finds the set INLINEFORM3 , instances in which have the same drug pair as in INLINEFORM4 , and put INLINEFORM5 in INLINEFORM6 . Then the final sentence representation INLINEFORM7 is calculated in this layer.
Classification and Training
A given sentence INLINEFORM0 is finally represented by the feature vector INLINEFORM1 . We then feed it to a softmax classifier. Let INLINEFORM2 denote the set of all DDI types. The output INLINEFORM3 gives the probability that the instance belongs to each class INLINEFORM4 . DISPLAYFORM0
We use the cross entropy cost function and INLINEFORM0 regularization as the optimization objective. For the INLINEFORM1 -th instance, INLINEFORM2 denotes the one-hot representation of its label, while the model outputs INLINEFORM3 . The cross entropy cost is: DISPLAYFORM0
For a mini-batch INLINEFORM0 , the optimization objective is: DISPLAYFORM0
All parameters in this model are: DISPLAYFORM0
We optimize the parameters of the objective function INLINEFORM0 with Adam BIBREF24 , which is a variant of mini-batch stochastic gradient descent. During each training step, the gradient of INLINEFORM1 is calculated, and INLINEFORM2 is then adjusted according to the gradient. At the end of training, we have a model that is able to predict two drugs' interactions when a sentence about these drugs is given.
DDI Prediction
The model is trained for DDI classification. The parameters in list INLINEFORM0 are tuned during the training process. Given a new sentence with two drugs, we can use this model to classify the DDI type.
The DDI prediction follows the procedure described in Section SECREF6 - SECREF26 . The given sentence is eventually represented by feature vector INLINEFORM0 . Then INLINEFORM1 is classified to a specific DDI type with a softmax classifier. In next section, we will evaluate our model's DDI prediction performance and see the advantages and shortcomings of our model.
Datasets and Evaluation Metrics
We use the DDI corpus of the 2013 DDIExtraction challenge BIBREF16 to train and test our model. The DDIs in this corpus are classified as five types. We give the definitions of these types and their example sentences, as shown in table TABREF4 . This standard dataset is made up of training set and testing set. We use the same metrics as in other drug-drug interaction extraction literature BIBREF11 , BIBREF10 , BIBREF25 , BIBREF9 , BIBREF8 , BIBREF12 : the overall precision, recall, and F1 score on testing set. INLINEFORM0 denotes the set of {False, Mechanism, Effect, Advise, Int}. The precision and recall of each INLINEFORM1 are calculated by DISPLAYFORM0 DISPLAYFORM1
Then the overall precision, recall, and F1 score are calculated by DISPLAYFORM0
Besides, we evaluate the captured feature vectors with t-SNE BIBREF26 , a visualizing and intuitive way to map a high dimensional vector into a 2 or 3-dimensional space. If the points in a low dimensional space are easy to be split, the feature vectors are believed to be more distinguishable.
Hyperparameter Settings and Training
We use TensorFlow BIBREF27 r0.11 to implement the proposed model. The input of each word is an ordered triple (word, relative distance from drug1, relative distance from drug2). The sentence, which is represented as a matrix, is fed to the model. The output of the model is a INLINEFORM0 -dimensional vector representing the probabilities of being corresponding DDI. It is the network, parameters, and hyperparameters which decides the output vector. The network's parameters are adjusted during training, where the hyperparameters are tuned by hand. The hyperparameters after tuning are as follows. The word embedding's dimension INLINEFORM1 , the position embedding's dimension INLINEFORM2 , the hidden state's dimension INLINEFORM3 , the probability of dropout INLINEFORM4 , other hyperparameters which are not shown here are set to TensorFlow's default values.
The word embedding is initialized by pre-trained word vectors using GloVe BIBREF28 , while other parameters are initialized randomly. During each training step, a mini-batch (the mini-batch size INLINEFORM0 in our implementation) of sentences is selected from training set. The gradient of objective function is calculated for parameters updating (See Section SECREF26 ).
Figure FIGREF32 shows the training process. The objective function INLINEFORM0 is declining as the training mini-batches continuously sent to the model. As the testing mini-batches, the INLINEFORM1 function is fluctuating while its overall trend is descending. The instances in testing set are not participated in training so that INLINEFORM2 function is not descending so fast. However, training and testing instances have similar distribution in sample space, causing that testing instances' INLINEFORM3 tends to be smaller along with the training process. INLINEFORM4 has inverse relationship with the performance measurement. The F1 score is getting fluctuating around a specific value after enough training steps. The reason why fluctuating range is considerable is that only a tiny part of the whole training or testing set has been calculated the F1 score. Testing the whole set during every step is time consuming and not necessary. We will evaluate the model on the whole testing set in Section SECREF47 .
Experimental Results
We save our model every 100 step and predict all the DDIs of the instances in the testing set. These predictions' F1 score is shown in figure FIGREF40 . To demonstrate the sentence level attention layer is effective, we drop this layer and then directly use INLINEFORM0 for softmax classification (See figure FIGREF15 ). The result is shown with “RNN + dynamic word embedding + ATT” curve, which illustrates that the sentence level attention layer contributes to a more accurate model.
Whether a dynamic or static word embedding is better for a DDI extraction task is under consideration. Nguyen et al. BIBREF21 shows that updating word embedding at the time of other parameters being trained makes a better performance in relation extraction task. We let the embedding be static when training, while other conditions are all the same. The “RNN + static word embedding + 2ATT” curve shows this case. We can draw a conclusion that updating the initialized word embedding trains more suitable word vectors for the task, which promotes the performance.
We compare our best F1 score with other state-of-the-art approaches in table TABREF39 , which shows our model has a competitive advantage in dealing with drug-drug interaction extraction. The prediction confusion matrix is shown in table TABREF46 . Most of the classification errors come from DDIs other than false being classified as false. The model may perform better if a classifier that can tell true and false DDIs apart is trained first. We leave this two-stage classifier to our future work. Another phenomenon is that the “Int” type is often classified as “Effect”. An “Int” sentence states that an interaction exists between two drugs, and this information implies that the two drugs' combination will have a good or bad effect. That is the reason why “Int” and “Effect” are often confused.
To evaluate the features our model captured, we employ scikit-learn BIBREF29 's t-SNE class to map high dimensional feature vectors to 2-dimensional vectors, which can be depicted on a plane. We depict all the features of the instances in testing set, as shown in figure FIGREF41 . The RNN model using dynamic word embedding and 2 layers of attention is the most distinguishable one. Unfortunately, the classifier can not classify all the instances into correct classes. Comparing table TABREF46 with figure UID44 , both of which are from the best performed model, we can observe some conclusions. The “Int” DDIs are often misclassified as “Effect”, for the reason that some of the “Int” points are in the “Effect” cluster. The “Effect” points are too scattered so that plenty of “Effect” DDIs are classified to other types. The “Mechanism” points are gathered around two clusters, causing that most of the “mechanism” DDIs are classified to two types: “False” and “Mechanism”. In short, the visualizability of feature mapping gives better explanations for the prediction results and the quality of captured features.
Conclusion and Future Work
To conclude, we propose a recurrent neural network with multiple attention layers to extract DDIs from biomedical text. The sentence level attention layer, which combines other sentences containing the same drugs, has been added to our model. The experiments show that our model outperforms the state-of-the-art DDI extraction systems. Task relevant word embeddings and the two attention layers improved the performance to some extent.
The imbalance of the classes and the ambiguity of semantics cause most of the misclassifications. We consider that instance generation using generative adversarial networks could cover the instance shortage in specific categories. It is also reasonable to use distant supervision learning (which utilizes other relevant material) to supplement knowledge and obtain a better-performing DDI extraction system.
Acknowledgment
This work is supported by the NSFC under Grant 61303191, 61303190, 61402504, 61103015. | Answer with content missing: (Table II) Proposed model has F1 score of 0.7220 compared to 0.7148, the best state-of-the-art result. |
9a596bd3a1b504601d49c2bec92d1592d7635042 | 9a596bd3a1b504601d49c2bec92d1592d7635042_0 | Q: What is the performance of their model?
| Answer with content missing: (Table II) Proposed model has F1 score of 0.7220. |
1ba28338d3f993674a19d2ee2ec35447e361505b | 1ba28338d3f993674a19d2ee2ec35447e361505b_0 | Q: What are the existing methods mentioned in the paper?
| Chowdhury BIBREF14 and Thomas et al. BIBREF11, FBK-irst BIBREF10, Liu et al. BIBREF9, Sahu et al. BIBREF12 |
8ec94313ea908b6462e1f5ee809a977a7b6bdf01 | 8ec94313ea908b6462e1f5ee809a977a7b6bdf01_0 | Q: Does having constrained neural units imply word meanings are fixed across different context?
Text: Introduction
Studies of Broca's and Wernicke's aphasia provide evidence that our brains understand an utterance by creating separate representations for word meaning and word arrangement BIBREF0. There is a related thesis about human language, present across many theories of semantics, which is that syntactic categories are partially agnostic to the identity of words BIBREF1. This regularity in how humans derive meaning from an utterance is applicable to the task of natural language translation. This is because, by definition, translation necessitates the creation of a meaning representation for an input. According to the cognitive and neural imperative, we introduce new units to regularize an artificial neural encoder and decoder BIBREF2. These are called the Lexicon and Lexicon-Adversary units (collectively, LLA). Tests are done on a diagnostic task, and naturalistic tasks including semantic parsing, syntactic parsing, and English to Mandarin Chinese translation. We evaluate a Long Short-Term Memory (LSTM) BIBREF3 encoder and decoder, with and without the LLA units, and show that the LLA version achieves superior translation performance. In addition, we examine our model's weights, and its performance when some of its neurons are damaged. We find that the model exhibits the knowledge and the lack thereof expected of a Broca's aphasic BIBREF0 when one module's weights are corrupted. It also exhibits that expected of a Wernicke's aphasic BIBREF0 when another module's weights are corrupted.
Neural Motivation
BIBREF0 showed that Broca's aphasics were able to understand “the apple that the boy is eating is red” with significantly higher accuracy than “the cow that the monkey is scaring is yellow,” along with similar pairs. The critical difference between these sentences is that, due to semantic constraints from the words, the first can be understood if it is presented as a set of words. The second cannot. This experiment provides evidence that the rest of the language neurons in the brain (largely Wernicke's area) can yield an understanding of word meanings but not how words are arranged. This also suggests that Broca's area builds a representation of the syntax.
In the same study, Wernicke's aphasics performed poorly regardless of the sentence type. This provides evidence that Broca's area cannot yield an understanding of word meanings.
Taken together, the two experiments support the theory that Broca's area creates a representation of the syntax without encoding complete word meanings. These other lexical aspects are represented separately in Wernicke's area, which does not encode syntax.
Cognitive Motivation
A tenet of generative grammar theories is that different words can share the same syntactic category BIBREF1. It is possible, for example, to know that the syntax for an utterance is a noun phrase that is composed of a determiner and a noun, followed by a verb phrase that is composed of a verb. One can know this without knowing the words. This also means that there are aspects of a word's meaning that the syntax does not determine; by definition, these aspects are invariant to word arrangement.
Model
In a natural language translation setting, suppose that an input word corresponds to a set of output tokens independently of its context. Even though this information might be useful to determine the syntax of the input utterance in the first place, the syntax does not determine this knowledge at all (by supposition). So, we can impose the constraint that our model's representation of the input's syntax cannot contain this context-invariant information. This regularization is strictly preferable to allowing all aspects of word meaning to propagate into the input's syntax representation. Without such a constraint, all inputs could, in principle, be given their own syntactic categories. This scenario is refuted by cognitive and neural theories. We incorporate the regularization with neural units that can separate representations of word meaning and arrangement.
With the exception of the equations that we list below, the encoding and decoding follow standard paradigms BIBREF2. The input at a time step to the LSTM encoder is a vector embedding for the input token. The final hidden and cell states of the encoder are the starting hidden and cell states of the LSTM decoder. The decoder does not take tokens as inputs; it decodes by relying solely on its hidden and cell states. The $t$th output, $o_t$, from the decoder is Softmax$(W(h_t))$, where $W$ is a fully connected layer and $h_t$ is the decoder's $t$th hidden state. $o_t$ has the length of the output dictionary. $o_t$'s index with the highest value corresponds to the token choice. The encoder and decoder's weights are optimized with the negative log likelihood loss. The inputs to the loss function are the log of the model's output and the ground-truth at each time step. Below, we describe our modifications.
$l = \sigma (\vee (w_1, w_2, \ldots , w_m))$
$l_a = \sigma (W_{a_2}(\textrm {ReLU}(W_{a_1}(\textrm {GradReverse}(h_e \frown c_e)))))$
$o^{\prime }_t = l \odot o_t$
Where:
$m$ is the number of input tokens.
$w_i$ is a vector embedding for the $i$th input token, and its length is the length of the output dictionary. It is not the same embedding used by the encoder LSTM.
$\sigma $ is the Sigmoid function.
$\vee $ is the max pooling of a sequence of vectors of the same length. The weight at the output vector's $i$th index is the max of all input vectors' weights at their $i$th indices.
$h_e$ and $c_e$ are the final hidden and cell states of the encoder.
$W_{a_1}$ and $W_{a_2}$ are fully connected layers.
$\frown $ is concatenation.
$\odot $ is the elementwise product.
GradReverse multiplies the gradient by a negative number upon backpropagation.
$l$ is the output of the Lexicon Unit. Due to the max pooling, only one input token can be responsible for the value at a particular index of the output vector. The weights, $w_i$, are optimized solely by computing the binary cross entropy (BCE) loss between $l$ and the indicator vector where the $k$th element is 1 if the $k$th token in the output dictionary is in the output and 0 otherwise. This procedure forces a $w_i$ to represent the output tokens that are associated with its respective input token, without relying on aggregated contributions from the presence of several input tokens, and independently of the input word order.
$l_a$ is the output of the Lexicon-Adversary Unit. Its weights are optimized according to the BCE loss with $l$ as the target. This means that $l_a$ is the Lexicon-Adversary Unit's approximation of $l$. Because $h_e$ and $c_e$ are passed through a gradient reversal layer, the LSTM encoder is regularized to produce a representation that does not include information from $l$. Consequently, the LSTM decoder does not have this information either.
$o^{\prime }_t$ is the $t$th output of our model. It can be converted to a token by finding the index with the highest weight. It is the result of combining $l$ via an elementwise product with the information from the regularized decoder.
The recurrent encoder and decoder are the only modules that can represent the syntax, but they lack the expressivity to encode all potential aspects of word meaning. So, they are not always capable of producing a theoretically denied representation by giving all words their own syntactic category. The Lexicon Unit can represent these missing lexical aspects, but it lacks the expressivity to represent the syntax. See Figure FIGREF3 for the model.
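The PyTorch sketch below illustrates the three LLA equations above. It is not the authors' implementation: the class and variable names are my own, the embedding is made sparse only so that a sparse optimizer can be used later, and the gradient-reversal coefficient of 0.0001 is taken from the hyperparameter settings reported in the Experiments section.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lam on the way back."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class LexiconUnit(nn.Module):
    """l = sigmoid(max-pool(w_1 ... w_m)): context-invariant output-token associations."""
    def __init__(self, in_vocab, out_vocab):
        super().__init__()
        # One vector of output-dictionary length per input token; sparse gradients
        # so the unit can be trained with a sparse optimizer.
        self.w = nn.Embedding(in_vocab, out_vocab, sparse=True)

    def forward(self, tokens):                                   # tokens: (batch, m)
        return torch.sigmoid(self.w(tokens).max(dim=1).values)   # (batch, out_vocab)

class LexiconAdversary(nn.Module):
    """l_a: predicts l from the gradient-reversed encoder states h_e and c_e."""
    def __init__(self, state_dim, out_vocab, hidden=1000, lam=0.0001):
        super().__init__()
        self.lam = lam
        self.mlp = nn.Sequential(nn.Linear(2 * state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_vocab), nn.Sigmoid())

    def forward(self, h_e, c_e):                                 # each: (batch, state_dim)
        reversed_state = GradReverse.apply(torch.cat([h_e, c_e], dim=-1), self.lam)
        return self.mlp(reversed_state)

# At each decoding step, the model output is the elementwise product o'_t = l * o_t,
# where o_t = softmax(W(h_t)) comes from the regularized LSTM decoder.
```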
Experiments
We used BIBREF4's small diagnostic, the Geoquery semantic parsing dataset BIBREF5, the Wall Street Journal syntactic parsing dataset of sentences up to length 10 BIBREF6, and the Tatoeba BIBREF7 English to Chinese translation dataset processed by BIBREF8.
To avoid the biases that can be introduced with hyperparameter tuning, we used the same hyperparameters with every model on every domain. These were chosen arbitrarily and kept after they enabled all models to reach a similar train accuracy (typically, close to 100 percent) and after they enabled all models to achieve a peak validation performance and then gradually yield worse validation scores. The hyperparameters are as follows: LSTM hidden size = 300, Lexicon Unit batch size = 1, batch size for other modules = 30, epoch to stop training the Lexicon Unit and start training other modules = 30, epoch to stop training = 1000, and Lexicon-Adversary Unit hidden size = 1000. The optimizer used for the Lexicon Unit was a sparse implementation of Adam BIBREF9 with a learning rate of 0.1 and otherwise the default PyTorch settings BIBREF10. In the other cases it was Adam BIBREF9 with the default PyTorch settings BIBREF10. The gradient through the encoder from the adversary's gradient reversal layer is multiplied by -0.0001. Additionally, the validation score is calculated after each train epoch and the model with the best score is tested. To choose which Lexicon Unit to use, we measure its loss (BCE) on the validation set. Unless otherwise stated, we use the mean number of exact matches as the validation metric for the full model.
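A sketch of this optimization schedule is given below, under the assumption that the model is built from modules like the ones sketched earlier. The data loaders and module objects are hypothetical names; the optimizer choices, learning rate, and epoch cutoffs follow the text, while the exact loss wiring in phase 2 is only an illustration.

```python
import torch

bce = torch.nn.BCELoss()
nll = torch.nn.NLLLoss()

# Phase 1 (epochs 0-29): train only the Lexicon Unit, one instance per batch,
# against the indicator vector of output tokens, with a sparse Adam optimizer.
lex_opt = torch.optim.SparseAdam(lexicon.parameters(), lr=0.1)
for epoch in range(30):
    for tokens, indicator in lexicon_loader:              # hypothetical loader, batch size 1
        lex_opt.zero_grad()
        bce(lexicon(tokens), indicator).backward()
        lex_opt.step()

# Phase 2 (epochs 30-999): stop updating the Lexicon Unit and train the encoder,
# decoder, and Lexicon-Adversary Unit with default Adam settings.
main_opt = torch.optim.Adam(list(encoder.parameters()) +
                            list(decoder.parameters()) +
                            list(adversary.parameters()))
for epoch in range(30, 1000):
    for tokens, target_tokens in seq_loader:              # hypothetical loader, batch size 30
        main_opt.zero_grad()
        l, log_probs, l_a = model(tokens)                 # hypothetical full forward pass
        loss = nll(log_probs.transpose(1, 2), target_tokens) + bce(l_a, l.detach())
        loss.backward()
        main_opt.step()
```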
To judge overall translation performance, we compared the LLA-LSTM encoder and decoder with the standard LSTM encoder and decoder. We also compared our model with one that does not have the adversary but is otherwise identical. The LLA-LSTM model shows improvements over the standard model on many or all of the metrics for every naturalistic domain. Many of the improvements over the other models are several percentage points. In the few scenarios where the LLA-LSTM model does not improve upon the standard model, the discrepancy between the models is small. The discrepancy is also small when the LLA-LSTM model with no adversary performs better than the LLA-LSTM model. Table TABREF4 displays the test results across the domains.
Additionally, we provide evidence that the model learns knowledge of a separation between syntax and the lexicon that is similar to that of a human. Figure FIGREF6 displays the learned $\sigma (w)$ embeddings for some input words, across the domains. To avoid cherry-picking the results, we chose the input words arbitrarily, subject to the following constraint. We considered each word to typically have a different syntactic category than the other choices from that domain. This constraint was used to present a diverse selection of words. Table TABREF5 displays the output behavior of models that we damaged to resemble the damage that causes aphasia. To avoid cherry-picking the results, we arbitrarily chose an input for each domain, subject to the following constraint. The input is not in the train set and the undamaged LLA-LSTM model produces a translation that we judge to be correct. For all inputs that we chose, damage to the analog of Broca's area (the LSTMs) results in an output that describes content only if it is described by the input. However, the output does not show understanding of the input's syntax. In the naturalistic domains, damage to the analog of Wernicke's area (the Lexicon Unit) results in an output with incorrect content that would be acceptable if the input had different words but the same syntax. These knowledge distortions are precisely those that are expected in the respective human aphasics BIBREF0. We also provide corpus-level results from the damaged models by presenting mean precision on the test sets. Because the output languages in all of our domains use tokens to represent meanings in many cases, it is expected that the analog to Wernicke's area is responsible for maintaining a high precision.
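The excerpt does not specify how the modules are damaged, so the following is only one plausible way to run such a lesion test: zero out a random fraction of the weights in the targeted module (the Lexicon Unit for Wernicke-like damage, the LSTMs for Broca-like damage), then re-run translation and measure precision. All object names here are hypothetical.

```python
import torch

def lesion(module, fraction=0.5, seed=0):
    """Zero out a random fraction of every weight tensor in `module`, in place."""
    gen = torch.Generator().manual_seed(seed)
    with torch.no_grad():
        for param in module.parameters():
            mask = torch.rand(param.shape, generator=gen) < fraction
            param[mask] = 0.0

# Wernicke-like damage: corrupt the Lexicon Unit, then decode the test set again.
lesion(lexicon)

# Broca-like damage: corrupt the recurrent encoder and decoder instead.
# lesion(encoder); lesion(decoder)
```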
Experiments ::: Sequences of Color
The first experiment by BIBREF4 included a dataset of 14 training pairs and 10 test pairs. In the dataset, an input is a sequence of words from an artificial language created by the authors. An output is a sequence of colored dots. Because the dataset is so small, we use the train set as the validation set. The input and output dictionaries are 7 and 4 words, respectively (not including the stop, “$<s>$,” token). In their paper, the authors argue that it is clear that the words have meanings. Four of the words correspond to unique output tokens, and three of them correspond to functions of the output tokens (for example, repeating the same dot three times). The dataset showcases the contrast between human and standard neural network responses. Their paper shows that humans had high accuracy on the test set, whereas standard neural models scored essentially zero exact matches.
The LLA-LSTM model that we tested appears to achieve only insignificantly higher results in Table TABREF4. However, it has learned, from just 14 training examples, how to map some of the words to BIBREF4's interpretation of their context-invariant meanings. This is shown in Figure FIGREF6 (a). In the figure, “dax,” “lug,” “wif,” and “zup” are interpreted correctly to mean “r,” “g,” “b,” and “y,” respectively. Here, the letters correspond to the types of unique dots, which are red, green, blue, and yellow, respectively. The other words, “fep,” “kiki,” and “blicket,” are taken by BIBREF4 to have functional meanings, and so are correctly not associated strongly with any of the output tokens. The exceptions are two erroneous associations between “kiki” and blue and “blicket” and green. Also, every sentence has a stop token, so the LLA units learned that the context-invariant meanings of each word include it. The LLA units can handle cases where a word corresponds to multiple output tokens, and the output tokens need not be monolithic in the output sequence. As shown in tests from all of the other domains, these output token correspondences may or may not be relevant depending on the specific context of a word, but the recurrent component of the architecture is capable of determining which to use.
Experiments ::: Semantic Parsing
Geoquery (GEO) is a dataset where an input is an English geography query and the corresponding output is a parse that a computer could use to look up the answer in a database BIBREF5. We used the standard test set of 250 pairs from BIBREF12 BIBREF12. The remaining data were randomly split into a validation set of 100 pairs and a train set of 539 pairs. We tokenized the input data by splitting on the words and removing punctuation. We tokenized the output data by removing commas and splitting on words, parentheses, and variables. There are 283 tokens in the input dictionary and 177 tokens in the output dictionary, respectively.
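A rough sketch of the tokenization described above; the regular expressions and the example parse are our own guesses at the format, not the authors' preprocessing scripts.

```python
import re

def tokenize_geo_query(query: str) -> list:
    # Input side: drop punctuation, split on whitespace.
    return re.sub(r"[^\w\s]", " ", query).split()

def tokenize_geo_parse(parse: str) -> list:
    # Output side: drop commas, then split into words, parentheses, and variables.
    return re.findall(r"[()]|[\w']+", parse.replace(",", " "))

# A hypothetical example in Prolog-style GEO notation:
# tokenize_geo_parse("answer(A,capital(A))")
# -> ['answer', '(', 'A', 'capital', '(', 'A', ')', ')']
```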
Figure FIGREF6 (b) shows some weights for four input words, which are all relevant to the inputs. Many of the weights correspond directly to the correct predicates. Other tokens have high weights because they are typically important to any parse. These are parentheses, variables (A, B, C, and D), the “answer” token, and the stop token.
Experiments ::: Syntactic Parsing
The Wall Street Journal portion of the Penn Treebank is a dataset where English sentences from The Wall Street Journal are paired with human-generated phrase parses BIBREF6. We use the test, validation, and train set from BIBREF13's BIBREF13 paper. For efficiency, we only use sentences that have 10 or fewer words, lowercase all words, and modify BIBREF13's output data so that left parentheses are paired with their corresponding nonterminal and right parentheses are paired with their corresponding terminal. The input and output data were both tokenized by splitting where there is a space. The test, validation, and train set are 398, 258, and 6007 pairs, respectively. There are 9243 tokens in the input dictionary and 9486 tokens in the output dictionary.
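One possible reading of the parenthesis-pairing step is sketched below; the exact output format the authors used may differ, so the function and its example are illustrative only.

```python
def relabel_parens(tree: str) -> list:
    """Fuse each '(' with the nonterminal that follows it and each ')' with the
    terminal that most recently preceded it (a guess at the format described above)."""
    raw = tree.replace("(", " ( ").replace(")", " ) ").lower().split()
    out, last_terminal = [], None
    i = 0
    while i < len(raw):
        if raw[i] == "(":
            out.append("(" + raw[i + 1])
            i += 2
        elif raw[i] == ")":
            out.append((last_terminal or "") + ")")
            i += 1
        else:
            last_terminal = raw[i]
            out.append(raw[i])
            i += 1
    return out

# relabel_parens("(S (NP (DT the) (NN dog)) (VP (VBZ barks)))")
# -> ['(s', '(np', '(dt', 'the', 'the)', '(nn', 'dog', 'dog)', 'dog)',
#     '(vp', '(vbz', 'barks', 'barks)', 'barks)', 'barks)']
```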
Figure FIGREF6 (c) shows some weights for four input words. They all highlight the relevant terminal, and syntactic categories that are usually associated with that word. The associated categories typically are either those of that word, the phrases headed by the category of that word, or those that select or are selected by that word. The relevant nonterminal terminology is as follows BIBREF6: “(in” is a preposition or subordinating conjunction, “(np” is a noun phrase, “(pp” is a prepositional phrase, “(np-subj” is a noun phrase with a surface subject marking, “(vp” is a verb phrase, “(vbn” is a verb in the past participle, “(adjp” is an adjective phrase, “(vbp” is a non-3rd person singular present verb, “(prp” is a personal pronoun, “(rb” is an adverb, “(sq” is the main clause of a wh-question, or it indicates an inverted yes or no question, and “(s” is the root.
Experiments ::: English to Chinese
The Tatoeba BIBREF7 English to Chinese translation dataset, processed by BIBREF8 BIBREF8, is a product of a crowdsourced effort to translate sentences of a user's choice into another language. The data were split randomly into a test, validation, and train set of 1500, 1500, and 18205 pairs, respectively. The English was tokenized by splitting on punctuation and words. The Chinese was tokenized by splitting on punctuation and characters. There are 6919 and 3434 tokens in the input and output dictionary, respectively. There are often many acceptable outputs when translating one natural language to another. As a result, we use the corpus-level BLEU score BIBREF11 to test models and score them on the validation set.
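Corpus-level BLEU BIBREF11 can be computed with off-the-shelf tooling; the sketch below uses NLTK's implementation, which is not necessarily what the authors used, and the sentence pair is only illustrative.

```python
from nltk.translate.bleu_score import corpus_bleu

# One illustrative hypothesis/reference pair; in practice there is one entry
# per test sentence, and each sentence may have several references.
references = [[list("我吃了一些鱼。")]]   # each reference is a list of tokens (characters here)
hypotheses = [list("我吃了一些鱼。")]
print(corpus_bleu(references, hypotheses))   # 1.0 for this identical pair
```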
Figure FIGREF6 (d) shows some weights for four input words. The listed Chinese words are an acceptable translation (depending on the context) and correspond roughly one-to-one with the English inputs. There are three exceptions. Although 么 is correctly given a low weight, its presence seems to be an error; it usually appears with another character to mean “what.” 我們 and 我们 typically translate to “we,” even though 我 alone translates to “me.” 們 is a plural marker and 们 is the same, but simplified; both versions evidently found their way into the dataset. The network has correctly learned to associate both Chinese words necessary to form the meaning of “we.” Also, 步散 means “walk,” but 散 generally does not appear alone to mean “walk.” Again, the network has learned to correctly associate all of the necessary characters with an input word.
The results from this dataset in Table TABREF5 warrant a discussion for readers who do not know Chinese. As in the other cases, the model demonstrates the expected knowledge and lack thereof when different types of artificial aphasia are induced. The outputs are also productions that Chinese aphasics are expected to make per BIBREF0's description. When the model is undamaged, its output is a correct translation for “I ate some fish.” When the model's LSTMs are damaged (simulating the conditions for Broca's aphasia), the production has incorrect syntax, and translates word for word to “eat I ...” These are both correct content words. When the model's Lexicon Unit is damaged (simulating the conditions for Wernicke's aphasia), the production has correct syntax. Impressively, the Chinese actually has the same syntax as the correct translation for “I ate some fish.” However, the content is nonsensical. The English translation is “I took the utterance.” Compared to the correct Mandarin translation, this incorrect one has the same subject and the same past-tense marker, 了, for the verb. However, it uses a different verb, object, and determiner.
Related Work
There is evidence that generic attention mechanisms for machine translation already utilize the thesis that words have meanings that are independent of syntax. They learn correspondences between output tokens and a hidden state produced immediately after an encoder reads a particular input word BIBREF14. But the same mechanism is not at play in our model. Generic attention mechanisms do not necessarily impose a constraint on the input's syntax representation. Additionally, typical objective functions do not explicitly link input words with invariance in the output. Finally, one does not need to choose either LLA units or attention. LLA units can be incorporated into recurrent neural network systems with attention or other machine transduction architectures such as transformers BIBREF15.
Recent work has incorporated some of the ideas in our paper into a neural machine translation model with the use of a specific attention mechanism BIBREF16. But the authors only demonstrate success on a single artificial dataset with a lexicon of about ten words, and they did not explore the effects of damaging parts of their model. Their optimization procedure also does not prohibit context-invariant lexical information from passing through the recurrent portion of their model. This incorrectly allows the possibility for a representation to be learned that gives every input word its own syntactic category. Lastly, their architecture provides a softer constraint than the one that we demonstrate, as information from several input words can aggregate and pass through the non-recurrent module that they use.
There are other attempts to incorporate theories about human language to regularize a transduction model, but many have not scaled to the level of generality that the LLA units and some attention architectures show. These include synchronous grammars BIBREF17, data augmentation BIBREF18, meta-learning BIBREF19, and hard-coded maps or copying capabilities from input to output BIBREF20, BIBREF21. All of them require hard-coded rules that are often broken by real-world data.
Conclusion
Neural and cognitive theories provide an imperative for computational models to understand human language by separating representations of word meanings from those of syntax. Using this constraint, we introduced new neural units that can provide this separation for the purpose of translating human languages. When added to an LSTM encoder and decoder, our units showed improvements in all of our experiment domains over the typical model. The domains were a small artificial diagnostic dataset, semantic parsing, syntactic parsing, and English to Mandarin Chinese translation. We also showed that the model learns a representation of human language that is similar to that of our brains. When damaged, the model displays the same knowledge distortions that aphasics do.
Acknowledgments
NOT INCLUDED IN DRAFT SUBMISSION
| No
f052444f3b3bf61a3f226645278b780ebd7774db | f052444f3b3bf61a3f226645278b780ebd7774db_0 | Q: Do they perform a quantitative analysis of their model displaying knowledge distortions?
Text: Introduction
Studies of Broca's and Wernicke's aphasia provide evidence that our brains understand an utterance by creating separate representations for word meaning and word arrangement BIBREF0. There is a related thesis about human language, present across many theories of semantics, which is that syntactic categories are partially agnostic to the identity of words BIBREF1. This regularity in how humans derive meaning from an utterance is applicable to the task of natural language translation. This is because, by definition, translation necessitates the creation of a meaning representation for an input. According to the cognitive and neural imperative, we introduce new units to regularize an artificial neural encoder and decoder BIBREF2. These are called the Lexicon and Lexicon-Adversary units (collectively, LLA). Tests are done on a diagnostic task, and naturalistic tasks including semantic parsing, syntactic parsing, and English to Mandarin Chinese translation. We evaluate a Long Short-Term Memory (LSTM) BIBREF3 encoder and decoder, with and without the LLA units, and show that the LLA version achieves superior translation performance. In addition, we examine our model's weights, and its performance when some of its neurons are damaged. We find that the model exhibits the knowledge and the lack thereof expected of a Broca's aphasic BIBREF0 when one module's weights are corrupted. It also exhibits that expected of a Wernicke's aphasic BIBREF0 when another module's weights are corrupted.
Neural Motivation
BIBREF0 BIBREF0 showed that Broca's aphasics were able to understand “the apple that the boy is eating is red” with significantly higher accuracy than “the cow that the monkey is scaring is yellow,” along with similar pairs. The critical difference between these sentences is that, due to semantic constraints from the words, the first can be understood if it is presented as a set of words. The second cannot. This experiment provides evidence that the rest of the language neurons in the brain (largely Wernicke's area) can yield an understanding of word meanings but not how words are arranged. This also suggests that Broca's area builds a representation of the syntax.
In the same study, Wernicke's aphasics performed poorly regardless of the sentence type. This provides evidence that Broca's area cannot yield an understanding of word meanings.
Taken together, the two experiments support the theory that Broca's area creates a representation of the syntax without encoding complete word meanings. These other lexical aspects are represented separately in Wernicke's area, which does not encode syntax.
Cognitive Motivation
A tenet of generative grammar theories is that different words can share the same syntactic category BIBREF1. It is possible, for example, to know that the syntax for an utterance is a noun phrase that is composed of a determiner and a noun, followed by a verb phrase that is composed of a verb. One can know this without knowing the words. This also means that there are aspects of a word's meaning that the syntax does not determine; by definition, these aspects are invariant to word arrangement.
Model
In a natural language translation setting, suppose that an input word corresponds to a set of output tokens independently of its context. Even though this information might be useful to determine the syntax of the input utterance in the first place, the syntax does not determine this knowledge at all (by supposition). So, we can impose the constraint that our model's representation of the input's syntax cannot contain this context-invariant information. This regularization is strictly preferable to allowing all aspects of word meaning to propagate into the input's syntax representation. Without such a constraint, all inputs could, in principle, be given their own syntactic categories. This scenario is refuted by cognitive and neural theories. We incorporate the regularization with neural units that can separate representations of word meaning and arrangement.
With the exception of the equations that we list below, the encoding and decoding follows standard paradigms BIBREF2. The input at a time step to the LSTM encoder is a vector embedding for the input token. The final hidden and cell states of the encoder are the starting hidden and cell states of the LSTM decoder. The decoder does not take tokens as inputs; it decodes by relying solely on its hidden and cell states. The $t$th output, $o_t$, from the decoder is Softmax$(W(h_t))$, where $W$ is a fully connected layer and $h_t$ is the decoder's $t$th hidden state. $o_t$ is the length of the output dictionary. $o_t$'s index with the highest value corresponds to the token choice. The encoder and decoder's weights are optimized with the negative log likelihood loss. The inputs to the loss function are the log of the model's output and the ground-truth at each time step. Below, we describe our modifications.
$l = \sigma (\vee (w_1, w_2, \ldots , w_m))$
$l_a = \sigma (W_{a_2}(\mathrm {ReLU}(W_{a_1}(\mathrm {GradReverse}(h_e \frown c_e)))))$
$o^{\prime }_t = l \odot o_t$
Where:
$m$ is the number of input tokens.
$w_i$ is a vector embedding for the $i$th input token, and is the length of the output dictionary. It is not the same embedding used by the encoder LSTM.
$\sigma $ is the Sigmoid function.
$\vee $ is the max pooling of a sequence of vectors of the same length. The weight at the output vector's $i$th index is the max of all input vectors' weights at their $i$th indices.
$h_e$ and $c_e$ are the final hidden and cell states of the encoder.
$W_{a_1}$ and $W_{a_2}$ are fully connected layers.
$\frown $ is concatenation.
$\odot $ is the elementwise product.
GradReverse multiplies the gradient by a negative number upon backpropagation.
$l$ is the output of the Lexicon Unit. Due to the max pooling, only one input token can be responsible for the value at a particular index of the output vector. The weights, $w_i$, are optimized solely by computing the binary cross entropy (BCE) loss between $l$ and the indicator vector where the $k$th element is 1 if the $k$th token in the output dictionary is in the output and 0 otherwise. This procedure forces a $w_i$ to represent the output tokens that are associated with its respective input token, without relying on aggregated contributions from the presence of several input tokens, and independently of the input word order.
$l_a$ is the output of the Lexicon-Adversary Unit. Its weights are optimized according to the BCE loss with $l$ as the target. This means that $l_a$ is the Lexicon-Adversary Unit's approximation of $l$. Because $h_e$ and $c_e$ are passed through a gradient reversal layer, the LSTM encoder is regularized to produce a representation that does not include information from $l$. Consequently, the LSTM decoder does not have this information either.
$o^{\prime }_t$ is the $t$th output of our model. It can be converted to a token by finding the index with the highest weight. It is the result of combining $l$ via an elementwise product with the information from the regularized decoder.
The recurrent encoder and decoder are the only modules that can represent the syntax, but they lack the expressivity to encode all potential aspects of word meaning. So, they are not always capable of producing a theoretically denied representation by giving all words their own syntactic category. The Lexicon Unit can represent these missing lexical aspects, but it lacks the expressivity to represent the syntax. See Figure FIGREF3 for the model.
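To make the preceding equations concrete, the following PyTorch sketch implements the LLA units around an LSTM encoder and decoder as we read them. The class and variable names, the zero vector fed to the decoder at each step, and other small details are our own assumptions rather than the authors' code; layer sizes follow the hyperparameters reported below.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales the gradient by a negative constant backward."""
    @staticmethod
    def forward(ctx, x, scale=-0.0001):
        ctx.scale = scale
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output * ctx.scale, None

class LLASeq2Seq(nn.Module):
    def __init__(self, in_vocab, out_vocab, emb=300, hidden=300, adv_hidden=1000):
        super().__init__()
        self.encoder_emb = nn.Embedding(in_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, out_vocab)                   # W in Softmax(W(h_t))
        # Lexicon Unit: one |output dictionary|-sized embedding w_i per input token.
        self.lexicon = nn.Embedding(in_vocab, out_vocab, sparse=True)
        # Lexicon-Adversary Unit: two fully connected layers over [h_e; c_e].
        self.adversary = nn.Sequential(nn.Linear(2 * hidden, adv_hidden),
                                       nn.ReLU(),
                                       nn.Linear(adv_hidden, out_vocab))

    def forward(self, src, max_len):
        emb = self.encoder_emb(src)                               # (batch, m, emb)
        _, (h_e, c_e) = self.encoder(emb)
        # l = sigma(max-pool over the per-token lexicon embeddings)
        l = torch.sigmoid(self.lexicon(src).max(dim=1).values)    # (batch, out_vocab)
        # l_a = sigma(MLP(GradReverse(h_e concatenated with c_e)))
        states = torch.cat([h_e[-1], c_e[-1]], dim=-1)
        l_a = torch.sigmoid(self.adversary(GradReverse.apply(states)))
        # The decoder decodes from its states alone; it is fed a zero vector each step
        # (an implementation assumption, since it takes no token inputs).
        h, c = h_e, c_e
        step_in = torch.zeros(src.size(0), 1, h.size(-1), device=src.device)
        outputs = []
        for _ in range(max_len):
            dec_out, (h, c) = self.decoder(step_in, (h, c))
            o_t = torch.softmax(self.out(dec_out.squeeze(1)), dim=-1)
            outputs.append(l * o_t)                               # o'_t = l elementwise o_t
        return torch.stack(outputs, dim=1), l, l_a
```

The losses (negative log likelihood over $\log o^{\prime }_t$, and BCE for $l$ and $l_a$) and the optimizer split follow the description in the Experiments section below.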
Experiments
We used BIBREF4's small diagnostic, the Geoquery semantic parsing dataset BIBREF5, the Wall Street Journal syntactic parsing dataset of sentences up to length 10 BIBREF6, and the Tatoeba BIBREF7 English to Chinese translation dataset processed by BIBREF8.
To avoid the biases that can be introduced with hyperparameter tuning, we used the same hyperparameters with every model on every domain. These were chosen arbitrarily and kept after they enabled all models to reach a similar train accuracy (typically, close to 100 percent) and after they enabled all models to achieve a peak validation performance and then gradually yield worse validation scores. The hyperparameters are as follows. LSTM hidden size = 300, Lexicon Unit batch size = 1, batch size for other modules = 30, epoch to stop training the Lexicon Unit and start training other modules = 30, epoch to stop training = 1000, and Lexicon-Adversary Unit hidden size = 1000. The optimizer used for the Lexicon Unit was a sparse implementation of Adam BIBREF9 with a learning rate of 0.1 and otherwise the default PyTorch settings BIBREF10. In the other cases it was Adam BIBREF9 with the default PyTorch settings BIBREF10. The gradient through the encoder from the adversary's gradient reversal layer is multiplied by -0.0001. Additionally, the validation score is calculated after each train epoch and the model with the best validation score is tested. To choose which Lexicon Unit to use, we measure its loss (BCE) on the validation set. Unless otherwise stated, we use the mean number of exact matches as the validation metric for the full model.
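Read literally, this schedule trains the Lexicon Unit alone for the first 30 epochs and the remaining modules afterwards. A skeleton of that loop, assuming the hypothetical LLASeq2Seq class sketched earlier (the per-batch details are omitted):

```python
import torch

model = LLASeq2Seq(in_vocab=283, out_vocab=177)          # e.g. the GEO dictionary sizes
lexicon_opt = torch.optim.SparseAdam(model.lexicon.parameters(), lr=0.1)
other_params = [p for n, p in model.named_parameters() if not n.startswith("lexicon")]
other_opt = torch.optim.Adam(other_params)               # default PyTorch settings

for epoch in range(1000):
    lexicon_phase = epoch < 30
    batch_size = 1 if lexicon_phase else 30
    # Lexicon phase: BCE between l and the indicator vector of output tokens, stepping
    # lexicon_opt. Afterwards: NLL over log o'_t plus the adversary's BCE, stepping
    # other_opt. Validation runs after every epoch and the best model is tested.
```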
To judge overall translation performance, we compared the LLA-LSTM encoder and decoder with the standard LSTM encoder and decoder. We also compared our model with one that does not have the adversary but is otherwise identical. The LLA-LSTM model shows improvements over the standard model on many or all of the metrics for every naturalistic domain. Many of the improvements over the other models are several percentage points. In the few scenarios where the LLA-LSTM model does not improve upon the standard model, the discrepancy between the models is small. The discrepancy is also small when the LLA-LSTM model with no adversary performs better than the LLA-LSTM model. Table TABREF4 displays the test results across the domains.
Additionally, we provide evidence that the model learns knowledge of a separation between syntax and the lexicon that is similar to that of a human. Figure FIGREF6 displays the learned $\sigma (w)$ embeddings for some input words, across the domains. To avoid cherry-picking the results, we chose the input words arbitrarily, subject to the following constraint. We considered each word to typically have a different syntactic category than the other choices from that domain. This constraint was used to present a diverse selection of words. Table TABREF5 displays the output behavior of models that we damaged to resemble the damage that causes aphasia. To avoid cherry-picking the results, we arbitrarily chose an input for each domain, subject to the following constraint. The input is not in the train set and the undamaged LLA-LSTM model produces a translation that we judge to be correct. For all inputs that we chose, damage to the analog of Broca's area (the LSTMs) results in an output that describes content only if it is described by the input. However, the output does not show understanding of the input's syntax. In the naturalistic domains, damage to the analog of Wernicke's area (the Lexicon Unit) results in an output with incorrect content that would be acceptable if the input had different words but the same syntax. These knowledge distortions are precisely those that are expected in the respective human aphasics BIBREF0. We also provide corpus-level results from the damaged models by presenting mean precision on the test sets. Because the output languages in all of our domains use tokens to represent meanings in many cases, it is expected that the analog to Wernicke's area is responsible for maintaining a high precision.
Experiments ::: Sequences of Color
The first experiment by BIBREF4 BIBREF4 included a dataset of 14 training pairs and 10 test pairs. In the dataset, an input is a sequence of words from an artificial language created by the authors. An output is a sequence of colored dots. Because the dataset is so small, we use the train set as the validation set. The input and output dictionary are 7 and 4 words, respectively (not including the stop, “$<s>$,” token). In their paper, the authors argue that it is clear that the words have meanings. Four of the words correspond to unique output tokens, and three of them correspond to functions of the output tokens (for example, repeating the same dot three times). The dataset showcases the contrast between human and standard neural network responses. Their paper shows that humans had high accuracy on the test set, whereas standard neural models scored essentially zero exact matches.
The LLA-LSTM model that we tested appears to achieve only insignificantly higher results in Table TABREF4. However, it has learned, from just 14 training examples, how to map some of the words to BIBREF4's interpretation of their context-invariant meanings. This is shown in Figure FIGREF6 (a). In the figure, “dax,” “lug,” “wif,” and “zup” are interpreted correctly to mean “r,” “g,” “b,” and “y,” respectively. Here, the letters correspond to the types of unique dots, which are red, green, blue, and yellow, respectively. The other words, “fep,” “kiki,” and “blicket,” are taken by BIBREF4 to have functional meanings, and so are correctly not associated strongly with any of the output tokens. The exceptions are two erroneous associations between “kiki” and blue and “blicket” and green. Also, every sentence has a stop token, so the LLA units learned that the context-invariant meanings of each word include it. The LLA units can handle cases where a word corresponds to multiple output tokens, and the output tokens need not be monolithic in the output sequence. As shown in tests from all of the other domains, these output token correspondences may or may not be relevant depending on the specific context of a word, but the recurrent component of the architecture is capable of determining which to use.
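The $\sigma (w)$ embeddings discussed here can be read directly off the Lexicon Unit. A small sketch, assuming the hypothetical LLASeq2Seq class above and placeholder vocabulary mappings:

```python
import torch

def inspect_lexicon(model, word_to_id, id_to_out_token, words, k=5):
    """Print the k output tokens most strongly associated with each input word."""
    with torch.no_grad():
        w = torch.sigmoid(model.lexicon.weight)          # sigma(w): one row per input word
        for word in words:
            vals, idx = w[word_to_id[word]].topk(k)
            print(word, [(id_to_out_token[i.item()], round(v.item(), 2))
                         for v, i in zip(vals, idx)])

# e.g. inspect_lexicon(model, word_to_id, id_to_out_token, ["dax", "lug", "wif", "zup"])
```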
Experiments ::: Semantic Parsing
Geoquery (GEO) is a dataset where an input is an English geography query and the corresponding output is a parse that a computer could use to look up the answer in a database BIBREF5. We used the standard test set of 250 pairs from BIBREF12 BIBREF12. The remaining data were randomly split into a validation set of 100 pairs and a train set of 539 pairs. We tokenized the input data by splitting on the words and removing punctuation. We tokenized the output data by removing commas and splitting on words, parentheses, and variables. There are 283 tokens in the input dictionary and 177 tokens in the output dictionary, respectively.
Figure FIGREF6 (b) shows some weights for four input words, which are all relevant to the inputs. Many of the weights correspond directly to the correct predicates. Other tokens have high weights because they are typically important to any parse. These are parentheses, variables (A, B, C, and D), the “answer” token, and the stop token.
Experiments ::: Syntactic Parsing
The Wall Street Journal portion of the Penn Treebank is a dataset where English sentences from The Wall Street Journal are paired with human-generated phrase parses BIBREF6. We use the test, validation, and train set from BIBREF13's BIBREF13 paper. For efficiency, we only use sentences that have 10 or fewer words, lowercase all words, and modify BIBREF13's output data so that left parentheses are paired with their corresponding nonterminal and right parentheses are paired with their corresponding terminal. The input and output data were both tokenized by splitting where there is a space. The test, validation, and train set are 398, 258, and 6007 pairs, respectively. There are 9243 tokens in the input dictionary and 9486 tokens in the output dictionary.
Figure FIGREF6 (c) shows some weights for four input words. They all highlight the relevant terminal, and syntactic categories that are usually associated with that word. The associated categories typically are either those of that word, the phrases headed by the category of that word, or those that select or are selected by that word. The relevant nonterminal terminology is as follows BIBREF6: “(in” is a preposition or subordinating conjunction, “(np” is a noun phrase, “(pp” is a prepositional phrase, “(np-subj” is a noun phrase with a surface subject marking, “(vp” is a verb phrase, “(vbn” is a verb in the past participle, “(adjp” is an adjective phrase, “(vbp” is a non-3rd person singular present verb, “(prp” is a personal pronoun, “(rb” is an adverb, “(sq” is the main clause of a wh-question, or it indicates an inverted yes or no question, and “(s” is the root.
Experiments ::: English to Chinese
The Tatoeba BIBREF7 English to Chinese translation dataset, processed by BIBREF8 BIBREF8, is a product of a crowdsourced effort to translate sentences of a user's choice into another language. The data were split randomly into a test, validation, and train set of 1500, 1500, and 18205 pairs, respectively. The English was tokenized by splitting on punctuation and words. The Chinese was tokenized by splitting on punctuation and characters. There are 6919 and 3434 tokens in the input and output dictionary, respectively. There are often many acceptable outputs when translating one natural language to another. As a result, we use the corpus-level BLEU score BIBREF11 to test models and score them on the validation set.
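A rough sketch of the tokenization described above; BIBREF8's actual preprocessing may differ in details such as casing and punctuation handling.

```python
import re

def tokenize_english(sentence: str) -> list:
    # Split on words and punctuation.
    return re.findall(r"\w+|[^\w\s]", sentence)

def tokenize_chinese(sentence: str) -> list:
    # Split on characters (punctuation falls out as single characters too).
    return [ch for ch in sentence if not ch.isspace()]

# tokenize_english("I ate some fish.") -> ['I', 'ate', 'some', 'fish', '.']
# tokenize_chinese("我吃了一些鱼。")    -> ['我', '吃', '了', '一', '些', '鱼', '。']
```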
Figure FIGREF6 (d) shows some weights for four input words. The listed Chinese words are an acceptable translation (depending on the context) and correspond roughly one-to-one with the English inputs. There are three exceptions. Although 么 is correctly given a low weight, its presence seems to be an error; it usually appears with another character to mean “what.” 我們 and 我们 typically translate to “we,” even though 我 alone translates to “me.” 們 is a plural marker and 们 is the same, but simplified; both versions evidently found their way into the dataset. The network has correctly learned to associate both Chinese words necessary to form the meaning of “we.” Also, 步散 means “walk,” but 散 generally does not appear alone to mean “walk.” Again, the network has learned to correctly associate all of the necessary characters with an input word.
The results from this dataset in Table TABREF5 warrant a discussion for readers who do not know Chinese. As in the other cases, the model demonstrates the expected knowledge and lack thereof when different types of artificial aphasia are induced. The outputs are also productions that Chinese aphasics are expected to make per BIBREF0's description. When the model is undamaged, its output is a correct translation for “I ate some fish.” When the model's LSTMs are damaged (simulating the conditions for Broca's aphasia), the production has incorrect syntax, and translates word for word to “eat I ...” These are both correct content words. When the model's Lexicon Unit is damaged (simulating the conditions for Wernicke's aphasia), the production has correct syntax. Impressively, the Chinese actually has the same syntax as the correct translation for “I ate some fish.” However, the content is nonsensical. The English translation is “I took the utterance.” Compared to the correct Mandarin translation, this incorrect one has the same subject and the same past-tense marker, 了, for the verb. However, it uses a different verb, object, and determiner.
Related Work
There is evidence that generic attention mechanisms for machine translation already utilize the thesis that words have meanings that are independent of syntax. They learn correspondences between output tokens and a hidden state produced immediately after an encoder reads a particular input word BIBREF14. But the same mechanism is not at play in our model. Generic attention mechanisms do not necessarily impose a constraint on the input's syntax representation. Additionally, typical objective functions do not explicitly link input words with invariance in the output. Finally, one does not need to choose either LLA units or attention. LLA units can be incorporated into recurrent neural network systems with attention or other machine transduction architectures such as transformers BIBREF15.
Recent work has incorporated some of the ideas in our paper into a neural machine translation model with the use of a specific attention mechanism BIBREF16. But the authors only demonstrate success on a single artificial dataset with a lexicon of about ten words, and they did not explore the effects of damaging parts of their model. Their optimization procedure also does not prohibit context-invariant lexical information from passing through the recurrent portion of their model. This incorrectly allows the possibility for a representation to be learned that gives every input word its own syntactic category. Lastly, their architecture provides a softer constraint than the one that we demonstrate, as information from several input words can aggregate and pass through the non-recurrent module that they use.
There are other attempts to incorporate theories about human language to regularize a transduction model, but many have not scaled to the level of generality that the LLA units and some attention architectures show. These include synchronous grammars BIBREF17, data augmentation BIBREF18, meta-learning BIBREF19, and hard-coded maps or copying capabilities from input to output BIBREF20, BIBREF21. All of them require hard-coded rules that are often broken by real-world data.
Conclusion
Neural and cognitive theories provide an imperative for computational models to understand human language by separating representations of word meanings from those of syntax. Using this constraint, we introduced new neural units that can provide this separation for the purpose of translating human languages. When added to an LSTM encoder and decoder, our units showed improvements in all of our experiment domains over the typical model. The domains were a small artificial diagnostic dataset, semantic parsing, syntactic parsing, and English to Mandarin Chinese translation. We also showed that the model learns a representation of human language that is similar to that of our brains. When damaged, the model displays the same knowledge distortions that aphasics do.
Acknowledgments
NOT INCLUDED IN DRAFT SUBMISSION
| Yes
79ed71a3505cf6f5e8bf121fd7ec1518cab55cae | 79ed71a3505cf6f5e8bf121fd7ec1518cab55cae_0 | Q: How do they damage different neural modules?
Text: Introduction
Studies of Broca's and Wernicke's aphasia provide evidence that our brains understand an utterance by creating separate representations for word meaning and word arrangement BIBREF0. There is a related thesis about human language, present across many theories of semantics, which is that syntactic categories are partially agnostic to the identity of words BIBREF1. This regularity in how humans derive meaning from an utterance is applicable to the task of natural language translation. This is because, by definition, translation necessitates the creation of a meaning representation for an input. According to the cognitive and neural imperative, we introduce new units to regularize an artificial neural encoder and decoder BIBREF2. These are called the Lexicon and Lexicon-Adversary units (collectively, LLA). Tests are done on a diagnostic task, and naturalistic tasks including semantic parsing, syntactic parsing, and English to Mandarin Chinese translation. We evaluate a Long Short-Term Memory (LSTM) BIBREF3 encoder and decoder, with and without the LLA units, and show that the LLA version achieves superior translation performance. In addition, we examine our model's weights, and its performance when some of its neurons are damaged. We find that the model exhibits the knowledge and the lack thereof expected of a Broca's aphasic BIBREF0 when one module's weights are corrupted. It also exhibits that expected of a Wernicke's aphasic BIBREF0 when another module's weights are corrupted.
Neural Motivation
BIBREF0 BIBREF0 showed that Broca's aphasics were able to understand “the apple that the boy is eating is red” with significantly higher accuracy than “the cow that the monkey is scaring is yellow,” along with similar pairs. The critical difference between these sentences is that, due to semantic constraints from the words, the first can be understood if it is presented as a set of words. The second cannot. This experiment provides evidence that the rest of the language neurons in the brain (largely Wernicke's area) can yield an understanding of word meanings but not how words are arranged. This also suggests that Broca's area builds a representation of the syntax.
In the same study, Wernicke's aphasics performed poorly regardless of the sentence type. This provides evidence that Broca's area cannot yield an understanding of word meanings.
Taken together, the two experiments support the theory that Broca's area creates a representation of the syntax without encoding complete word meanings. These other lexical aspects are represented separately in Wernicke's area, which does not encode syntax.
Cognitive Motivation
A tenet of generative grammar theories is that different words can share the same syntactic category BIBREF1. It is possible, for example, to know that the syntax for an utterance is a noun phrase that is composed of a determiner and a noun, followed by a verb phrase that is composed of a verb. One can know this without knowing the words. This also means that there are aspects of a word's meaning that the syntax does not determine; by definition, these aspects are invariant to word arrangement.
Model
In a natural language translation setting, suppose that an input word corresponds to a set of output tokens independently of its context. Even though this information might be useful to determine the syntax of the input utterance in the first place, the syntax does not determine this knowledge at all (by supposition). So, we can impose the constraint that our model's representation of the input's syntax cannot contain this context-invariant information. This regularization is strictly preferable to allowing all aspects of word meaning to propagate into the input's syntax representation. Without such a constraint, all inputs could, in principle, be given their own syntactic categories. This scenario is refuted by cognitive and neural theories. We incorporate the regularization with neural units that can separate representations of word meaning and arrangement.
With the exception of the equations that we list below, the encoding and decoding follows standard paradigms BIBREF2. The input at a time step to the LSTM encoder is a vector embedding for the input token. The final hidden and cell states of the encoder are the starting hidden and cell states of the LSTM decoder. The decoder does not take tokens as inputs; it decodes by relying solely on its hidden and cell states. The $t$th output, $o_t$, from the decoder is Softmax$(W(h_t))$, where $W$ is a fully connected layer and $h_t$ is the decoder's $t$th hidden state. $o_t$ is the length of the output dictionary. $o_t$'s index with the highest value corresponds to the token choice. The encoder and decoder's weights are optimized with the negative log likelihood loss. The inputs to the loss function are the log of the model's output and the ground-truth at each time step. Below, we describe our modifications.
$l = \sigma (\vee (w_1, w_2, \ldots , w_m))$
$l_a = \sigma (W_{a_2}(\mathrm {ReLU}(W_{a_1}(\mathrm {GradReverse}(h_e \frown c_e)))))$
$o^{\prime }_t = l \odot o_t$
Where:
$m$ is the number of input tokens.
$w_i$ is a vector embedding for the $i$th input token, and is the length of the output dictionary. It is not the same embedding used by the encoder LSTM.
$\sigma $ is the Sigmoid function.
$\vee $ is the max pooling of a sequence of vectors of the same length. The weight at the output vector's $i$th index is the max of all input vectors' weights at their $i$th indices.
$h_e$ and $c_e$ are the final hidden and cell states of the encoder.
$W_{a_1}$ and $W_{a_2}$ are fully connected layers.
$\frown $ is concatenation.
$\odot $ is the elementwise product.
GradReverse multiplies the gradient by a negative number upon backpropagation.
$l$ is the output of the Lexicon Unit. Due to the max pooling, only one input token can be responsible for the value at a particular index of the output vector. The weights, $w_i$, are optimized solely by computing the binary cross entropy (BCE) loss between $l$ and the indicator vector where the $k$th element is 1 if the $k$th token in the output dictionary is in the output and 0 otherwise. This procedure forces a $w_i$ to represent the output tokens that are associated with its respective input token, without relying on aggregated contributions from the presence of several input tokens, and independently of the input word order.
$l_a$ is the output of the Lexicon-Adversary Unit. Its weights are optimized according to the BCE loss with $l$ as the target. This means that $l_a$ is the Lexicon-Adversary Unit's approximation of $l$. Because $h_e$ and $c_e$ are passed through a gradient reversal layer, the LSTM encoder is regularized to produce a representation that does not include information from $l$. Consequently, the LSTM decoder does not have this information either.
$o^{\prime }_t$ is the $t$th output of our model. It can be converted to a token by finding the index with the highest weight. It is the result of combining $l$ via an elementwise product with the information from the regularized decoder.
The recurrent encoder and decoder are the only modules that can represent the syntax, but they lack the expressivity to encode all potential aspects of word meaning. So, they are not always capable of producing a theoretically denied representation by giving all words their own syntactic category. The Lexicon Unit can represent these missing lexical aspects, but it lacks the expressivity to represent the syntax. See Figure FIGREF3 for the model.
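The Lexicon Unit's training signal described above (BCE against a multi-hot indicator over the output dictionary) can be sketched as follows; the helper names are ours.

```python
import torch
import torch.nn.functional as F

def lexicon_target(output_token_ids, out_vocab_size):
    """1 at index k iff output token k appears anywhere in the reference output."""
    target = torch.zeros(out_vocab_size)
    target[list(set(output_token_ids))] = 1.0
    return target

# For one training pair, with l the Lexicon Unit's output for the input sentence:
# loss = F.binary_cross_entropy(l, lexicon_target(reference_ids, out_vocab_size))
```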
Experiments
We used BIBREF4's small diagnostic, the Geoquery semantic parsing dataset BIBREF5, the Wall Street Journal syntactic parsing dataset of sentences up to length 10 BIBREF6, and the Tatoeba BIBREF7 English to Chinese translation dataset processed by BIBREF8.
To avoid the biases that can be introduced with hyperparameter tuning, we used the same hyperparameters with every model on every domain. These were chosen arbitrarily and kept after they enabled all models to reach a similar train accuracy (typically, close to 100 percent) and after they enabled all models to achieve a peak validation performance and then gradually yield worse validation scores. The hyperparameters are as follows. LSTM hidden size = 300, Lexicon Unit batch size = 1, batch size for other modules = 30, epoch to stop training the Lexicon Unit and start training other modules = 30, epoch to stop training = 1000, and Lexicon-Adversary Unit hidden size = 1000. The optimizer used for the Lexicon Unit was a sparse implementation of Adam BIBREF9 with a learning rate of 0.1 and otherwise the default PyTorch settings BIBREF10. In the other cases it was Adam BIBREF9 with the default PyTorch settings BIBREF10. The gradient through the encoder from the adversary's gradient reversal layer is multiplied by -0.0001. Additionally, the validation score is calculated after each train epoch and the model with the best validation score is tested. To choose which Lexicon Unit to use, we measure its loss (BCE) on the validation set. Unless otherwise stated, we use the mean number of exact matches as the validation metric for the full model.
To judge overall translation performance, we compared the LLA-LSTM encoder and decoder with the standard LSTM encoder and decoder. We also compared our model with one that does not have the adversary but is otherwise identical. The LLA-LSTM model shows improvements over the standard model on many or all of the metrics for every naturalistic domain. Many of the improvements over the other models are several percentage points. In the few scenarios where the LLA-LSTM model does not improve upon the standard model, the discrepancy between the models is small. The discrepancy is also small when the LLA-LSTM model with no adversary performs better than the LLA-LSTM model. Table TABREF4 displays the test results across the domains.
Additionally, we provide evidence that the model learns knowledge of a separation between syntax and the lexicon that is similar to that of a human. Figure FIGREF6 displays the learned $\sigma (w)$ embeddings for some input words, across the domains. To avoid cherry-picking the results, we chose the input words arbitrarily, subject to the following constraint. We considered each word to typically have a different syntactic category than the other choices from that domain. This constraint was used to present a diverse selection of words. Table TABREF5 displays the output behavior of models that we damaged to resemble the damage that causes aphasia. To avoid cherry-picking the results, we arbitrarily chose an input for each domain, subject to the following constraint. The input is not in the train set and the undamaged LLA-LSTM model produces a translation that we judge to be correct. For all inputs that we chose, damage to the analog of Broca's area (the LSTMs) results in an output that describes content only if it is described by the input. However, the output does not show understanding of the input's syntax. In the naturalistic domains, damage to the analog of Wernicke's area (the Lexicon Unit) results in an output with incorrect content that would be acceptable if the input had different words but the same syntax. These knowledge distortions are precisely those that are expected in the respective human aphasics BIBREF0. We also provide corpus-level results from the damaged models by presenting mean precision on the test sets. Because the output languages in all of our domains use tokens to represent meanings in many cases, it is expected that the analog to Wernicke's area is responsible for maintaining a high precision.
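The mean precision reported for the damaged models can be computed as below under one natural reading (the fraction of produced tokens that appear in the reference, averaged over the test set); the authors' exact definition may differ.

```python
def token_precision(predicted, reference):
    """Fraction of predicted tokens that also occur in the reference output."""
    if not predicted:
        return 0.0
    ref = set(reference)
    return sum(tok in ref for tok in predicted) / len(predicted)

def mean_precision(predicted_seqs, reference_seqs):
    scores = [token_precision(p, r) for p, r in zip(predicted_seqs, reference_seqs)]
    return sum(scores) / len(scores)
```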
Experiments ::: Sequences of Color
The first experiment by BIBREF4 BIBREF4 included a dataset of 14 training pairs and 10 test pairs. In the dataset, an input is a sequence of words from an artificial language created by the authors. An output is a sequence of colored dots. Because the dataset is so small, we use the train set as the validation set. The input and output dictionary are 7 and 4 words, respectively (not including the stop, “$<s>$,” token). In their paper, the authors argue that it is clear that the words have meanings. Four of the words correspond to unique output tokens, and three of them correspond to functions of the output tokens (for example, repeating the same dot three times). The dataset showcases the contrast between human and standard neural network responses. Their paper shows that humans had high accuracy on the test set, whereas standard neural models scored essentially zero exact matches.
The LLA-LSTM model that we tested appears to achieve only insignificantly higher results in Table TABREF4. However, it has learned, from just 14 training examples, how to map some of the words to BIBREF4's interpretation of their context-invariant meanings. This is shown in Figure FIGREF6 (a). In the figure, “dax,” “lug,” “wif,” and “zup” are interpreted correctly to mean “r,” “g,” “b,” and “y,” respectively. Here, the letters correspond to the types of unique dots, which are red, green, blue, and yellow, respectively. The other words, “fep,” “kiki,” and “blicket,” are taken by BIBREF4 to have functional meanings, and so are correctly not associated strongly with any of the output tokens. The exceptions are two erroneous associations between “kiki” and blue and “blicket” and green. Also, every sentence has a stop token, so the LLA units learned that the context-invariant meanings of each word include it. The LLA units can handle cases where a word corresponds to multiple output tokens, and the output tokens need not be monolithic in the output sequence. As shown in tests from all of the other domains, these output token correspondences may or may not be relevant depending on the specific context of a word, but the recurrent component of the architecture is capable of determining which to use.
Experiments ::: Semantic Parsing
Geoquery (GEO) is a dataset where an input is an English geography query and the corresponding output is a parse that a computer could use to look up the answer in a database BIBREF5. We used the standard test set of 250 pairs from BIBREF12 BIBREF12. The remaining data were randomly split into a validation set of 100 pairs and a train set of 539 pairs. We tokenized the input data by splitting on the words and removing punctuation. We tokenized the output data by removing commas and splitting on words, parentheses, and variables. There are 283 tokens in the input dictionary and 177 tokens in the output dictionary, respectively.
Figure FIGREF6 (b) shows some weights for four input words, which are all relevant to the inputs. Many of the weights correspond directly to the correct predicates. Other tokens have high weights because they are typically important to any parse. These are parentheses, variables (A, B, C, and D), the “answer” token, and the stop token.
Experiments ::: Syntactic Parsing
The Wall Street Journal portion of the Penn Treebank is a dataset where English sentences from The Wall Street Journal are paired with human-generated phrase parses BIBREF6. We use the test, validation, and train set from BIBREF13's BIBREF13 paper. For efficiency, we only use sentences that have 10 or fewer words, lowercase all words, and modify BIBREF13's output data so that left parentheses are paired with their corresponding nonterminal and right parentheses are paired with their corresponding terminal. The input and output data were both tokenized by splitting where there is a space. The test, validation, and train set are 398, 258, and 6007 pairs, respectively. There are 9243 tokens in the input dictionary and 9486 tokens in the output dictionary.
Figure FIGREF6 (c) shows some weights for four input words. They all highlight the relevant terminal, and syntactic categories that are usually associated with that word. The associated categories typically are either those of that word, the phrases headed by the category of that word, or those that select or are selected by that word. The relevant nonterminal terminology is as follows BIBREF6: “(in” is a preposition or subordinating conjunction, “(np” is a noun phrase, “(pp” is a prepositional phrase, “(np-subj” is a noun phrase with a surface subject marking, “(vp” is a verb phrase, “(vbn” is a verb in the past participle, “(adjp” is an adjective phrase, “(vbp” is a non-3rd person singular present verb, “(prp” is a personal pronoun, “(rb” is an adverb, “(sq” is the main clause of a wh-question, or it indicates an inverted yes or no question, and “(s” is the root.
Experiments ::: English to Chinese
The Tatoeba BIBREF7 English to Chinese translation dataset, processed by BIBREF8 BIBREF8, is a product of a crowdsourced effort to translate sentences of a user's choice into another language. The data were split randomly into a test, validation, and train set of 1500, 1500, and 18205 pairs, respectively. The English was tokenized by splitting on punctuation and words. The Chinese was tokenized by splitting on punctuation and characters. There are 6919 and 3434 tokens in the input and output dictionary, respectively. There are often many acceptable outputs when translating one natural language to another. As a result, we use the corpus-level BLEU score BIBREF11 to test models and score them on the validation set.
Figure FIGREF6 (d) shows some weights for four input words. The listed Chinese words are an acceptable translation (depending on the context) and correspond roughly one-to-one with the English inputs. There are three exceptions. Although 么 is correctly given a low weight, its presence seems to be an error; it usually appears with another character to mean “what.” 我們 and 我们 typically translate to “we,” even though 我 alone translates to “me.” 們 is a plural marker and 们 is the same, but simplified; both versions evidently found their way into the dataset. The network has correctly learned to associate both Chinese words necessary to form the meaning of “we.” Also, 步散 means “walk,” but 散 generally does not appear alone to mean “walk.” Again, the network has learned to correctly associate all of the necessary characters with an input word.
The results from this dataset in Table TABREF5 warrant a discussion for readers who do not know Chinese. As in the other cases, the model demonstrates the expected knowledge and lack thereof when different types of artificial aphasia are induced. The outputs are also productions that Chinese aphasics are expected to make per BIBREF0's description. When the model is undamaged, its output is a correct translation for “I ate some fish.” When the model's LSTMs are damaged (simulating the conditions for Broca's aphasia), the production has incorrect syntax, and translates word for word to “eat I ...” These are both correct content words. When the model's Lexicon Unit is damaged (simulating the conditions for Wernicke's aphasia), the production has correct syntax. Impressively, the Chinese actually has the same syntax as the correct translation for “I ate some fish.” However, the content is nonsensical. The English translation is “I took the utterance.” Compared to the correct Mandarin translation, this incorrect one has the same subject and the same past-tense marker, 了, for the verb. However, it uses a different verb, object, and determiner.
Related Work
There is evidence that generic attention mechanisms for machine translation already utilize the thesis that words have meanings that are independent of syntax. They learn correspondences between output tokens and a hidden state produced immediately after an encoder reads a particular input word BIBREF14. But the same mechanism is not at play in our model. Generic attention mechanisms do not necessarily impose a constraint on the input's syntax representation. Additionally, typical objective functions do not explicitly link input words with invariance in the output. Finally, one does not need to choose either LLA units or attention. LLA units can be incorporated into recurrent neural network systems with attention or other machine transduction architectures such as transformers BIBREF15.
Recent work has incorporated some of the ideas in our paper into a neural machine translation model with the use of a specific attention mechanism BIBREF16. But the authors only demonstrate success on a single artificial dataset with a lexicon of about ten words, and they did not explore the effects of damaging parts of their model. Their optimization procedure also does not prohibit context-invariant lexical information from passing through the recurrent portion of their model. This incorrectly allows the possibility for a representation to be learned that gives every input word its own syntactic category. Lastly, their architecture provides a softer constraint than the one that we demonstrate, as information from several input words can aggregate and pass through the non-recurrent module that they use.
There are other attempts to incorporate theories about human language to regularize a transduction model, but many have not scaled to the level of generality that the LLA units and some attention architectures show. These include synchronous grammars BIBREF17, data augmentation BIBREF18, meta-learning BIBREF19, and hard-coded maps or copying capabilities from input to output BIBREF20, BIBREF21. All of them require hard-coded rules that are often broken by real-world data.
Conclusion
Neural and cognitive theories provide an imperative for computational models to understand human language by separating representations of word meanings from those of syntax. Using this constraint, we introduced new neural units that can provide this separation for the purpose of translating human languages. When added to an LSTM encoder and decoder, our units showed improvements in all of our experiment domains over the typical model. The domains were a small artificial diagnostic dataset, semantic parsing, syntactic parsing, and English to Mandarin Chinese translation. We also showed that the model learns a representation of human language that is similar to that of our brains. When damaged, the model displays the same knowledge distortions that aphasics do.
Acknowledgments
NOT INCLUDED IN DRAFT SUBMISSION
| Damage to neural modules is done by randomly initializing their weights, causing the loss of all learned information.
74eb363ce30c44d318078cc1a46f8decf7db3ade | 74eb363ce30c44d318078cc1a46f8decf7db3ade_0 | Q: Which weights from their model do they analyze?
Text: Introduction
Studies of Broca's and Wernicke's aphasia provide evidence that our brains understand an utterance by creating separate representations for word meaning and word arrangement BIBREF0. There is a related thesis about human language, present across many theories of semantics, which is that syntactic categories are partially agnostic to the identity of words BIBREF1. This regularity in how humans derive meaning from an utterance is applicable to the task of natural language translation. This is because, by definition, translation necessitates the creation of a meaning representation for an input. According to the cognitive and neural imperative, we introduce new units to regularize an artificial neural encoder and decoder BIBREF2. These are called the Lexicon and Lexicon-Adversary units (collectively, LLA). Tests are done on a diagnostic task, and naturalistic tasks including semantic parsing, syntactic parsing, and English to Mandarin Chinese translation. We evaluate a Long Short-Term Memory (LSTM) BIBREF3 encoder and decoder, with and without the LLA units, and show that the LLA version achieves superior translation performance. In addition, we examine our model's weights, and its performance when some of its neurons are damaged. We find that the model exhibits the knowledge and the lack thereof expected of a Broca's aphasic BIBREF0 when one module's weights are corrupted. It also exhibits that expected of a Wernicke's aphasic BIBREF0 when another module's weights are corrupted.
Neural Motivation
BIBREF0 BIBREF0 showed that Broca's aphasics were able to understand “the apple that the boy is eating is red” with significantly higher accuracy than “the cow that the monkey is scaring is yellow,” along with similar pairs. The critical difference between these sentences is that, due to semantic constraints from the words, the first can be understood if it is presented as a set of words. The second cannot. This experiment provides evidence that the rest of the language neurons in the brain (largely Wernicke's area) can yield an understanding of word meanings but not how words are arranged. This also suggests that Broca's area builds a representation of the syntax.
In the same study, Wernicke's aphasics performed poorly regardless of the sentence type. This provides evidence that Broca's area cannot yield an understanding of word meanings.
Taken together, the two experiments support the theory that Broca's area creates a representation of the syntax without encoding complete word meanings. These other lexical aspects are represented separately in Wernicke's area, which does not encode syntax.
Cognitive Motivation
A tenet of generative grammar theories is that different words can share the same syntactic category BIBREF1. It is possible, for example, to know that the syntax for an utterance is a noun phrase that is composed of a determiner and a noun, followed by a verb phrase that is composed of a verb. One can know this without knowing the words. This also means that there are aspects of a word's meaning that the syntax does not determine; by definition, these aspects are invariant to word arrangement.
Model
In a natural language translation setting, suppose that an input word corresponds to a set of output tokens independently of its context. Even though this information might be useful to determine the syntax of the input utterance in the first place, the syntax does not determine this knowledge at all (by supposition). So, we can impose the constraint that our model's representation of the input's syntax cannot contain this context-invariant information. This regularization is strictly preferable to allowing all aspects of word meaning to propagate into the input's syntax representation. Without such a constraint, all inputs could, in principle, be given their own syntactic categories. This scenario is refuted by cognitive and neural theories. We incorporate the regularization with neural units that can separate representations of word meaning and arrangement.
With the exception of the equations that we list below, the encoding and decoding follow standard paradigms BIBREF2. The input at a time step to the LSTM encoder is a vector embedding for the input token. The final hidden and cell states of the encoder are the starting hidden and cell states of the LSTM decoder. The decoder does not take tokens as inputs; it decodes by relying solely on its hidden and cell states. The $t$th output, $o_t$, from the decoder is Softmax$(W(h_t))$, where $W$ is a fully connected layer and $h_t$ is the decoder's $t$th hidden state. $o_t$ has the length of the output dictionary. $o_t$'s index with the highest value corresponds to the token choice. The encoder and decoder's weights are optimized with the negative log likelihood loss. The inputs to the loss function are the log of the model's output and the ground-truth at each time step. Below, we describe our modifications.

$l = \sigma (\vee (w_1, w_2, \ldots , w_m))$

$l_a = \sigma (W_{a_2}(\mathrm {ReLU}(W_{a_1}(\mathrm {GradReverse}(h_e \frown c_e)))))$

$o^{\prime }_t = l \odot o_t$
Where:
$m$ is the number of input tokens.
$w_i$ is a vector embedding for the $i$th input token, and its length is that of the output dictionary. It is not the same embedding used by the encoder LSTM.
$\sigma $ is the Sigmoid function.
$\vee $ is the max pooling of a sequence of vectors of the same length. The weight at the output vector's $i$th index is the max of all input vectors' weights at their $i$th indices.
$h_e$ and $c_e$ are the final hidden and cell states of the encoder.
$W_{a_1}$ and $W_{a_2}$ are fully connected layers.
$\frown $ is concatenation.
$\odot $ is the elementwise product.
GradReverse multiplies the gradient by a negative number upon backpropagation.
$l$ is the output of the Lexicon Unit. Due to the max pooling, only one input token can be responsible for the value at a particular index of the output vector. The weights, $w_i$, are optimized solely by computing the binary cross entropy (BCE) loss between $l$ and the indicator vector where the $k$th element is 1 if the $k$th token in the output dictionary is in the output and 0 otherwise. This procedure forces a $w_i$ to represent the output tokens that are associated with its respective input token, without relying on aggregated contributions from the presence of several input tokens, and independently of the input word order.
$l_a$ is the output of the Lexicon-Adversary Unit. Its weights are optimized according to the BCE loss with $l$ as the target. This means that $l_a$ is the Lexicon-Adversary Unit's approximation of $l$. Because $h_e$ and $c_e$ are passed through a gradient reversal layer, the LSTM encoder is regularized to produce a representation that does not include information from $l$. Consequently, the LSTM decoder does not have this information either.
$o^{\prime }_t$ is the $t$th output of our model. It can be converted to a token by finding the index with the highest weight. It is the result of combining $l$ via an elementwise product with the information from the regularized decoder.
The recurrent encoder and decoder are the only modules that can represent the syntax, but they lack the expressivity to encode all potential aspects of word meaning. So, they are not always capable of producing a theoretically denied representation by giving all words their own syntactic category. The Lexicon Unit can represent these missing lexical aspects, but it lacks the expressivity to represent the syntax. See Figure FIGREF3 for the model.
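To make the LLA computation concrete, the following is a minimal PyTorch-style sketch of the two units; the class and variable names are ours, the gradient-reversal coefficient mirrors the value reported later in the Experiments section, and the exact integration with the encoder and decoder is an assumption rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lam on backprop."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class LexiconUnit(nn.Module):
    """l = sigmoid(max-pool(w_1, ..., w_m)); each w_i has output-dictionary length."""
    def __init__(self, input_vocab, output_vocab):
        super().__init__()
        self.w = nn.Embedding(input_vocab, output_vocab)

    def forward(self, src_tokens):                     # src_tokens: (m,)
        return torch.sigmoid(self.w(src_tokens).max(dim=0).values)

class LexiconAdversary(nn.Module):
    """l_a = sigmoid(W_a2(ReLU(W_a1(GradReverse(h_e concatenated with c_e)))))."""
    def __init__(self, hidden_size, adversary_hidden, output_vocab, lam=1e-4):
        super().__init__()
        self.lam = lam
        self.net = nn.Sequential(
            nn.Linear(2 * hidden_size, adversary_hidden), nn.ReLU(),
            nn.Linear(adversary_hidden, output_vocab))

    def forward(self, h_e, c_e):
        reversed_states = GradReverse.apply(torch.cat([h_e, c_e], dim=-1), self.lam)
        return torch.sigmoid(self.net(reversed_states))

# At decoding step t:  o_prime_t = l * o_t  (elementwise product with the softmax output).
# Training signals, as described above:
#   BCE(l,   indicator of which output-dictionary tokens occur in the ground truth)
#   BCE(l_a, l)  -- the adversary approximates l, and the reversed gradient
#                   discourages the encoder states from carrying this information.
```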
Experiments
We used BIBREF4's BIBREF4 small diagnostic, the Geoquery semantic parsing dataset BIBREF5, the Wall Street Journal syntactic parsing dataset of sentences up to length 10 BIBREF6, and the Tatoeba BIBREF7 English to Chinese translation dataset processed by BIBREF8 BIBREF8.
To avoid the biases that can be introduced with hyperparameter tuning, we used the same hyperparameters with every model on every domain. These were chosen arbitrarily and kept after they enabled all models to reach a similar train accuracy (typically, close to 100 percent) and after they enabled all models to achieve a peak validation performance and then gradually yield worse validation scores. The hyperparameters are as follows. LSTM hidden size = 300, Lexicon Unit batch size = 1, batch size for other modules = 30, epoch to stop training the Lexicon Unit and start training other modules = 30, epoch to stop training = 1000, and Lexicon-Adversary Unit hidden size = 1000. The optimizer used for the Lexicon Unit was a sparse implementation of Adam BIBREF9 with a learning rate of 0.1 and otherwise the default PyTorch settings BIBREF10. In the other cases it was Adam BIBREF9 with the default PyTorch settings BIBREF10. The gradient through the encoder from the adversary's gradient reversal layer is multiplied by -0.0001. Additionally, the validation score is calculated after each train epoch, and the model with the best validation score is tested. To choose which Lexicon Unit to use, we measure its loss (BCE) on the validation set. Unless otherwise stated, we use the mean number of exact matches as the validation metric for the full model.
To judge overall translation performance, we compared the LLA-LSTM encoder and decoder with the standard LSTM encoder and decoder. We also compared our model with one that does not have the adversary but is otherwise identical. The LLA-LSTM model shows improvements over the standard model on many or all of the metrics for every naturalistic domain. Many of the improvements over the other models are several percentage points. In the few scenarios where the LLA-LSTM model does not improve upon the standard model, the discrepancy between the models is small. The discrepancy is also small when the LLA-LSTM model with no adversary performs better than the LLA-LSTM model. Table TABREF4 displays the test results across the domains.
Additionally, we provide evidence that the model learns knowledge of a separation between syntax and the lexicon that is similar to that of a human. Figure FIGREF6 displays the learned $\sigma (w)$ embeddings for some input words, across the domains. To avoid cherry-picking the results, we chose the input words arbitrarily, subject to the following constraint. We considered each word to typically have a different syntactic category than the other choices from that domain. This constraint was used to present a diverse selection of words. Table TABREF5 displays the output behavior of models that we damaged to resemble the damage that causes aphasia. To avoid cherry-picking the results, we arbitrarily chose an input for each domain, subject to the following constraint. The input is not in the train set and the undamaged LLA-LSTM model produces a translation that we judge to be correct. For all inputs that we chose, damage to the analog of Broca's area (the LSTMs) results in an output that describes content only if it is described by the input. However, the output does not show understanding of the input's syntax. In the naturalistic domains, damage to the analog of Wernicke's area (the Lexicon Unit) results in an output with incorrect content that would be acceptable if the input had different words but the same syntax. These knowledge distortions are precisely those that are expected in the respective human aphasics BIBREF0. We also provide corpus-level results from the damaged models by presenting mean precision on the test sets. Because the output languages in all of our domains use tokens to represent meanings in many cases, it is expected that the analog to Wernicke's area is responsible for maintaining a high precision.
Experiments ::: Sequences of Color
The first experiment by BIBREF4 BIBREF4 included a dataset of 14 training pairs and 10 test pairs. In the dataset, an input is a sequence of words from an artificial language created by the authors. An output is a sequence of colored dots. Because the dataset is so small, we use the train set as the validation set. The input and output dictionary are 7 and 4 words, respectively (not including the stop, “$<s>$,” token). In their paper, the authors argue that it is clear that the words have meanings. Four of the words correspond to unique output tokens, and three of them correspond to functions of the output tokens (for example, repeating the same dot three times). The dataset showcases the contrast between human and standard neural network responses. Their paper shows that humans had high accuracy on the test set, whereas standard neural models scored essentially zero exact matches.
The LLA-LSTM model that we tested appears to achieve only insignificantly higher results in Table TABREF4. However, it has learned, from just 14 training examples, how to map some of the words to BIBREF4's interpretation of their context-invariant meanings. This is shown in Figure FIGREF6 (a). In the figure, “dax,” “lug,” “wif,” and “zup” are interpreted correctly to mean “r,” “g,” “b,” and “y,” respectively. Here, the letters correspond to the types of unique dots, which are red, green, blue, and yellow, respectively. The other words, “fep,” “kiki,” and “blicket,” are taken by BIBREF4 to have functional meanings, and so are correctly not associated strongly with any of the output tokens. The exceptions are two erroneous associations between “kiki” and blue and “blicket” and green. Also, every sentence has a stop token, so the LLA units learned that the context-invariant meanings of each word include it. The LLA units can handle cases where a word corresponds to multiple output tokens, and the output tokens need not be monolithic in the output sequence. As shown in tests from all of the other domains, these output token correspondences may or may not be relevant depending on the specific context of a word, but the recurrent component of the architecture is capable of determining which to use.
Experiments ::: Semantic Parsing
Geoquery (GEO) is a dataset where an input is an English geography query and the corresponding output is a parse that a computer could use to look up the answer in a database BIBREF5. We used the standard test set of 250 pairs from BIBREF12 BIBREF12. The remaining data were randomly split into a validation set of 100 pairs and a train set of 539 pairs. We tokenized the input data by splitting on the words and removing punctuation. We tokenized the output data by removing commas and splitting on words, parentheses, and variables. There are 283 tokens in the input dictionary and 177 tokens in the output dictionary, respectively.
Figure FIGREF6 (b) shows some weights for four input words, which are all relevant to the inputs. Many of the weights correspond directly to the correct predicates. Other tokens have high weights because they are typically important to any parse. These are parentheses, variables (A, B, C, and D), the “answer” token, and the stop token.
Experiments ::: Syntactic Parsing
The Wall Street Journal portion of the Penn Treebank is a dataset where English sentences from The Wall Street Journal are paired with human-generated phrase parses BIBREF6. We use the test, validation, and train set from BIBREF13's BIBREF13 paper. For efficiency, we only use sentences that have 10 or fewer words, lowercase all words, and modify BIBREF13's output data so that left parentheses are paired with their corresponding nonterminal and right parentheses are paired with their corresponding terminal. The input and output data were both tokenized by splitting where there is a space. The test, validation, and train set are 398, 258, and 6007 pairs, respectively. There are 9243 tokens in the input dictionary and 9486 tokens in the output dictionary.
Figure FIGREF6 (c) shows some weights for four input words. They all highlight the relevant terminal, and syntactic categories that are usually associated with that word. The associated categories typically are either those of that word, the phrases headed by the category of that word, or those that select or are selected by that word. The relevant nonterminal terminology is as follows BIBREF6: “(in” is a preposition or subordinating conjunction, “(np” is a noun phrase, “(pp” is a prepositional phrase, “(np-subj” is a noun phrase with a surface subject marking, “(vp” is a verb phrase, “(vbn” is a verb in the past participle, “(adjp” is an adjective phrase, “(vbp” is a non-3rd person singular present verb, “(prp” is a personal pronoun, “(rb” is an adverb, “(sq” is the main clause of a wh-question, or it indicates an inverted yes or no question, and “(s” is the root.
Experiments ::: English to Chinese
The Tatoeba BIBREF7 English to Chinese translation dataset, processed by BIBREF8 BIBREF8, is a product of a crowdsourced effort to translate sentences of a user's choice into another language. The data were split randomly into a test, validation, and train set of 1500, 1500, and 18205 pairs, respectively. The English was tokenized by splitting on punctuation and words. The Chinese was tokenized by splitting on punctuation and characters. There are 6919 and 3434 tokens in the input and output dictionary, respectively. There are often many acceptable outputs when translating one natural language to another. As a result, we use the corpus-level BLEU score BIBREF11 to test models and score them on the validation set.
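As a side note, corpus-level BLEU can be computed with NLTK as sketched below; the toy sentences are ours and this is only an illustration of the metric, not the evaluation script used for these results.

```python
from nltk.translate.bleu_score import corpus_bleu

# One list of references per hypothesis; Chinese is tokenized into characters,
# matching the character-level tokenization described above.
references = [[list("我吃了一些鱼。")]]   # assumed reference for "I ate some fish."
hypotheses = [list("我吃了一些鱼")]       # a model output to score
print(corpus_bleu(references, hypotheses))
```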
Figure FIGREF6 (d) shows some weights for four input words. The listed Chinese words are an acceptable translation (depending on the context) and correspond roughly one-to-one with the English inputs. There are three exceptions. Although 么 is correctly given a low weight, its presence seems to be an error; it usually appears with another character to mean “what.” 我們 and 我们 typically translate to “we,” even though 我 alone translates to “me.” 們 is a plural marker and 们 is the same, but simplified; both versions evidently found their way into the dataset. The network has correctly learned to associate both Chinese words necessary to form the meaning of “we.” Also, 步散 means “walk,” but 散 generally does not appear alone to mean “walk.” Again, the network has learned to correctly associate all of the necessary characters with an input word.
The results from this dataset in Table TABREF5 warrant a discussion for readers who do not know Chinese. As in the other cases, the model demonstrates the expected knowledge and lack thereof when different types of artificial aphasia are induced. The outputs are also productions that Chinese aphasics are expected to make per BIBREF0's BIBREF0 description. When the model is undamaged, its output is a correct translation for “I ate some fish.” When the model's LSTMs are damaged (simulating the conditions for Broca's aphasia), the production has incorrect syntax, and translates word for word to “eat I ...” These are both correct content words. When the model's Lexicon Unit is damaged (simulating the conditions for Wernicke's aphasia), the production has correct syntax. Impressively, the Chinese actually has the same syntax as the correct translation for “I ate some fish.” However, the content is nonsensical. The English translation is “I took the utterance.” Compared to the correct Mandarin translation, this incorrect one has the same subject and the same past-tense marker, 了, for the verb. However it uses a different verb, object, and determiner.
Related Work
There is evidence that generic attention mechanisms for machine translation already utilize the thesis that words have meanings that are independent of syntax. They learn correspondences between output tokens and a hidden state produced immediately after an encoder reads a particular input word BIBREF14. But the same mechanism is not at play in our model. Generic attention mechanisms do not necessarily impose a constraint on the input's syntax representation. Additionally, typical objective functions do not explicitly link input words with invariance in the output. Finally, one does not need to choose either LLA units or attention. LLA units can be incorporated into recurrent neural network systems with attention or other machine transduction architectures such as transformers BIBREF15.
Recent work has incorporated some of the ideas in our paper into a neural machine translation model with the use of a specific attention mechanism BIBREF16. But the authors only demonstrate success on a single artificial dataset with a lexicon of about ten words, and they did not explore the effects of damaging parts of their model. Their optimization procedure also does not prohibit context-invariant lexical information from passing through the recurrent portion of their model. This incorrectly allows the possibility for a representation to be learned that gives every input word its own syntactic category. Lastly, their architecture provides a softer constraint than the one that we demonstrate, as information from several input words can aggregate and pass through the non-recurrent module that they use.
There are other attempts to incorporate theories about human language to regularize a transduction model, but many have not scaled to the level of generality that the LLA units and some attention architectures show. These include synchronous grammars BIBREF17, data augmentation BIBREF18, Meta learning BIBREF19, and hard-coded maps or copying capabilities from input to output BIBREF20 BIBREF21. All require hard-coded rules that are often broken by the real world.
Conclusion
Neural and cognitive theories provide an imperative for computational models to understand human language by separating representations of word meanings from those of syntax. Using this constraint, we introduced new neural units that can provide this separation for the purpose of translating human languages. When added to an LSTM encoder and decoder, our units showed improvements in all of our experiment domains over the typical model. The domains were a small artificial diagnostic dataset, semantic parsing, syntactic parsing, and English to Mandarin Chinese translation. We also showed that the model learns a representation of human language that is similar to that of our brains. When damaged, the model displays the same knowledge distortions that aphasics do.
Acknowledgments
NOT INCLUDED IN DRAFT SUBMISSION
| Unanswerable
0b9021cefca71081e617a362e7e3995c5f1d2a88 | 0b9021cefca71081e617a362e7e3995c5f1d2a88_0 | Q: What are the other models they compare to?
Text: Introduction
Text is important in many artificial intelligence applications. Among various text mining techniques, sentiment analysis is a key component in applications such as public opinion monitoring and comparative analysis. Sentiment analysis can be divided into three problems according to input texts, namely, sentence, paragraph, and document levels. This study focuses on sentence and paragraph levels.
Text sentiment analysis is usually considered a text classification problem. Almost all existing text classification techniques are applied to text sentiment analysis BIBREF0 . Typical techniques include bag-of-words (BOW)-based BIBREF1 , deep learning-based BIBREF2 , and lexicon-based (or rule-based) methods BIBREF3 .
Although many achievements have been made and sentiment analysis has been successfully used in various commercial applications, its accuracy can be further improved. The construction of a high-accuracy sentiment classification model usually entails the challenging compilation of training sets with numerous samples and sufficiently accurate labels. The reason behind this difficulty is two-fold. First, sentiment is somewhat subjective, and a sample may receive different labels from different users. Second, some texts contain complex sentiment representations, and a single label is difficult to provide. We conduct a statistical analysis of public Chinese sentiment text sets in GitHub. The results show that the average label error is larger than 10%. This error value reflects the degree of difficulty of sentiment labeling.
Privative and interrogative sentences are difficult to classify when deep learning-based methods are applied. Although lexicon-based methods can deal with particular types of privative sentences, their generalization capability is poor.
We address the above issues with a new methodology. First, we introduce a two-stage labeling strategy for sentiment texts. In the first stage, annotators are invited to label a large number of short texts with relatively pure sentiment orientations. Each sample is labeled by only one annotator. In the second stage, a relatively small number of text samples with mixed sentiment orientations are annotated, and each sample is labeled by multiple annotators. Second, we propose a two-level long short-term memory (LSTM) BIBREF4 network to achieve two-level feature representation and classify the sentiment orientations of a text sample to utilize two labeled data sets. Lastly, in the proposed two-level LSTM network, lexicon embedding is leveraged to incorporate linguistic features used in lexicon-based methods.
Three Chinese sentiment data sets are compiled to investigate the performance of the proposed methodology. The experimental results demonstrate the effectiveness of the proposed methods. Our work is new in the following aspects.
The rest of this paper is organized as follows. Section 2 briefly reviews related work. Section 3 describes our methodology. Section 4 reports the experimental results, and Section 5 concludes the study.
Text Sentiment Analysis
Sentiment analysis aims to predict the sentiment polarity of an input text sample. Sentiment polarity can be divided into negative, neutral, and positive in many applications.
Existing sentiment classification methods can be roughly divided into two categories, namely, lexicon-based and machine learning-based methods BIBREF5 . Lexicon-based methods BIBREF6 construct polar and privative word dictionaries. A set of rules for polar and privative words is compiled to judge the sentiment orientation of a text document. This method cannot effectively predict implicit orientations. Machine learning-based methods BIBREF7 utilize a standard binary or multi-category classification approach. Different feature extraction algorithms, including BOW BIBREF8 and part of speech (POS) BIBREF7 , are used. Word embedding and deep neural networks have recently been applied to sentiment analysis, and promising results have been obtained BIBREF9 BIBREF10 .
Lexicon-based Sentiment Classification
Lexicon-based methods are actually implemented in an unsupervised manner. They infer the sentiment categories of input texts on the basis of polar and privative words. The primary advantage of these methods is that they do not require labeled training data. The key to lexicon-based methods is the lexical resource construction, which maps words into a category (positive, negative, neutral, or privative). Senti-WordNet BIBREF11 is a lexical resource for English text sentiment classification. For Chinese texts, Senti-HowNet is usually used.
Fig. 1 characterizes a typical lexicon-based sentiment classification approach. The approach iteratively checks each word in an input sentence from left to right. The weight score of each word is calculated according to the procedure shown in Fig. 1. The final sentiment score is the average score of the words with weight scores. The scores of positive, neutral, and negative sentiments are denoted as “+1",“0", and “-1", respectively. According to the lexicon-based algorithm shown in Fig. 1, the sentiment score of “it is not bad" is 0.25, and the sentiment score of “it is good" is 1. However, the score of “it is not so bad" is -0.75, and this score is definitely wrong. Therefore, machine learning (including feature learning) methodologies have become mainstream in sentiment analysis.
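For illustration, a toy version of the lexicon-based scoring loop of Fig. 1 is sketched below; the word lists and the flipping rule are simplified placeholders, and the exact weighting procedure of Fig. 1 (e.g., how it averages and how it treats intervening words) differs in details that are not reproduced here.

```python
POSITIVE = {"good"}
NEGATIVE = {"bad", "poor"}
PRIVATIVE = {"not"}

def lexicon_score(words):
    """Score each word (+1 positive, -1 negative, 0 otherwise); a privative word
    flips the sign of the next polar word; return the average over all words."""
    scores, flip = [], 1
    for w in words:
        if w in PRIVATIVE:
            scores.append(0.0)
            flip = -1
        elif w in POSITIVE:
            scores.append(flip * 1.0)
            flip = 1
        elif w in NEGATIVE:
            scores.append(flip * -1.0)
            flip = 1
        else:
            scores.append(0.0)
    return sum(scores) / len(scores) if scores else 0.0

print(lexicon_score("it is not bad".split()))     # 0.25 under this toy rule
print(lexicon_score("it is not so bad".split()))  # 0.2 here; the rule in Fig. 1
                                                  # instead yields the erroneous -0.75
```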
Deep Learning-based Sentiment Classification
Deep learning (including word embedding BIBREF12 ) has been applied to almost all text-related applications, such as translation BIBREF13 , quality assurance BIBREF14 , recommendation BIBREF15 , and categorization BIBREF16 . Popular deep neural networks are divided into convolutional neural networks (CNNs) BIBREF17 and recurrent neural networks (RNNs) BIBREF18 BIBREF19 . Both are utilized in sentiment classification BIBREF20 . Kim investigated the use of CNN in sentence sentiment classification and achieved promising results BIBREF2 . LSTM BIBREF21 , a classical type of RNN, is the most popular network used for sentiment classification. A bi-directional LSTM BIBREF22 with an attention mechanism is demonstrated to be effective in sentiment analysis.
Deep learning-based methods rarely utilize the useful resources adopted in lexicon-based methods. Qiao et al. BIBREF23 incorporated lexicon-based cues into the training of an LSTM-based model. Their proposed method relies on a new loss function that considers the relationships between polar or certain types of words (e.g., privative) and those words next to them in input texts. Our study also combines lexical cues into LSTM. Nevertheless, unlike Qiao et al.'s study that implicitly used lexical cues, the present work explicitly uses lexical cues in the LSTM network. Shin et al. BIBREF24 combined the lexicon embeddings of polar words with word embeddings for sentiment classification. The difference between our approach and the method proposed by Shin et al. is discussed in Section 3.3.5.
Numerous studies on aspect-level sentiment analysis exist BIBREF25 . This problem is different from the sentiment classification investigated in this study.
METHODOLOGY
This section first introduces our two-stage labeling procedure. A two-level LSTM is then proposed. Lexicon embedding is finally leveraged to incorporate lexical cues.
Two-stage Labeling
As stated earlier, sentiment is subjective, and texts usually contain mixed sentiment orientations. Therefore, texts' sentiment orientations are difficult to label. In our study, three sentiment labels, namely, positive, neutral, and negative, are used. The following sentences are taken as examples.
S1: The service is poor. The taste is good, but the rest is not so bad.
S2: The quality of the phone is good, but the appearance is just so-so.
In user annotation, the labels of these two sentences depend on users. If a user is concerned about service, then the label of S1 may be “negative". By contrast, for another user who does not care about service, the label may be “positive". Similarly, a user may label S2 as “positive" if he cares about quality. Another user may label it as “negative" if the conjunction “but" attracts the user's attention more. Another user may label it as “neutral" if they are concerned about quality and appearance.
The underlying reason is that sentiment is more subjective than semantics. In related research on subjective categorization, such as visual aesthetics, each sample is usually repeatedly annotated by multiple annotators, and the average label is taken as the final label of the sample. This labeling strategy can also be applied to text sentiment annotation. However, we argue that this strategy is unsuitable for a (relatively) large number of samples. The reason lies in the following two aspects.
Multiple annotators for a large number of data sets require a large budget.
In our practice, annotators claim that their judgment criteria on sentiment become fused on texts with mixed sentiment orientations (e.g., S1 and S2) over time during labeling, and they become bored accordingly.
A two-stage labeling strategy is adopted in this study. In the first stage, each sentence/paragraph is divided into several clauses according to punctuation. The sentiment of each partitioned clause is relatively easy to annotate; therefore, each clause is labeled by only one user. In the second stage, a relatively small-sized sentence/paragraph set is labeled, and each sentence is labeled by all involved annotators. We still take the two sentences, S1 and S2, as examples. S1 and S2 are split into clauses, as shown below.
S1:
S1.1: The service is poor
S1.2: The taste is good
S1.3: but the rest is not so bad.
S2:
S2.1: The quality of the phone is good
S2.2: but the appearance is just so-so.
Each of the above clauses is labeled by only one annotator. The annotation in the first stage is easy to perform; thus, the number of clauses can be larger than the number of sentences used in the second labeling stage.
Two-level LSTM
Given two training data sets (denoted by T1 and T2), a new learning model should be utilized. LSTM is a widely used deep neural network in deep learning-based text classification.
LSTM is a typical RNN model in which short-term memory can last for a long period of time. An LSTM is applicable to classifying, processing, and predicting time series information with given time lags of unknown size. A common LSTM block is composed of a cell, an input gate, an output gate, and a forget gate. The forward computation of an LSTM block at time INLINEFORM0 or position INLINEFORM1 is as follows BIBREF21 : DISPLAYFORM0
where INLINEFORM0 is the input vector at time INLINEFORM1 (or position INLINEFORM2 ); INLINEFORM3 and INLINEFORM4 are the input vectors of the input unit and input gate, respectively; INLINEFORM5 and INLINEFORM6 are the output and hidden vectors at time INLINEFORM7 , respectively; INLINEFORM8 is the output of the forget gate at time INLINEFORM9 ; INLINEFORM10 is the internal state of the memory cell in an LSTM block at time INLINEFORM11 ; and INLINEFORM12 is the sigmoid active function.
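The forward equations referenced above are not reproduced in this text; for reference, one standard formulation of the LSTM block (the notation may differ slightly from the original) is:

```latex
\begin{aligned}
i_t &= \sigma (W_i x_t + U_i h_{t-1} + b_i) \\
f_t &= \sigma (W_f x_t + U_f h_{t-1} + b_f) \\
o_t &= \sigma (W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \tanh (W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh (c_t)
\end{aligned}
```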
When LSTM is used to classify an input sentence, the hidden vectors of each input vector are summed to form a dense vector that can be considered the feature representation of the input sentence, i.e., DISPLAYFORM0
In many applications, a bi-directional LSTM (bi-LSTM) structure is usually used, as shown in Fig. 2(a). In bi-LSTM, forward and backward information are considered for information at time INLINEFORM0 ; hence, the context is modeled. Bi-LSTM is thus significantly reasonable for text processing tasks. In our two-level LSTM, bi-LSTM is used in each level.
The output hidden state at time INLINEFORM0 of a bi-LSTM block can be described as follows: DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are the corresponding vectors at time INLINEFORM3 in the forward LSTM block; and INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 are the corresponding vectors at time INLINEFORM7 in the backward LSTM block. INLINEFORM8 . When attention is used, the dense feature vector INLINEFORM9 of an input sentence is calculated as follows: DISPLAYFORM0
where INLINEFORM0 is the vector that consists of attention weights. The bi-LSTM with attention is shown in Fig. 2(b).
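A compact PyTorch sketch of this attention pooling over bi-LSTM hidden states follows; the linear scoring function used to produce the attention weights is our assumption, since the text does not spell out the exact attention form.

```python
import torch
import torch.nn as nn

class BiLSTMWithAttention(nn.Module):
    def __init__(self, emb_dim, hidden_size=300):
        super().__init__()
        self.rnn = nn.LSTM(emb_dim, hidden_size, bidirectional=True, batch_first=True)
        self.score = nn.Linear(2 * hidden_size, 1)   # assumed attention scorer

    def forward(self, x):                            # x: (batch, seq_len, emb_dim)
        h, _ = self.rnn(x)                           # h_t = [forward h_t ; backward h_t]
        alpha = torch.softmax(self.score(h), dim=1)  # attention weights over positions
        v = (alpha * h).sum(dim=1)                   # dense feature vector for the input
        return v, alpha
```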
Our proposed network consists of two levels of LSTM network. In the first level, a bi-LSTM is used and learned on the basis of the first training set T1. This level is a conventional sentiment classification process. The input of this level is a clause, and the input INLINEFORM0 is the embedding of the basic unit of the input texts. The network is shown in Fig. 3(a).
In the second level, a bi-LSTM is also used and learned on the basis of the second training set T2. The input of this level is a sentence or a paragraph. The input INLINEFORM0 consists of two parts. The first part is the feature vector of the INLINEFORM1 -th clause. The feature vector is generated by the first-level network. In other words, the dense feature shown in Fig. 3(a) ( INLINEFORM2 ) is used. The second part is the sentiment score (not predicted label) output by the first-level network. The sentence S1 (The service is poor. The taste is good, but the rest is not so bad.) used in Subsection 3.1 is taken as an illustrative example. S1 contains three clauses. Therefore, the input vector of S1 can be represented by INLINEFORM3
where DISPLAYFORM0
where INLINEFORM0 is the output score of the INLINEFORM1 th clause by the first-level LSTM and INLINEFORM2 is the feature representation of the INLINEFORM3 th clause by the first-level LSTM. The whole two-level network is shown in Fig. 3(b).
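The construction of the second-level input sequence can be sketched as below; `first_level` is assumed to return a clause's dense feature vector and its predicted sentiment score, which is our interface, not the authors' code.

```python
import torch

def build_second_level_inputs(clauses, first_level):
    """Each clause contributes the concatenation of its first-level sentiment
    score p_i and its first-level dense feature vector v_i, as described above."""
    steps = []
    for clause in clauses:
        v_i, p_i = first_level(clause)            # dense vector and output score
        steps.append(torch.cat([p_i.view(1), v_i], dim=-1))
    return torch.stack(steps)                     # (num_clauses, 1 + feature_dim)

# For S1, the clauses would be:
#   ["The service is poor", "The taste is good", "but the rest is not so bad"]
```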
Lexical Embedding
The proposed lexicon embedding is based on INLINEFORM0 -hot encoding. Therefore, INLINEFORM1 -hot encoding is first described.
For categorical data, one-hot encoding is the most widely used encoding strategy when different categories are independent. For example, if one-hot encoding is used to represent three categories, namely, positive, neutral, and negative, the encoding vectors for the three categories are INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , respectively.
In this work, many lexical cues are categorical data, and different categories are independent. These lexical cues can directly be represented by one-hot encoding. The encoded vectors for lexical cues are then concatenated with other vectors, such as character/word embedding. However, one-hot encoding presents two main limitations when the encoded vector is concatenated with other vectors.
The value difference between the elements of one-hot encoded vectors and those of other encoded vectors (e.g., word embedding vectors) may be large. Fig. 4 shows the histogram of the values of the elements of the word embedding vectors. The magnitude of most elements are smaller than 1.
The lengths of one-hot encoded vectors are usually shorter than those of other encoded vectors. Consequently, the proportion of one-hot encoded part is small in the concatenated vectors.
The above two limitations affect the final sentiment analysis performance. To this end, we propose a new encoding strategy. DISPLAYFORM0
where INLINEFORM0 is the INLINEFORM1 -hot encoded vector, INLINEFORM2 is the proportion parameter, INLINEFORM3 is the one-hot encoded vector, and INLINEFORM4 is an INLINEFORM5 -dimensional vector. If INLINEFORM6 and INLINEFORM7 are equal to 1, then INLINEFORM8 -hot encoding is reduced to one-hot encoding. The parameter INLINEFORM9 is applied to increase the length of the final encoded vector.
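As we read Eq. (6), the encoding replaces the single nonzero entry of a one-hot vector with a learnable proportion value and duplicates it so that several positions are nonzero; a sketch under that reading (the class name and initial value are ours) follows.

```python
import torch
import torch.nn as nn

class SoftDuplicatedOneHot(nn.Module):
    """Soft one-hot: each category occupies n positions filled with a learnable
    proportion value gamma (with gamma = 1 and n = 1 this reduces to one-hot)."""
    def __init__(self, num_categories, n=10, gamma_init=0.5):
        super().__init__()
        self.num_categories, self.n = num_categories, n
        self.gamma = nn.Parameter(torch.tensor(gamma_init))

    def forward(self, category_index):
        one_hot = torch.zeros(self.num_categories)
        one_hot[category_index] = 1.0
        # duplicate each element n times, then scale by the learned proportion
        return self.gamma * one_hot.repeat_interleave(self.n)

# Six word types (Pos, Neg, Pri, Sup, Int, Oth) with n = 10 would give
# 60-dimensional lexicon vectors to concatenate with character/word embeddings.
```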
Most lexicon-based sentiment methods rely on four types of words, namely, positive, negative, neutral, and privative. These words are useful cues for predicting the sentiment labels of input texts. The incorporation of these words should also be useful. In addition, conditional (suppositive) and interrogative sentences complicate sentiment prediction. A previous study has shown that conditional sentences make up approximately 8% of a typical document BIBREF26 . Sentiments expressed in a conditional sentence can be difficult to determine due to the semantic condition. The sentiment polarities of interrogative sentences are also difficult to classify according to our empirical study.
Five types of words, namely, positive (Pos), negative (Neg), privative (Pri), suppositive (Sup), and interrogative (Int), are represented by the proposed encoding method. The rest words, which do not belong to any of the above five types, are named “others (Oth)" instead of “neutral" because some words, such as “the", are unrelated to “sentiment". The value of INLINEFORM0 in Eq. (6) is set as 10. The encoded vectors are as follows. INLINEFORM1
In the proposed INLINEFORM0 -hot embedding, the parameter INLINEFORM1 can be learned during training. The representation of the third clause (“but the rest is not so bad") of S1 in Subsection 3.1 is taken as an illustrative example. The new embedding of each word in this clause is as follows. DISPLAYFORM0
Certain types (e.g., positive, negative, and privative) of words should play more important roles than other words do in texts; therefore, their embeddings are also used in the attention layer. A new LSTM based on our lexicon embedding is proposed, as shown in Fig. 5. The attention layer and final dense vector of the network in Fig. 3(a) are calculated as follows. DISPLAYFORM0
where INLINEFORM0 is the attention weight for the INLINEFORM1 -th input, lt is the lexicon embedding for key lexical words for the INLINEFORM2 -th input, and INLINEFORM3 is the final dense vector. Eq. (2) is used in the first-level LSTM.
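One plausible way to realize this lexicon-aware attention is sketched below: the score for position t is computed from the hidden state together with the lexicon embedding of that position's word; the exact scoring function is an assumption on our part, and the same pattern extends to the POS and conjunction embeddings discussed next.

```python
import torch
import torch.nn as nn

class LexiconAwareAttention(nn.Module):
    def __init__(self, hidden2, lex_dim):
        super().__init__()
        self.score = nn.Linear(hidden2 + lex_dim, 1)     # assumed scorer

    def forward(self, h, lex):     # h: (batch, T, hidden2); lex: (batch, T, lex_dim)
        alpha = torch.softmax(self.score(torch.cat([h, lex], dim=-1)), dim=1)
        v = (alpha * h).sum(dim=1) # dense vector weighted by lexicon-aware attention
        return v, alpha
```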
POS is usually used as a key cue in sentiment analysis BIBREF27 . To this end, we use additional lexicon embedding. The new lexicon embedding includes several major types of POS, namely, interrogative, exclamatory, and others. This new lexicon embedding is also applied to the attention layer. The motivation lies in that certain types of POS should play important roles in sentiment.
The proposed INLINEFORM0 -hot embedding is still applied to POS types in this study. According to our initial case studies, eight POS types are considered. They are noun, adjective, verb, pronoun, adverb, preposition, accessory, and others. The eight POS types are represented by the proposed INLINEFORM1 -hot encoding. We let INLINEFORM2 in Eq. (6) be 10. The first three POS types are as follows. INLINEFORM3
When POS embedding is used, the attention layer and final outputs of the network in Eq. (3) become DISPLAYFORM0
where INLINEFORM0 is the lexicon embedding for key lexical words for the INLINEFORM1 -th input.
Conjunction words play important roles in sentiment analysis BIBREF28 . For example, conjunctions such as “but" and “moreover" usually indicate the focus of texts and attract readers' attention. Therefore, conjunctions are considered in the input of the second-level LSTM.
Once a set of conjunction words is compiled, INLINEFORM0 -hot embedding is used. In our experiments, the number of conjunction words is 169. Therefore, the parameter INLINEFORM1 in Eq. (2) is set as 1.
When conjunction embedding is used for the second-level layer, the attention layer and final outputs of the network in Fig. 3(b) are calculated as follows. DISPLAYFORM0
where INLINEFORM0 is the attention weight for the INLINEFORM1 -th input clause; INLINEFORM2 is the hidden vector of the second-level LSTM; INLINEFORM3 and INLINEFORM4 are the conjunction embeddings for the first and last words in the INLINEFORM5 -th input clause, respectively; and INLINEFORM6 is the final dense vector used for the final classification.
Shin et al. BIBREF24 also embedded lexical information into sentiment analysis. Three major differences exist between our method and the method proposed by Shin et al. BIBREF24 .
The lexicon embedding proposed by Shin et al. uses one-hot encoding, whereas the proposed method uses a new encoding strategy that can be considered a soft one-hot encoding.
The lexicon embedding proposed by Shin et al. extends the length of raw encoded vectors. However, the extension aims to keep the lengths of lexical and word embeddings equal. Their extension method also only relies on zero padding and is thus different from the proposed method.
Only sentimental words are considered in the lexicon embedding proposed by Shin et al. On the contrary, sentimental words, POS, and conjunctions are considered in our work.
The Learning Procedure
The algorithmic steps of the entire learning procedure for the proposed INLINEFORM0 -hot lexicon embedding-based two-level LSTM (called INLINEFORM1 Tl-LSTM) are shown in Algorithm 1. In Algorithm 1, T1 refers to the training data that consist of clauses and the labels obtained in the first-stage labeling procedure. T2 refers to the training data that consist of sentences and the labels obtained in the second-stage labeling procedure. The structure of INLINEFORM2 Tl-LSTM is presented in Fig. 6.
INLINEFORM0 Tl-LSTM Input: Training sets T1 and T2; dictionary of key lexical words; POS for each word; dictionary of conjunction words; character/word embeddings for each character/word.
Output: A trained two-level LSTM for sentiment classification.
Steps:
Construct the embedding vector for each character (including punctuation) in the clauses in T1. The embeddings include the character/word and lexicon embeddings of each character/word;
Train the first-level LSTM on the basis of the input embedding vectors and labels of the T1 text clauses;
Run the learned first-level LSTM on each clause of the text samples in T2. Record the predicted score INLINEFORM0 and the final dense vector INLINEFORM1 for each clause;
Construct the embedding vector for each clause in the text samples in T2. Each embedding vector consists of INLINEFORM0 , INLINEFORM1 , and the lexicon embedding of conjunctions of each clause;
Train the second-level LSTM on the basis of the input embedding vectors and labels of the T2 text samples.
The first-level and second-level LSTM networks constitute the final two-level LSTM.
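The control flow of Algorithm 1 can be sketched as below; all interfaces (what the models return, how clauses are split and embedded) are assumptions for illustration, not the authors' code.

```python
import torch

def train_two_level(first_level, second_level, clause_data, sentence_data,
                    split_clauses, epochs=1000):
    """clause_data: list of (clause_tensor, label) from T1; sentence_data: list of
    (sentence, label) from T2; split_clauses(sentence) -> list of clause tensors;
    first_level(x) -> (log_probs, dense_vec, score); second_level(seq) -> log_probs."""
    nll = torch.nn.NLLLoss()

    opt = torch.optim.Adam(first_level.parameters())        # Steps 1-2: train on T1
    for _ in range(epochs):
        for x, y in clause_data:
            log_probs, _, _ = first_level(x)
            loss = nll(log_probs.unsqueeze(0), y.view(1))
            opt.zero_grad(); loss.backward(); opt.step()

    level2_data = []                                         # Steps 3-4: encode T2 clauses
    with torch.no_grad():
        for sentence, y in sentence_data:
            feats = []
            for clause in split_clauses(sentence):
                _, v, p = first_level(clause)
                feats.append(torch.cat([p.view(1), v]))
            level2_data.append((torch.stack(feats), y))

    opt = torch.optim.Adam(second_level.parameters())        # Step 5: train on T2
    for _ in range(epochs):
        for seq, y in level2_data:
            loss = nll(second_level(seq).unsqueeze(0), y.view(1))
            opt.zero_grad(); loss.backward(); opt.step()
    return first_level, second_level
```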
The proposed two-level LSTM can be applied to texts in arbitrary languages. Word information is required in lexicon construction regardless of whether character or word embedding is used. The reason is that the three types of lexicon embeddings are performed at the word level. Therefore, when character embedding is used, the lexicon embedding of each character is the lexicon embedding of the word containing it.
This section shows the evaluation of the proposed methodology in terms of the two-level LSTM network and each part of the lexicon embedding.
We compile three Chinese text corpora from online data for three domains, namely, “hotel", “mobile phone (mobile)", and “travel". All texts are about user reviews. Each text sample collected is first partitioned into clauses according to Chinese tokens. Three clause sets are subsequently obtained from the three text corpora.
The labels “+1", “0.5", and “0" correspond to the three sentiment classes “positive", “neutral", and “negative", respectively. The text data are labeled according to our two-stage labeling strategy.
In the first stage, only one user is invited to label each clause sample as the sentiment orientations for clauses (or sub-sentences) are easy to label.
In the second stage, five users are invited to label each text sample in the three raw data sets. The average score of the five users on each sample is calculated. Samples with average scores located in [0.6, 1] are labeled as “positive". Samples with average scores located in [0, 0.4] are labeled as “negative". Others are labeled as “neutral". The details of the labeling results are shown in Table 1.
All the training and test data and the labels are available online.
In our experiments, the five types of key lexical words introduced in Subsection 3.3.2 are manually constructed. The details of the five types of words are listed in Table 2. The conjunction words are also manually constructed. The number of conjunction words used in the experiments is 169.
In each experimental run, the training set is compiled on the basis of the training data listed in Table 1. The compiling rule is specified before each experimental run. The test data are fixed to facilitate experimental duplication and comparison by other researchers.
In our experiments, three competing algorithms, namely, BOW, CNN, and (conventional) LSTM, are used.
For BOW, term frequency-inverse document frequency is utilized to construct features. Ridge regression BIBREF29 is used as a classifier. For CNN, a three-channel CNN is used. For LSTM, one-layer and two-layer bi-LSTM with attention are adopted, and the results of the network with superior performance are presented. CNN and LSTM are performed on TensorFlow, and default parameter settings are followed.
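For instance, the BOW baseline can be reproduced in a few lines with scikit-learn, as sketched below; the tokenizer and solver defaults are assumptions, and for Chinese the texts would need to be pre-segmented into space-separated words.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import RidgeClassifier
from sklearn.pipeline import make_pipeline

def bow_baseline(train_texts, train_labels, test_texts):
    """TF-IDF features with a ridge classifier, mirroring the BOW setup above."""
    model = make_pipeline(TfidfVectorizer(), RidgeClassifier())
    model.fit(train_texts, train_labels)
    return model.predict(test_texts)
```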
The key parameters are searched as follows. The embedding dimensions of characters and words are searched in [100, 150, 200, 250, 300]. The parameter INLINEFORM0 in INLINEFORM1 -hot encoding is searched in INLINEFORM2 .
In this subsubsection, each of the three raw data sets (associated with their labels) shown in Table 1 is used. The clause data are not used. In other words, the training data used in this subsubsection are the same as those used in previous studies. For each data corpus, 1000 raw data samples are used as the test data, and the rest are used as the training data. The involved algorithms are detailed as follows.
CNN-C denotes the CNN with (Chinese) character embedding.
CNN-W denotes the CNN with (Chinese) word embedding.
CNN-Lex-C denotes the algorithm which also integrates polar words in CNN which is proposed by Shin et al. BIBREF24 . The (Chinese) character embedding is used.
CNN-Lex-W denotes the algorithm which also integrates polar words in CNN which is proposed by Shin et al. BIBREF24 . The (Chinese) word embedding is used.
Bi-LSTM-C denotes the BI-LSTM with (Chinese) character embedding.
Bi-LSTM-W denotes the Bi-LSTM with (Chinese) word embedding.
Lex-rule denotes the rule-based approach shows in Fig. 1. This approach is unsupervised.
BOW denotes the conventional algorithm which is based of bag-of-words features.
The accuracies of the above algorithms are listed in Table 3. Overall, Bi-LSTM outperforms CNN and BOW. This finding is in accordance with the conclusion from extensive comparative studies that RNNs perform well against CNNs in a broad range of natural language processing (NLP) tasks BIBREF30 . In addition, CNN-lex outperforms CNN under both character and word embeddings, which suggests that lexicon cues are useful in sentiment analysis. Lex-rule achieves the lowest accuracies on all the three data sets. Considering that the performances of (traditional) CNN, Lex-rule, and BOW are relatively poor, they are not applied in the remaining experiments.
In this experimental comparison, the proposed two-level LSTM is evaluated, whereas lexicon embedding is not used in the entire network. The primary goal is to test whether the introduced two-stage labeling and the two-level network structure are useful for sentiment analysis.
The raw and clause data listed in Table 1 are used to perform the two-level LSTM. Tl-LSTM denotes the two-level LSTM. “R+C" refers to the mixed data of raw and clause data. The test data are still the 1000 samples used in section 4.3.1 for each corpus. Table 4 shows the classification accuracies. To distinguish these results from those in Table 3, we explicitly add “R+C" after each algorithm in Table 4. In the last line of Table 4, the base results for each corpus in Table 3 are also listed.
On all the three data corpora, the proposed two-level network (without lexicon embedding) with character embedding, Tl-LSTM-C, outperforms all the other involved algorithms. On the travel and the mobile corpora, Tl-LSTM-W outperforms Bi-LSTM-W. The results in Table 4 indicate that the performances of Tl-LSTM on the mixed training and test data (R+C) are better than those of Bi-LSTM. This comparison indicates that the proposed two-level LSTM is effective.
In addition, for the involved algorithms, most results achieved on “R+C" are better than the best results only achieved on `R'listed in Table 3. This comparison suggests that the introduced two-stage labeling is useful.
The results also show that in the two-level LSTM, character embedding is more effective than word embedding.
In this experimental run, lexicon embedding is used in the proposed two-level LSTM or INLINEFORM0 Tl-LSTM. Table 5 shows the results. The optimal parameter INLINEFORM1 is about 11.
The performances of Tl-LSTM with lexicon embedding (i.e., INLINEFORM0 Tl-LSTM) are consistently better than those of Tl-LSTM without lexicon embedding (i.e., Tl-LSTM) listed in Table 5. The improved accuracies of INLINEFORM1 Tl-LSTM over Tl-LSTM on the three data corpora are explicitly listed in Table 6.
The experimental evaluation discussed in Subsection 4.3 verifies the effectiveness of the proposed method, INLINEFORM0 Tl-LSTM. Unlike the conventional RNN, INLINEFORM1 Tl-LSTM contains lexicon embedding that consists of new technique and components, including INLINEFORM2 -hot encoding, embedding for polar words, embedding for POS, and embedding for conjunctions. Therefore, this subsection evaluates the performances of the involved technique and embeddings separately.
Our INLINEFORM0 -hot encoding differs from one-hot encoding in two aspects. The first aspect is that the nonzero values in one-hot encoding are only equal to 1, whereas the nonzero values in INLINEFORM1 -hot encoding are INLINEFORM2 . The second aspect is that only one element in one-hot encoding is nonzero, whereas n elements in INLINEFORM3 -hot encoding are nonzero.
In this experiment, we test whether INLINEFORM0 -hot encoding is useful in two experimental runs. In the first run, the value of INLINEFORM1 is manually set to 0.5 and 1 in the experimental run without optimization. The parameter INLINEFORM2 in Eq. (6) is set as 15. The classification accuracies vary according to different INLINEFORM3 values on all the three data corpora. When INLINEFORM4 equals 1, the accuracies are the lowest in most cases shown in Fig. 7.
The results shown in Fig. 7 indicate that the value of INLINEFORM0 does affect the performance of the entire network. Consequently, the classical one-hot encoding, which fixes the value of nonzero elements as 1, is ineffective. In our experiments, the learned value of INLINEFORM1 is approximate 0.4.
In the second run, the performances under different INLINEFORM0 (i.e., 1, 5, 10, 15) are tested. Table 7 shows the comparison results. The value of INLINEFORM1 does affect the performance of the entire network, thereby indicating that the introduction of the INLINEFORM2 -duplicated strategy in encoding is effective. In the experiments, when INLINEFORM3 is increasing, the accuracies first increase and then decrease. The main reason may lie in the fact that when INLINEFORM4 becomes large, the proportion of lexicon embedding becomes large accordingly. An over-length input feature vector may incur “curse of dimensionality" and thus weaken the performance of the proposed two-level network.
In this experimental run, we test whether the labeled polar (negative and positive) words do affect the performance of the entire method when they are used in lexicon embedding. To this end, we order the polar words according to their frequencies in the training data. Top 0%, 50%, 100% polar words are used. The corresponding classification accuracies are depicted in Fig. 8.
In most cases, the accuracies are the lowest when no polar words are used in the lexicon embedding. When all polar words are used, the proposed network achieves the highest accuracies.
In the experiment, only one user is invited to manually compile the dictionary for a data corpus. One and a half hours are needed for each data corpus. In our view, considering the performance improvement relative to the time consumed, manually compiling the polar words for sentiment analysis is worthwhile.
In this experimental run, we test whether POS cues do play positive roles in the entire model. To this end, we remove POS in the lexicon embedding of the proposed method. The results are shown in Fig. 9.
In most cases, the accuracies with POS embedding are greater than those without POS embedding, thereby indicating that the application of POS to lexicon embedding is useful.
In this experimental run, we test whether conjunction cues do play positive roles in the entire model. To this end, the lexicon embedding for conjunction words is also removed from the proposed method. The results are shown in Fig. 10.
The algorithm with conjunction embedding outperforms that without conjunction embedding consistently, thereby indicating that the application of conjunction to lexicon embedding is useful.
High-quality labels are crucial for learning systems. Nevertheless, texts with mixed sentiments are difficult for humans to label in text sentiment classification. In this study, a new labeling strategy is introduced to partition texts into those with pure and mixed sentiment orientations. These two categories of texts are labeled using different processes. A two-level network is accordingly proposed to utilize the two labeled data in our two-stage labeling strategy. Lexical cues (e.g., polar words, POS, conjunction words) are particularly useful in sentiment analysis. These lexical cues are used in our two-level network, and a new encoding strategy, that is, INLINEFORM0 -hot encoding, is introduced. INLINEFORM1 -hot encoding is motivated by one-hot encoding. However, the former alleviates the drawbacks of the latter. Three Chinese sentiment text data corpora are compiled to verify the effectiveness of the proposed methodology. Our proposed method achieves the highest accuracies on these three data corpora.
The proposed two-level network and lexicon embedding can also be applied to other types of deep neural networks. In our future work, we will extend our main idea into several networks and text mining applications.
The authors wish to thank Zefeng Han, Qing Yin, Lei Yang, Xiaonan Wang, Nan Chen, Rujing Yao, Lihong Guo, Pinglong Zhao for the labeling of the experimental data. | CNN-C, CNN-W, CNN-Lex-C, CNN-Lex-W, Bi-LSTM-C , Bi-LSTM-W, Lex-rule, BOW |
6ad92aad46d2e52f4e7f3020723922255fd2b603 | 6ad92aad46d2e52f4e7f3020723922255fd2b603_0 | Q: What is the agreement value for each dataset?
Text: Introduction
Text is important in many artificial intelligence applications. Among various text mining techniques, sentiment analysis is a key component in applications such as public opinion monitoring and comparative analysis. Sentiment analysis can be divided into three problems according to input texts, namely, sentence, paragraph, and document levels. This study focuses on sentence and paragraph levels.
Text sentiment analysis is usually considered a text classification problem. Almost all existing text classification techniques are applied to text sentiment analysis BIBREF0 . Typical techniques include bag-of-words (BOW)-based BIBREF1 , deep learning-based BIBREF2 , and lexicon-based (or rule-based) methods BIBREF3 .
Although many achievements have been made and sentiment analysis has been successfully used in various commercial applications, its accuracy can be further improved. The construction of a high-accuracy sentiment classification model usually entails the challenging compilation of training sets with numerous samples and sufficiently accurate labels. The reason behind this difficulty is two-fold. First, sentiment is somewhat subjective, and a sample may receive different labels from different users. Second, some texts contain complex sentiment representations, and a single label is difficult to provide. We conduct a statistical analysis of public Chinese sentiment text sets in GitHub. The results show that the average label error is larger than 10%. This error value reflects the degree of difficulty of sentiment labeling.
Privative and interrogative sentences are difficult to classify when deep learning-based methods are applied. Although lexicon-based methods can deal with particular types of privative sentences, their generalization capability is poor.
We address the above issues with a new methodology. First, we introduce a two-stage labeling strategy for sentiment texts. In the first stage, annotators are invited to label a large number of short texts with relatively pure sentiment orientations. Each sample is labeled by only one annotator. In the second stage, a relatively small number of text samples with mixed sentiment orientations are annotated, and each sample is labeled by multiple annotators. Second, we propose a two-level long short-term memory (LSTM) BIBREF4 network to achieve two-level feature representation and classify the sentiment orientations of a text sample to utilize two labeled data sets. Lastly, in the proposed two-level LSTM network, lexicon embedding is leveraged to incorporate linguistic features used in lexicon-based methods.
Three Chinese sentiment data sets are compiled to investigate the performance of the proposed methodology. The experimental results demonstrate the effectiveness of the proposed methods. Our work is new in the following aspects.
The rest of this paper is organized as follows. Section 2 briefly reviews related work. Section 3 describes our methodology. Section 4 reports the experimental results, and Section 5 concludes the study.
Text Sentiment Analysis
Sentiment analysis aims to predict the sentiment polarity of an input text sample. Sentiment polarity can be divided into negative, neutral, and positive in many applications.
Existing sentiment classification methods can be roughly divided into two categories, namely, lexicon-based and machine learning-based methods BIBREF5 . Lexicon-based methods BIBREF6 construct polar and privative word dictionaries. A set of rules for polar and privative words is compiled to judge the sentiment orientation of a text document. This method cannot effectively predict implicit orientations. Machine learning-based methods BIBREF7 utilize a standard binary or multi-category classification approach. Different feature extraction algorithms, including BOW BIBREF8 and part of speech (POS) BIBREF7 , are used. Word embedding and deep neural networks have recently been applied to sentiment analysis, and promising results have been obtained BIBREF9 BIBREF10 .
Lexicon-based Sentiment Classification
Lexicon-based methods are implemented in an unsupervised manner. They infer the sentiment categories of input texts on the basis of polar and privative words. The primary advantage of these methods is that they do not require labeled training data. The key to lexicon-based methods is lexical resource construction, which maps each word to a category (positive, negative, neutral, or privative). Senti-WordNet BIBREF11 is a lexical resource for English text sentiment classification. For Chinese texts, Senti-HowNet is usually used.
Fig. 1 characterizes a typical lexicon-based sentiment classification approach. The approach iteratively checks each word in an input sentence from left to right. The weight score of each word is calculated according to the procedure shown in Fig. 1. The final sentiment score is the average of the word weight scores. The scores of positive, neutral, and negative sentiments are denoted as "+1", "0", and "-1", respectively. According to the lexicon-based algorithm shown in Fig. 1, the sentiment score of "it is not bad" is 0.25, and the sentiment score of "it is good" is 1. However, the score of "it is not so bad" is -0.75, which is clearly wrong. Therefore, machine learning (including feature learning) methodologies have become mainstream in sentiment analysis.
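To make this failure mode concrete, the following is a minimal Python sketch of a simple lexicon-based scorer; the tiny word lists and the flip-the-next-word negation rule are illustrative assumptions and do not reproduce the exact weighting procedure of Fig. 1.

```python
# Minimal sketch of a lexicon-based sentiment scorer (illustrative assumptions only).
POSITIVE = {"good", "great"}
NEGATIVE = {"bad", "poor"}
PRIVATIVE = {"not", "never"}  # negation words

def lexicon_score(sentence: str) -> float:
    """Scan words left to right; a privative word flips the polarity of the
    immediately following word. Return the average of the polar-word scores."""
    words = sentence.lower().split()
    scores = []
    i = 0
    while i < len(words):
        w = words[i]
        if w in PRIVATIVE and i + 1 < len(words):
            nxt = words[i + 1]
            if nxt in POSITIVE:
                scores.append(-1.0)
            elif nxt in NEGATIVE:
                scores.append(1.0)
            # If the next word is not polar, the negation is simply lost.
            i += 2
            continue
        if w in POSITIVE:
            scores.append(1.0)
        elif w in NEGATIVE:
            scores.append(-1.0)
        i += 1
    return sum(scores) / len(scores) if scores else 0.0

print(lexicon_score("it is good"))        # 1.0
print(lexicon_score("it is not bad"))     # 1.0: the negation of "bad" is handled
print(lexicon_score("it is not so bad"))  # -1.0: "so" breaks the rule, giving the wrong sign
```

Even this toy version shows how a small change in phrasing can flip the predicted polarity of a rule-based scorer.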
Deep Learning-based Sentiment Classification
Deep learning (including word embedding BIBREF12 ) has been applied to almost all text-related applications, such as translation BIBREF13 , quality assurance BIBREF14 , recommendation BIBREF15 , and categorization BIBREF16 . Popular deep neural networks are divided into convolutional neural networks (CNNs) BIBREF17 and recurrent neural networks (RNNs) BIBREF18 BIBREF19 . Both are utilized in sentiment classification BIBREF20 . Kim investigated the use of CNN in sentence sentiment classification and achieved promising results BIBREF2 . LSTM BIBREF21 , a classical type of RNN, is the most popular network used for sentiment classification. A bi-directional LSTM BIBREF22 with an attention mechanism has been demonstrated to be effective in sentiment analysis.
Deep learning-based methods rarely utilize the useful resources adopted in lexicon-based methods. Qiao et al. BIBREF23 incorporated lexicon-based cues into the training of an LSTM-based model. Their method relies on a new loss function that considers the relationships between polar or certain types of words (e.g., privative) and the words next to them in input texts. Our study also combines lexical cues with LSTM. Nevertheless, unlike Qiao et al.'s study, which used lexical cues implicitly, the present work uses lexical cues explicitly in the LSTM network. Shin et al. BIBREF24 combined the lexicon embeddings of polar words with word embeddings for sentiment classification. The difference between our approach and the method proposed by Shin et al. is discussed in Section 3.3.5.
Numerous studies on aspect-level sentiment analysis exist BIBREF25 . This problem is different from the sentiment classification investigated in this study.
METHODOLOGY
This section first introduces our two-stage labeling procedure. A two-level LSTM is then proposed. Lexicon embedding is finally leveraged to incorporate lexical cues.
Two-stage Labeling
As stated earlier, sentiment is subjective, and texts usually contain mixed sentiment orientations. Therefore, texts' sentiment orientations are difficult to label. In our study, three sentiment labels, namely, positive, neutral, and negative, are used. The following sentences are taken as examples.
S1: The service is poor. The taste is good, but the rest is not so bad.
S2: The quality of the phone is good, but the appearance is just so-so.
In user annotation, the labels of these two sentences depend on the users. If a user is concerned about service, then the label of S1 may be "negative". By contrast, for another user who does not care about service, the label may be "positive". Similarly, a user may label S2 as "positive" if he cares about quality. Another user may label it as "negative" if the conjunction "but" attracts the user's attention more. Yet another user may label it as "neutral" if they are concerned about both quality and appearance.
The underlying reason is that sentiment is more subjective than semantics. In related research on subjective categorization, such as visual aesthetics, each sample is usually repeatedly annotated by multiple annotators, and the average label is taken as the final label of the sample. This labeling strategy can also be applied to text sentiment annotation. However, we argue that this strategy is unsuitable for a (relatively) large number of samples. The reason lies in the following two aspects.
Labeling a large number of samples with multiple annotators requires a large budget.
In our practice, annotators report that their judgment criteria for sentiment become blurred over time when labeling texts with mixed sentiment orientations (e.g., S1 and S2), and they become bored accordingly.
A two-stage labeling strategy is adopted in this study. In the first stage, each sentence/paragraph is divided into several clauses according to punctuation. The sentiment of each partitioned clause is relatively easy to annotate; therefore, each clause is labeled by only one user. In the second stage, a relatively small-sized sentence/paragraph set is labeled, and each sentence is labeled by all involved annotators. We still take the two sentences, S1 and S2, as examples. S1 and S2 are split into clauses, as shown below.
S1:
S1.1: The service is poor
S1.2: The taste is good
S1.3: but the rest is not so bad.
S2:
S2.1: The quality of the phone is good
S2.2: but the appearance is just so-so.
Each of the above clauses is labeled by only one annotator. The annotation in the first stage is easy to perform; thus, the number of clauses can be larger than the number of sentences used in the second labeling stage.
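A minimal sketch of the clause partitioning used before the first labeling stage is given below; the delimiter set (common Chinese and English clause punctuation) is an assumption, since the exact tokenization rules are not specified here.

```python
import re

# Illustrative delimiter set: Chinese and English commas, periods,
# question/exclamation marks, and semicolons.
CLAUSE_DELIMITERS = r"[，。！？；,.!?;]"

def split_into_clauses(text: str) -> list:
    """Partition a sentence/paragraph into clauses at punctuation marks."""
    return [c.strip() for c in re.split(CLAUSE_DELIMITERS, text) if c.strip()]

print(split_into_clauses(
    "The service is poor. The taste is good, but the rest is not so bad."))
# ['The service is poor', 'The taste is good', 'but the rest is not so bad']
```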
Two-level LSTM
Given the two training data sets (denoted by T1 and T2), a learning model that can exploit both is required. LSTM is a widely used deep neural network for deep learning-based text classification.
LSTM is a typical RNN model whose short-term memory can last for a long period of time. An LSTM is applicable to classifying, processing, and predicting time series with time lags of unknown size. A common LSTM block is composed of a cell, an input gate, an output gate, and a forget gate. The forward computation of an LSTM block at time INLINEFORM0 or position INLINEFORM1 is as follows BIBREF21 : DISPLAYFORM0
where INLINEFORM0 is the input vector at time INLINEFORM1 (or position INLINEFORM2 ); INLINEFORM3 and INLINEFORM4 are the input vectors of the input unit and input gate, respectively; INLINEFORM5 and INLINEFORM6 are the output and hidden vectors at time INLINEFORM7 , respectively; INLINEFORM8 is the output of the forget gate at time INLINEFORM9 ; INLINEFORM10 is the internal state of the memory cell in an LSTM block at time INLINEFORM11 ; and INLINEFORM12 is the sigmoid activation function.
When LSTM is used to classify an input sentence, the hidden vectors of each input vector are summed to form a dense vector that can be considered the feature representation of the input sentence, i.e., DISPLAYFORM0
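For reference, the following NumPy sketch implements a standard LSTM forward step and the summation of hidden vectors into a sentence representation; the gate equations and parameter names follow the common convention and are assumptions rather than the paper's exact notation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One forward step of a standard LSTM block (gate names are illustrative)."""
    i = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])  # input gate
    f = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])  # forget gate
    o = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])  # output gate
    a = np.tanh(W["a"] @ x_t + U["a"] @ h_prev + b["a"])  # cell candidate (input unit)
    c = f * c_prev + i * a                                # internal memory cell state
    h = o * np.tanh(c)                                    # hidden vector
    return h, c

def sentence_vector(xs, W, U, b, hidden_dim):
    """Sum the hidden vectors over all positions to obtain the dense
    sentence representation described above."""
    h, c = np.zeros(hidden_dim), np.zeros(hidden_dim)
    hs = []
    for x_t in xs:
        h, c = lstm_step(x_t, h, c, W, U, b)
        hs.append(h)
    return np.sum(hs, axis=0)

# Tiny usage example with random parameters (dimensions are illustrative).
d, k = 4, 3
rng = np.random.default_rng(0)
W = {g: rng.normal(size=(d, k)) for g in "ifoa"}
U = {g: rng.normal(size=(d, d)) for g in "ifoa"}
b = {g: np.zeros(d) for g in "ifoa"}
xs = [rng.normal(size=k) for _ in range(5)]  # a 5-token input
print(sentence_vector(xs, W, U, b, hidden_dim=d).shape)  # (4,)
```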
In many applications, a bi-directional LSTM (bi-LSTM) structure is used, as shown in Fig. 2(a). In bi-LSTM, both forward and backward information are considered at time INLINEFORM0 ; hence, the context is modeled. Bi-LSTM is thus well suited to text processing tasks. In our two-level LSTM, bi-LSTM is used in each level.
The output hidden state at time INLINEFORM0 of a bi-LSTM block can be described as follows: DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are the corresponding vectors at time INLINEFORM3 in the forward LSTM block; and INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 are the corresponding vectors at time INLINEFORM7 in the backward LSTM block. INLINEFORM8 . When attention is used, the dense feature vector INLINEFORM9 of an input sentence is calculated as follows: DISPLAYFORM0
where INLINEFORM0 is the vector that consists of attention weights. The bi-LSTM with attention is shown in Fig. 2(b).
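The attention pooling can be sketched as follows in NumPy; the additive scoring function is a common choice and an assumption here, since the text does not specify how the attention weights are computed.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def attention_pool(H, W_att, v_att):
    """H is the (T, 2d) matrix of concatenated forward/backward hidden states.
    Returns the attention weights and the attention-weighted dense feature vector."""
    scores = np.tanh(H @ W_att) @ v_att  # (T,) unnormalized attention scores
    alpha = softmax(scores)              # attention weights
    dense = alpha @ H                    # (2d,) dense feature vector
    return alpha, dense

# Illustrative usage with random hidden states.
rng = np.random.default_rng(1)
T, two_d, att_dim = 6, 8, 5
H = rng.normal(size=(T, two_d))
alpha, dense = attention_pool(H, rng.normal(size=(two_d, att_dim)), rng.normal(size=att_dim))
print(alpha.shape, dense.shape)  # (6,) (8,)
```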
Our proposed network consists of two levels of LSTM networks. In the first level, a bi-LSTM is used and learned on the basis of the first training set T1. This level is a conventional sentiment classification process. The input of this level is a clause, and the input INLINEFORM0 is the embedding of the basic unit of the input texts. The network is shown in Fig. 3(a).
In the second level, a bi-LSTM is also used and learned on the basis of the second training set T2. The input of this level is a sentence or a paragraph. The input INLINEFORM0 consists of two parts. The first part is the feature vector of the INLINEFORM1 -th clause. The feature vector is generated by the first-level network. In other words, the dense feature shown in Fig. 3(a) ( INLINEFORM2 ) is used. The second part is the sentiment score (not predicted label) output by the first-level network. The sentence S1 (The service is poor. The taste is good, but the rest is not so bad.) used in Subsection 3.1 is taken as an illustrative example. S1 contains three clauses. Therefore, the input vector of S1 can be represented by INLINEFORM3
where DISPLAYFORM0
where INLINEFORM0 is the output score of the INLINEFORM1 th clause by the first-level LSTM and INLINEFORM2 is the feature representation of the INLINEFORM3 th clause by the first LSTM. The network of the whole two-level network is shown in Fig. 3(b).
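A sketch of how the second-level input sequence could be assembled from first-level outputs is given below; `first_level_predict` is a hypothetical stand-in for the trained first-level bi-LSTM and is assumed to return a clause's sentiment score and dense feature vector.

```python
import numpy as np

def build_second_level_input(clauses, first_level_predict):
    """Concatenate, for each clause, the first-level sentiment score with the
    first-level dense feature vector; the resulting sequence is fed to the
    second-level LSTM."""
    rows = []
    for clause in clauses:
        score, feature = first_level_predict(clause)
        rows.append(np.concatenate(([score], feature)))
    return np.stack(rows)  # shape: (num_clauses, 1 + feature_dim)

# Hypothetical stand-in that returns a random score and feature vector.
def first_level_predict(clause):
    rng = np.random.default_rng(abs(hash(clause)) % (2**32))
    return rng.uniform(0.0, 1.0), rng.normal(size=16)

s1_clauses = ["The service is poor",
              "The taste is good",
              "but the rest is not so bad"]
print(build_second_level_input(s1_clauses, first_level_predict).shape)  # (3, 17)
```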
Lexical Embedding
The proposed lexicon embedding is based on INLINEFORM0 -hot encoding. Therefore, INLINEFORM1 -hot encoding is first described.
For categorical data, one-hot encoding is the most widely used encoding strategy when different categories are independent. For example, if one-hot encoding is used to represent three categories, namely, positive, neutral, and negative, the encoding vectors for the three categories are INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , respectively.
In this work, many lexical cues are categorical data, and different categories are independent. These lexical cues can directly be represented by one-hot encoding. The encoded vectors for lexical cues are then concatenated with other vectors, such as character/word embedding. However, one-hot encoding presents two main limitations when the encoded vector is concatenated with other vectors.
The value difference between the elements of one-hot encoded vectors and those of other encoded vectors (e.g., word embedding vectors) may be large. Fig. 4 shows the histogram of the values of the elements of the word embedding vectors. The magnitudes of most elements are smaller than 1.
The lengths of one-hot encoded vectors are usually shorter than those of other encoded vectors. Consequently, the proportion of the one-hot encoded part is small in the concatenated vectors.
The above two limitations affect the final sentiment analysis performance. To address them, we propose a new encoding strategy. DISPLAYFORM0
where INLINEFORM0 is the INLINEFORM1 -hot encoded vector, INLINEFORM2 is the proportion parameter, INLINEFORM3 is the one-hot encoded vector, and INLINEFORM4 is an INLINEFORM5 -dimensional vector. If INLINEFORM6 and INLINEFORM7 are equal to 1, then INLINEFORM8 -hot encoding is reduced to one-hot encoding. The parameter INLINEFORM9 is applied to increase the length of the final encoded vector.
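The encoding can be sketched in a few lines; `rho` stands for the proportion parameter and `n` for the duplication factor in Eq. (6), and these names are chosen here for readability rather than taken from the paper.

```python
import numpy as np

def rho_hot(one_hot, rho, n):
    """Duplicate the one-hot vector n times and scale it by rho.
    With rho = 1 and n = 1, this reduces to ordinary one-hot encoding."""
    return rho * np.tile(np.asarray(one_hot, dtype=float), n)

print(rho_hot([0, 1, 0], rho=1.0, n=1))  # [0. 1. 0.]  (plain one-hot)
print(rho_hot([0, 1, 0], rho=0.4, n=3))  # [0.  0.4 0.  0.  0.4 0.  0.  0.4 0. ]
```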
Most lexicon-based sentiment methods rely on four types of words, namely, positive, negative, neutral, and privative. These words are useful cues for predicting the sentiment labels of input texts, so incorporating them should also be useful. In addition, conditional (suppositive) sentences are common; a previous study has shown that they comprise approximately 8% of the sentences in a typical document BIBREF26 . Sentiments expressed in a conditional sentence can be difficult to determine due to the semantic condition. The sentiment polarities of interrogative sentences are also difficult to classify according to our empirical study.
Five types of words, namely, positive (Pos), negative (Neg), privative (Pri), suppositive (Sup), and interrogative (Int), are represented by the proposed encoding method. The remaining words, which do not belong to any of the above five types, are labeled "others (Oth)" instead of "neutral" because some words, such as "the", are unrelated to sentiment. The value of INLINEFORM0 in Eq. (6) is set as 10. The encoded vectors are as follows. INLINEFORM1
In the proposed INLINEFORM0 -hot embedding, the parameter INLINEFORM1 can be learned during training. The representation of the third clause (“but the rest is not so bad") of S1 in Subsection 3.1 is taken as an illustrative example. The new embedding of each word in this clause is as follows. DISPLAYFORM0
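A sketch of the word-type lexicon embedding is shown below; the tiny dictionary is a placeholder for the manually compiled lexicons described in Section 4, and the default rho value of 0.4 is only an illustrative choice (close to the learned value reported in the experiments).

```python
import numpy as np

WORD_TYPES = ["Pos", "Neg", "Pri", "Sup", "Int", "Oth"]
# Placeholder lexicon; the real dictionaries are compiled manually (see Table 2).
LEXICON = {"good": "Pos", "poor": "Neg", "bad": "Neg",
           "not": "Pri", "if": "Sup", "why": "Int"}

def rho_hot(one_hot, rho, n):
    return rho * np.tile(np.asarray(one_hot, dtype=float), n)

def word_type_embedding(word, rho=0.4, n=10):
    """Encode a word's lexical type with the duplicated, scaled one-hot scheme
    (n = 10 as stated in the text); unknown words fall into the 'Oth' category."""
    word_type = LEXICON.get(word.lower(), "Oth")
    one_hot = [1.0 if word_type == t else 0.0 for t in WORD_TYPES]
    return rho_hot(one_hot, rho, n)

print(word_type_embedding("not").shape)  # (60,) = 6 word types x 10 copies
```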
Certain types (e.g., positive, negative, and privative) of words should play more important roles than other words do in texts; therefore, their embeddings are also used in the attention layer. A new LSTM based on our lexicon embedding is proposed, as shown in Fig. 5. The attention layer and final dense vector of the network in Fig. 3(a) are calculated as follows. DISPLAYFORM0
where INLINEFORM0 is the attention weight for the INLINEFORM1 -th input, l_t is the lexicon embedding of the key lexical words for the INLINEFORM2 -th input, and INLINEFORM3 is the final dense vector. Eq. (2) is used in the first-level LSTM.
POS is usually used as a key cue in sentiment analysis BIBREF27 . To this end, we use an additional lexicon embedding. The new lexicon embedding includes several major types of POS, namely, interrogative, exclamatory, and others. This new lexicon embedding is also applied to the attention layer. The motivation is that certain POS types should play important roles in sentiment.
The proposed INLINEFORM0 -hot embedding is still applied to POS types in this study. According to our initial case studies, eight POS types are considered. They are noun, adjective, verb, pronoun, adverb, preposition, accessory, and others. The eight POS types are represented by the proposed INLINEFORM1 -hot encoding. We let INLINEFORM2 in Eq. (6) be 10. The first three POS types are as follows. INLINEFORM3
When POS embedding is used, the attention layer and final outputs of the network in Eq. (3) become DISPLAYFORM0
where INLINEFORM0 is the lexicon embedding for key lexical words for the INLINEFORM1 -th input.
Conjunction words play important roles in sentiment analysis BIBREF28 . For example, conjunctions such as "but" and "moreover" usually indicate the focus of texts and attract readers' attention. Therefore, conjunctions are considered in the input of the second-level LSTM.
Once a set of conjunction words is compiled, INLINEFORM0 -hot embedding is used. In our experiments, the number of conjunction words is 169. Therefore, the parameter INLINEFORM1 in Eq. (6) is set as 1.
When conjunction embedding is used for the second-level layer, the attention layer and final outputs of the network in Fig. 3(b) are calculated as follows. DISPLAYFORM0
where INLINEFORM0 is the attention weight for the INLINEFORM1 -th input clause; INLINEFORM2 is the hidden vector of the second-level LSTM; INLINEFORM3 and INLINEFORM4 are the conjunction embeddings for the first and last words in the INLINEFORM5 -th input clause, respectively; and INLINEFORM6 is the final dense vector used for the final classification.
Shin et al. BIBREF24 also embedded lexical information into sentiment analysis. Three major differences exist between our method and the method proposed by Shin et al. BIBREF24 .
The lexicon embedding proposed by Shin et al. uses one-hot encoding, whereas the proposed method uses a new encoding strategy that can be considered a soft one-hot encoding.
The lexicon embedding proposed by Shin et al. extends the length of raw encoded vectors. However, the extension aims to keep the lengths of lexical and word embeddings equal. Their extension method also relies only on zero padding and is thus different from the proposed method.
Only sentimental words are considered in the lexicon embedding proposed by Shin et al. In contrast, sentimental words, POS, and conjunctions are all considered in our work.
The Learning Procedure
The algorithmic steps of the entire learning procedure for the proposed INLINEFORM0 -hot lexicon embedding-based two-level LSTM (called INLINEFORM1 Tl-LSTM) are shown in Algorithm 1. In Algorithm 1, T1 refers to the training data that consist of clauses and the labels obtained in the first-stage labeling procedure. T2 refers to the training data that consist of sentences and the labels obtained in the second-stage labeling procedure. The structure of INLINEFORM2 Tl-LSTM is presented in Fig. 6.
Algorithm 1: INLINEFORM0 Tl-LSTM
Input: Training sets T1 and T2; dictionary of key lexical words; POS for each word; dictionary of conjunction words; character/word embeddings for each character/word.
Output: A trained two-level LSTM for sentiment classification.
Steps:
1. Construct the embedding vector for each character (including punctuation) in the clauses in T1. The embeddings include the character/word and lexicon embeddings of each character/word;
2. Train the first-level LSTM on the basis of the input embedding vectors and the labels of the T1 text clauses;
3. Run the learned first-level LSTM on each clause of the text samples in T2. Record the predicted score INLINEFORM0 and the final dense vector INLINEFORM1 for each clause;
4. Construct the embedding vector for each clause in the text samples in T2. Each embedding vector consists of INLINEFORM0 , INLINEFORM1 , and the lexicon embedding of the conjunctions of each clause;
5. Train the second-level LSTM on the basis of the input embedding vectors and the labels of the T2 text samples.
The first-level and second-level LSTM networks constitute the final two-level LSTM.
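The overall procedure can be summarized by the following outline; `embed_clause`, `train_bilstm`, and `run_bilstm` are hypothetical placeholders for the embedding construction, bi-LSTM training, and bi-LSTM inference routines, so this is a sketch of the control flow rather than a runnable training script.

```python
# Outline of the two-stage training procedure (a sketch of Algorithm 1).
# The three helpers below are hypothetical placeholders.

def embed_clause(clause):
    raise NotImplementedError("character/word embedding plus lexicon embedding of a clause")

def train_bilstm(inputs, labels):
    raise NotImplementedError("stand-in for training a bi-LSTM with attention")

def run_bilstm(model, inputs):
    raise NotImplementedError("stand-in for inference; returns (score, dense_vector)")

def train_two_level_lstm(T1, T2):
    """T1: list of (clause, label); T2: list of (list_of_clauses, label)."""
    # Steps 1-2: embed the T1 clauses and train the first-level bi-LSTM.
    first_level = train_bilstm([embed_clause(c) for c, _ in T1],
                               [y for _, y in T1])
    # Steps 3-4: run the first level on every clause of every T2 sample and
    # assemble the second-level inputs (score, dense vector, conjunction embedding).
    second_inputs, second_labels = [], []
    for clauses, y in T2:
        second_inputs.append([run_bilstm(first_level, embed_clause(c)) for c in clauses])
        second_labels.append(y)
    # Step 5: train the second-level bi-LSTM on the assembled sequences.
    second_level = train_bilstm(second_inputs, second_labels)
    return first_level, second_level
```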
The proposed two-level LSTM can be applied to texts with arbitrary languages. Word information is required in lexical construction regardless of whether character or word embedding is used. The reason is that the three types of lexicon embeddings are performed at the word level. Therefore, when character embedding is used, the lexicon embedding of each character is the lexicon embedding of the word containing it.
EXPERIMENTS
This section shows the evaluation of the proposed methodology in terms of the two-level LSTM network and each part of the lexicon embedding.
We compile three Chinese text corpora from online data for three domains, namely, "hotel", "mobile phone (mobile)", and "travel". All texts are user reviews. Each collected text sample is first partitioned into clauses according to Chinese punctuation. Three clause sets are subsequently obtained from the three text corpora.
The labels “+1", “0.5", and “0" correspond to the three sentiment classes “positive", “neutral", and “negative", respectively. The text data are labeled according to our two-stage labeling strategy.
In the first stage, only one user is invited to label each clause sample, as the sentiment orientations of clauses (or sub-sentences) are easy to label.
In the second stage, five users are invited to label each text sample in the three raw data sets. The average score of the five users on each sample is calculated. Samples with average scores located in [0.6, 1] are labeled as “positive". Samples with average scores located in [0, 0.4] are labeled as “negative". Others are labeled as “neutral". The details of the labeling results are shown in Table 1.
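The aggregation of the five annotators' scores into a class label can be written as a small helper, sketched below.

```python
def aggregate_label(scores):
    """Average the annotators' scores (1 = positive, 0.5 = neutral, 0 = negative)
    and map the average onto a class: [0.6, 1] -> positive, [0, 0.4] -> negative,
    otherwise neutral."""
    avg = sum(scores) / len(scores)
    if avg >= 0.6:
        return "positive"
    if avg <= 0.4:
        return "negative"
    return "neutral"

print(aggregate_label([1, 1, 0.5, 1, 0.5]))    # average 0.8 -> positive
print(aggregate_label([0.5, 0.5, 0.5, 1, 0]))  # average 0.5 -> neutral
print(aggregate_label([0, 0, 0.5, 0, 0.5]))    # average 0.2 -> negative
```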
All the training and test data and the labels are available online.
In our experiments, the five types of key lexical words introduced in Subsection 3.3.2 are manually constructed. The details of the five types of words are listed in Table 2. The conjunction words are also manually constructed. The number of conjunction words used in the experiments is 169.
In each experimental run, the training set is compiled on the basis of the training data listed in Table 1. The compiling rule is specified before each experimental run. The test data are fixed to facilitate experimental duplication and comparison by other researchers.
In our experiments, three competing algorithms, namely, BOW, CNN, and (conventional) LSTM, are used.
For BOW, term frequency-inverse document frequency is utilized to construct features, and ridge regression BIBREF29 is used as the classifier. For CNN, a three-channel CNN is used. For LSTM, one-layer and two-layer bi-LSTM with attention are adopted, and the results of the network with the superior performance are presented. CNN and LSTM are implemented in TensorFlow, and default parameter settings are followed.
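As a point of reference, the BOW baseline can be approximated in a few lines with scikit-learn, as sketched below; the example documents and labels are placeholders, and the character n-gram analyzer is an implementation choice made here because the corpora are Chinese, not a setting taken from the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import RidgeClassifier
from sklearn.pipeline import make_pipeline

# Placeholder training data; the real corpora are the hotel/mobile/travel reviews.
train_texts = ["the service is poor", "the taste is good", "it is just so-so"]
train_labels = ["negative", "positive", "neutral"]

# TF-IDF features with a ridge-regression-style linear classifier,
# mirroring the BOW baseline described above.
bow_model = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 2)),
                          RidgeClassifier())
bow_model.fit(train_texts, train_labels)
print(bow_model.predict(["the taste is poor"]))
```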
The key parameters are searched as follows. The embedding dimensions of characters and words are searched in [100, 150, 200, 250, 300]. The parameter INLINEFORM0 in INLINEFORM1 -hot encoding is searched in INLINEFORM2 .
In this subsubsection, each of the three raw data sets (associated with their labels) shown in Table 1 is used. The clause data are not used. In other words, the training data used in this subsubsection are the same as those used in previous studies. For each data corpus, 1000 raw data samples are used as the test data, and the rest are used as the training data. The involved algorithms are detailed as follows.
CNN-C denotes the CNN with (Chinese) character embedding.
CNN-W denotes the CNN with (Chinese) word embedding.
CNN-Lex-C denotes the algorithm proposed by Shin et al. BIBREF24 , which also integrates polar words into CNN; (Chinese) character embedding is used.
CNN-Lex-W denotes the algorithm proposed by Shin et al. BIBREF24 , which also integrates polar words into CNN; (Chinese) word embedding is used.
Bi-LSTM-C denotes the BI-LSTM with (Chinese) character embedding.
Bi-LSTM-W denotes the Bi-LSTM with (Chinese) word embedding.
Lex-rule denotes the rule-based approach shown in Fig. 1. This approach is unsupervised.
BOW denotes the conventional algorithm based on bag-of-words features.
The accuracies of the above algorithms are listed in Table 3. Overall, Bi-LSTM outperforms CNN and BOW. This is in accordance with the conclusion, drawn from extensive comparative studies, that RNNs perform robustly against CNNs in a broad range of natural language processing (NLP) tasks BIBREF30 . In addition, CNN-Lex outperforms CNN under both character and word embeddings, which suggests that lexicon cues are useful in sentiment analysis. Lex-rule achieves the lowest accuracies on all three data sets. Considering that the performances of (traditional) CNN, Lex-rule, and BOW are relatively poor, they are not included in the remaining experiments.
In this experimental comparison, the proposed two-level LSTM is evaluated, whereas lexicon embedding is not used in the entire network. The primary goal is to test whether the introduced two-stage labeling and the two-level network structure are useful for sentiment analysis.
The raw and clause data listed in Table 1 are used to run the two-level LSTM. Tl-LSTM denotes the two-level LSTM, and "R+C" refers to the mixture of raw and clause data. The test data are still the 1000 samples per corpus used in Section 4.3.1. Table 4 shows the classification accuracies. To make clear that the results differ from those in Table 3, we explicitly add "R+C" after each algorithm in Table 4. In the last line of Table 4, the base results for each corpus from Table 3 are also listed.
On all three data corpora, the proposed two-level network (without lexicon embedding) with character embedding, Tl-LSTM-C, outperforms all the other involved algorithms. On the travel and mobile corpora, Tl-LSTM-W outperforms Bi-LSTM-W. The results in Table 4 indicate that the performances of Tl-LSTM on the mixed training and test data (R+C) are better than those of Bi-LSTM. This comparison indicates that the proposed two-level LSTM is effective.
In addition, for the involved algorithms, most results achieved on "R+C" are better than the best results achieved on "R" alone, as listed in Table 3. This comparison suggests that the introduced two-stage labeling is useful.
The results also show that in the two-level LSTM, character embedding is more effective than word embedding.
In this experimental run, lexicon embedding is used in the proposed two-level LSTM (i.e., INLINEFORM0 Tl-LSTM). Table 5 shows the results. The optimal parameter INLINEFORM1 is about 11.
The performances of Tl-LSTM with lexicon embedding (i.e., INLINEFORM0 Tl-LSTM), listed in Table 5, are consistently better than those of Tl-LSTM without lexicon embedding. The improved accuracies of INLINEFORM1 Tl-LSTM over Tl-LSTM on the three data corpora are explicitly listed in Table 6.
The experimental evaluation discussed in Subsection 4.3 verifies the effectiveness of the proposed method, INLINEFORM0 Tl-LSTM. Unlike the conventional RNN, INLINEFORM1 Tl-LSTM contains lexicon embedding that consists of new techniques and components, including INLINEFORM2 -hot encoding, embedding for polar words, embedding for POS, and embedding for conjunctions. Therefore, this subsection evaluates the performances of the involved technique and embeddings separately.
Our INLINEFORM0 -hot encoding differs from one-hot encoding in two aspects. The first aspect is that the nonzero values in one-hot encoding are only equal to 1, whereas the nonzero values in INLINEFORM1 -hot encoding are INLINEFORM2 . The second aspect is that only one element in one-hot encoding is nonzero, whereas n elements in INLINEFORM3 -hot encoding are nonzero.
In this experiment, we test whether INLINEFORM0 -hot encoding is useful in two experimental runs. In the first run, the value of INLINEFORM1 is manually set to 0.5 and to 1, without optimization. The parameter INLINEFORM2 in Eq. (6) is set as 15. The classification accuracies vary with different INLINEFORM3 values on all three data corpora. When INLINEFORM4 equals 1, the accuracies are the lowest in most cases, as shown in Fig. 7.
The results shown in Fig. 7 indicate that the value of INLINEFORM0 does affect the performance of the entire network. Consequently, the classical one-hot encoding, which fixes the value of nonzero elements as 1, is ineffective. In our experiments, the learned value of INLINEFORM1 is approximately 0.4.
In the second run, the performances under different values of INLINEFORM0 (i.e., 1, 5, 10, 15) are tested. Table 7 shows the comparison results. The value of INLINEFORM1 does affect the performance of the entire network, thereby indicating that the introduced INLINEFORM2 -duplicated encoding strategy is effective. In the experiments, as INLINEFORM3 increases, the accuracies first increase and then decrease. The main reason may be that when INLINEFORM4 becomes large, the proportion of lexicon embedding becomes large accordingly. An over-long input feature vector may incur the "curse of dimensionality" and thus weaken the performance of the proposed two-level network.
In this experimental run, we test whether the labeled polar (negative and positive) words affect the performance of the entire method when they are used in lexicon embedding. To this end, we order the polar words according to their frequencies in the training data. The top 0%, 50%, and 100% of polar words are used. The corresponding classification accuracies are depicted in Fig. 8.
In most cases, the accuracies are the lowest when no polar words are used in the lexicon embedding. When all polar words are used, the proposed network achieves the highest accuracies.
In the experiment, only one user is invited to manually compile the polar-word dictionary for each data corpus, which takes about one and a half hours per corpus. In our view, manually compiling the polar words for sentiment analysis is worthwhile considering the performance improvement relative to the time consumed.
In this experimental run, we test whether POS cues do play positive roles in the entire model. To this end, we remove POS in the lexicon embedding of the proposed method. The results are shown in Fig. 9.
In most cases, the accuracies with POS embedding are greater than those without POS embedding, thereby indicating that the application of POS to lexicon embedding is useful.
In this experimental run, we test whether conjunction cues do play positive roles in the entire model. To this end, the lexicon embedding for conjunction words is also removed from the proposed method. The results are shown in Fig. 10.
The algorithm with conjunction embedding outperforms that without conjunction embedding consistently, thereby indicating that the application of conjunction to lexicon embedding is useful.
CONCLUSIONS
High-quality labels are crucial for learning systems. Nevertheless, texts with mixed sentiments are difficult for humans to label in text sentiment classification. In this study, a new labeling strategy is introduced to partition texts into those with pure and those with mixed sentiment orientations. These two categories of texts are labeled through different processes. A two-level network is accordingly proposed to utilize the two labeled data sets produced by our two-stage labeling strategy. Lexical cues (e.g., polar words, POS, and conjunction words) are particularly useful in sentiment analysis. These lexical cues are used in our two-level network through a new encoding strategy, namely, INLINEFORM0 -hot encoding. INLINEFORM1 -hot encoding is motivated by one-hot encoding but alleviates its drawbacks. Three Chinese sentiment text data corpora are compiled to verify the effectiveness of the proposed methodology. Our proposed method achieves the highest accuracies on these three data corpora.
The proposed two-level network and lexicon embedding can also be applied to other types of deep neural networks. In our future work, we will extend our main idea to other networks and text mining applications.
The authors wish to thank Zefeng Han, Qing Yin, Lei Yang, Xiaonan Wang, Nan Chen, Rujing Yao, Lihong Guo, Pinglong Zhao for the labeling of the experimental data.
4fdc707fae5747fceae68199851e3c3186ab8307 | 4fdc707fae5747fceae68199851e3c3186ab8307_0 | Q: How many annotators participated?
Text: Introduction
Text is important in many artificial intelligence applications. Among various text mining techniques, sentiment analysis is a key component in applications such as public opinion monitoring and comparative analysis. Sentiment analysis can be divided into three problems according to input texts, namely, sentence, paragraph, and document levels. This study focuses on sentence and paragraph levels.
Text sentiment analysis is usually considered a text classification problem. Almost all existing text classification techniques are applied to text sentiment analysis BIBREF0 . Typical techniques include bag-of-words (BOW)-based BIBREF1 , deep learning-based BIBREF2 , and lexicon-based (or rule-based) methods BIBREF3 .
Although many achievements have been made and sentiment analysis has been successfully used in various commercial applications, its accuracy can be further improved. The construction of a high-accuracy sentiment classification model usually entails the challenging compilation of training sets with numerous samples and sufficiently accurate labels. The reason behind this difficulty is two-fold. First, sentiment is somewhat subjective, and a sample may receive different labels from different users. Second, some texts contain complex sentiment representations, and a single label is difficult to provide. We conduct a statistical analysis of public Chinese sentiment text sets in GitHub. The results show that the average label error is larger than 10%. This error value reflects the degree of difficulty of sentiment labeling.
Privative and interrogative sentences are difficult to classify when deep learning-based methods are applied. Although lexicon-based methods can deal with particular types of privative sentences, their generalization capability is poor.
We address the above issues with a new methodology. First, we introduce a two-stage labeling strategy for sentiment texts. In the first stage, annotators are invited to label a large number of short texts with relatively pure sentiment orientations. Each sample is labeled by only one annotator. In the second stage, a relatively small number of text samples with mixed sentiment orientations are annotated, and each sample is labeled by multiple annotators. Second, we propose a two-level long short-term memory (LSTM) BIBREF4 network to achieve two-level feature representation and classify the sentiment orientations of a text sample to utilize two labeled data sets. Lastly, in the proposed two-level LSTM network, lexicon embedding is leveraged to incorporate linguistic features used in lexicon-based methods.
Three Chinese sentiment data sets are compiled to investigate the performance of the proposed methodology. The experimental results demonstrate the effectiveness of the proposed methods. Our work is new in the following aspects.
The rest of this paper is organized as follows. Section 2 briefly reviews related work. Section 3 describes our methodology. Section 4 reports the experimental results, and Section 5 concludes the study.
Text Sentiment Analysis
Sentiment analysis aims to predict the sentiment polarity of an input text sample. Sentiment polarity can be divided into negative, neutral, and positive in many applications.
Existing sentiment classification methods can be roughly divided into two categories, namely, lexicon-based and machine learning-based methods BIBREF5 . Lexicon-based methods BIBREF6 construct polar and privative word dictionaries. A set of rules for polar and privative words is compiled to judge the sentiment orientation of a text document. This method cannot effectively predict implicit orientations. Machine learning-based methods BIBREF7 utilize a standard binary or multi-category classification approach. Different feature extraction algorithms, including BOW BIBREF8 and part of speech (POS) BIBREF7 , are used. Word embedding and deep neural networks have recently been applied to sentiment analysis, and promising results have been obtained BIBREF9 BIBREF10 .
Lexion-based Sentiment Classification
Lexicon-based methods are actually in implemented in an unsupervised manner. They infer the sentiment categories of input texts on the basis of polar and privative words. The primary advantage of these methods is that they do not require labeled training data. The key of lexicon-based methods is the lexical resource construction, which maps words into a category (positive, negative, neutral, or privative). Senti-WordNet BIBREF11 is a lexical resource for English text sentiment classification. For Chinese texts, Senti-HowNet is usually used.
Fig. 1 characterizes a typical lexicon-based sentiment classification approach. The approach iteratively checks each word in an input sentence from left to right. The weight score of each word is calculated according to the procedure shown in Fig. 1. The final sentiment score is the average score of the words with weight scores. The scores of positive, neutral, and negative sentiments are denoted as “+1",“0", and “-1", respectively. According to the lexicon-based algorithm shown in Fig. 1, the sentiment score of “it is not bad" is 0.25, and the sentiment score of “it is good" is 1. However, the score of “it is not so bad" is -0.75, and this score is definitely wrong. Therefore, machine learning (including feature learning) methodologies have become mainstream in sentiment analysis.
Deep Learning-based Sentiment Classification
Deep learning (including word embedding BIBREF12 ) has been applied to almost all text-related applications, such as translation BIBREF13 , quality assurance BIBREF14 , recommendation BIBREF15 , and categorization BIBREF16 . Popular deep neural networks are divided into convolutional neural networks (CNNs) BIBREF17 and recurrent neural network (RNNs) BIBREF18 BIBREF19 . Both are utilized in sentiment classification BIBREF20 . Kim investigated the use of CNN in sentence sentiment classification and achieved promising results BIBREF2 . LSTM BIBREF21 , a classical type of RNN, is the most popular network used for sentiment classification. A binary-directional LSTM BIBREF22 with an attention mechanism is demonstrated to be effective in sentiment analysis.
Deep learning-based methods rarely utilize the useful resources adopted in lexicon-based methods. Qiao et al. BIBREF23 incorporated lexicon-based cues into the training of an LSTM-based model. Their proposed method relies on a new loss function that considers the relationships between polar or certain types of words (e.g., privative) and those words next to them in input texts. Our study also combines lexical cues into LSTM. Nevertheless, unlike Qiao et al.'s study that implicitly used lexical cues, the present work explicitly uses lexical cues in the LSTM network. Shin et al. BIBREF24 combined the lexicon embeddings of polar words with word embeddings for sentiment classification. The difference between our approach an the method proposed by Shin et al. the is discussed in Section 3.3.5.
Numerous studies on aspect-level sentiment analysis exist BIBREF25 . This problem is different from the sentiment classification investigated in this study.
METHODOLOGY
This section first introduces our two-stage labeling procedure. A two-level LSTM is then proposed. Lexicon embedding is finally leveraged to incorporate lexical cues.
Two-stage Labeling
As stated earlier, sentiment is subjective, and texts usually contain mixed sentiment orientations. Therefore, texts¡¯ sentiment orientations are difficult to label. In our study, three sentiment labels, namely, positive, neutral, and negative, are used. The following sentences are taken as examples.
The service is poor. The taste is good, but the rest is not so bad.
The quality of the phone is good, but the appearance is just so-so.
In user annotation, the labels of these two sentences depend on users. If a user is concerned about service, then the label of S1 may be “negative". By contrast, for another user who does not care about service, the label may be “positive". Similarly, a user may label S2 as “positive" if he cares about quality. Another user may label it as “negative" if the conjunction “but" attracts the user¡¯s attention more. Another user may label it as “neutral" if they are concerned about quality and appearance.
The underlying reason is that sentiment is more subjective than semantics. In related research on subjective categorization, such as visual aesthetics, each sample is usually repeatedly annotated by multiple annotators, and the average label is taken as the final label of the sample. This labeling strategy can also be applied to text sentiment annotation. However, we argue that this strategy is unsuitable for a (relatively) large number of samples. The reason lies in the following two aspects.
Multiple annotators for a large number of data sets require a large budget.
In our practice, annotators claim that their judgment criteria on sentiment become fused on texts with mixed sentiment orientations (e.g., S1 and S2) over time during labeling, and they become bored accordingly.
A two-stage labeling strategy is adopted in this study. In the first stage, each sentence/paragraph is divided into several clauses according to punctuation. The sentiment of each partitioned clause is relatively easy to annotate; therefore, each clause is labeled by only one user. In the second stage, a relatively small-sized sentence/paragraph set is labeled, and each sentence is labeled by all involved annotators. We still take the two sentences, S1 and S2, as examples. S1 and S2 are split into clauses, as shown below.
S1:
S1.1: The service is poor
S1.2: The taste is good
S1.3: but the rest is not so bad.
S2:
S2.1: The quality of the phone is good
S2.2: but the appearance is just so-so.
Each of the above clauses is labeled by only one annotator. The annotation in the first stage is easy to perform; thus, the number of clauses can be larger than the number of sentences used in the second labeling stage.
Two-level LSTM
Given two training data sets (denoted by T1 and T2), a new learning model should be utilized. LSTM is a widely used deep neural network in deep learning-based text classification.
LSTM is a typical RNN model for short-term memory, which can last for a long period of time. An LSTM is applicable to classify, process, and predict time series information with given time lags of unknown size. A common LSTM block is composed of a cell, an input gate, an output gate, and a forget gate. The forward computation of an LSTM block at time INLINEFORM0 or position INLINEFORM1 is as follows BIBREF21 : DISPLAYFORM0
where INLINEFORM0 is the input vector at time INLINEFORM1 (or position INLINEFORM2 ); INLINEFORM3 and INLINEFORM4 are the input vectors of the input unit and input gate, respectively; INLINEFORM5 and INLINEFORM6 are the output and hidden vectors at time INLINEFORM7 , respectively; INLINEFORM8 is the output of the forget gate at time INLINEFORM9 ; INLINEFORM10 is the internal state of the memory cell in an LSTM block at time INLINEFORM11 ; and INLINEFORM12 is the sigmoid active function.
When LSTM is used to classify an input sentence, the hidden vectors of each input vector are summed to form a dense vector that can be considered the feature representation of the input sentence, i.e., DISPLAYFORM0
In many applications, a bi-directional LSTM (bi-LSTM) structure is usually used, as shown in Fig. 2(a). In bi-LSTM, forward and backward information are considered for information at time INLINEFORM0 ; hence, the context is modeled. Bi-LSTM is thus significantly reasonable for text processing tasks. In our two-level LSTM, bi-LSTM is used in each level.
The output hidden state at time INLINEFORM0 of a bi-LSTM block can be described as follows: DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are the corresponding vectors at time INLINEFORM3 in the forward LSTM block; and INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 are the corresponding vectors at time INLINEFORM7 in the backward LSTM block. INLINEFORM8 . When attention is used, the dense feature vector INLINEFORM9 of an input sentence is calculated as follows: DISPLAYFORM0
where INLINEFORM0 is the vector that consists of attention weights. The bi-LSTM with attention is shown in Fig. 2(b).
Our proposed network consists of two levels of LSTM network. In the first level, a bi-LSTM is used and learned on the basis of the first training set T1. This level is a conventional sentiment classification process. The input of this level is a clause, and the input INLINEFORM0 is the embedding of the basic unit of the input texts. The network is shown in Fig. 3(a).
In the second level, a bi-LSTM is also used and learned on the basis of the second training set T2. The input of this level is a sentence or a paragraph. The input INLINEFORM0 consists of two parts. The first part is the feature vector of the INLINEFORM1 -th clause. The feature vector is generated by the first-level network. In other words, the dense feature shown in Fig. 3(a) ( INLINEFORM2 ) is used. The second part is the sentiment score (not predicted label) output by the first-level network. The sentence S1 (The service is poor. The taste is good, but the rest is not so bad.) used in Subsection 3.1 is taken as an illustrative example. S1 contains three clauses. Therefore, the input vector of S1 can be represented by INLINEFORM3
where DISPLAYFORM0
where INLINEFORM0 is the output score of the INLINEFORM1 th clause by the first-level LSTM and INLINEFORM2 is the feature representation of the INLINEFORM3 th clause by the first LSTM. The network of the whole two-level network is shown in Fig. 3(b).
Lexical Embedding
The proposed lexicon embedding is based on INLINEFORM0 -hot encoding. Therefore, INLINEFORM1 -hot encoding is first described.
For categorical data, one-hot encoding is the most widely used encoding strategy when different categories are independent. For example, if one-hot encoding is used to represent three categories, namely, positive, neutral, and negative, the encoding vectors for the three categories are INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , respectively.
In this work, many lexical cues are categorical data, and different categories are independent. These lexical cues can directly be represented by one-hot encoding. The encoded vectors for lexical cues are then concatenated with other vectors, such as character/word embedding. However, one-hot encoding presents two main limitations when the encoded vector is concatenated with other vectors.
The value difference between the elements of one-hot encoded vectors and those of other encoded vectors (e.g., word embedding vectors) may be large. Fig. 4 shows the histogram of the values of the elements of the word embedding vectors. The magnitude of most elements are smaller than 1.
The lengths of one-hot encoded vectors are usually shorter than those of other encoded vectors. Consequently, the proportion of one-hot encoded part is small in the concatenated vectors.
The above two limitations affect the final sentiment analysis performance. To this end, we propose a new encoding strategy. DISPLAYFORM0
where INLINEFORM0 is the INLINEFORM1 -hot encoded vector, INLINEFORM2 is the proportion parameter, INLINEFORM3 is the one-hot encoded vector, and INLINEFORM4 is an INLINEFORM5 -dimensional vector. If INLINEFORM6 and INLINEFORM7 are equal to 1, then INLINEFORM8 -hot encoding is reduced to one-hot encoding. The parameter INLINEFORM9 is applied to increase the length of the final encoded vector.
Most lexicon-based sentiment methods rely on four types of words, namely, positive, negative, neutral, and privative. These words are useful cues for predicting the sentiment labels of input texts. The incorporation of these words should also be useful. A previous study has shown that a typical document comprises approximately 8% of such sentences BIBREF26 . Sentiments expressed in a conditional sentence can be difficult to determine due to the semantic condition. The sentiment polarities of interrogative sentences are also difficult to classify according to our empirical study.
Five types of words, namely, positive (Pos), negative (Neg), privative (Pri), suppositive (Sup), and interrogative (Int), are represented by the proposed encoding method. The rest words, which do not belong to any of the above five types, are named “others (Oth)" instead of “neutral" because some words, such as “the", are unrelated to “sentiment". The value of INLINEFORM0 in Eq. (6) is set as 10. The encoded vectors are as follows. INLINEFORM1
In the proposed INLINEFORM0 -hot embedding, the parameter INLINEFORM1 can be learned during training. The representation of the third clause (“but the rest is not so bad") of S1 in Subsection 3.1 is taken as an illustrative example. The new embedding of each word in this clause is as follows. DISPLAYFORM0
Certain types (e.g., positive, negative, and privative) of words should play more important roles than other words do in texts; therefore, their embeddings are also used in the attention layer. A new LSTM based on our lexicon embedding is proposed, as shown in Fig. 5. The attention layer and final dense vector of the network in Fig. 3(a) are calculated as follows. DISPLAYFORM0
where INLINEFORM0 is the attention weight for the INLINEFORM1 -th input, lt is the lexicon embedding for key lexical words for the INLINEFORM2 -th input, and INLINEFORM3 is the final dense vector. Eq. (2) is used in the first-level LSTM.
POS is usually used as a key cue in sentiment analysis BIBREF27 . To this end, we use additional lexicon embedding. The new lexicon embedding includes several major types of POS, namely, interrogative, exclamatory, and others. This new lexicon embedding is also applied to the attention layer. The motivation lies in that certain types of POS should play important roles in sentiment.
The proposed INLINEFORM0 -hot embedding is still applied to POS types in this study. According to our initial case studies, eight POS types are considered. They are noun, adjective, verb, pronoun, adverb, preposition, accessory, and others. The eight POS types are represented by the proposed INLINEFORM1 -hot encoding. We let INLINEFORM2 in Eq. (6) be 10. The first three POS types are as follows. INLINEFORM3
When POS embedding is used, the attention layer and final outputs of the network in Eq. (3) become DISPLAYFORM0
where INLINEFORM0 is the lexicon embedding for key lexical words for the INLINEFORM1 -th input.
Conjunction words play important roles in sentiment analysis BIBREF28 . For example, conjunctions such as “but" and “moreover" usually indicate the focus of texts and attract readers¡¯ attention. Therefore, conjunctions are considered in the input of the second-level LSTM.
Once a set of conjunction words is compiled, INLINEFORM0 -hot embedding is used. In our experiments, the number of conjunction words is 169. Therefore, the parameter INLINEFORM1 in Eq. (2) is set as 1.
When conjunction embedding is used for the second-level layer, the attention layer and final outputs of the network in Fig. 3(b) are calculated as follows. DISPLAYFORM0
where INLINEFORM0 is the attention weight for the INLINEFORM1 -th input clause; INLINEFORM2 is the hidden vector of the second-level LSTM; INLINEFORM3 and INLINEFORM4 are the conjunction embeddings for the first and last words in the INLINEFORM5 -th input clause, respectively; and INLINEFORM6 is the final dense vector used for the final classification.
Shin et al. BIBREF24 also embedded lexical information into sentiment analysis. Three major differences exist between our method and the method proposed by Shin et al. BIBREF24 .
The lexicon embedding proposed by Shin et al. us-es one-hot encoding, whereas the proposed method uses a new encoding strategy that can be considered a soft one-hot encoding.
The lexicon embedding proposed by Shin et al. ex-tends the length of raw encoded vectors. However, the extension aims to keep the lengths of lexical and word embeddings equal. Their extension method also only relies on zero padding and is thus different from the proposed method.
Only sentimental words are considered in the lexicon embedding proposed by Shin et al. On the contrary, sentimental words, POS, and conjunctions are considered in our work.
The Learning Procedure
The algorithmic steps of the entire learning procedure for the proposed INLINEFORM0 -hot lexicon embedding-based two-level LSTM (called INLINEFORM1 Tl-LSTM) are shown in Algorithm 1. In Algorithm 1, T1 refers to the training data that consist of clauses and the labels obtained in the first-stage labeling procedure. T2 refers to the training data that consist of sentences and the labels obtained in the second-stage labeling procedure. The structure of INLINEFORM2 Tl-LSTM is presented in Fig. 6.
INLINEFORM0 Tl-LSTM Input: Training sets T1 and T2; dictionary of key lexical words; POS for each word; dictionary of conjunction words; character/word embeddings for each character/word.
Output: A trained two-level LSTM for sentiment classification.
Steps:
Construct the embedding vector for each character (including punctuation) in the clauses in T1. The embeddings include the character/word and lexicon embeddings of each character/word;
Train the first-level LSTM on the basis of the input embedding vectors and labels of the T1 text clauses;
Run the learned first-level LSTM on each clause of the text samples in T2. Record the predicted score INLINEFORM0 and the final dense vector INLINEFORM1 for each clause;
Construct the embedding vector for each clause in the text samples in T2. Each embedding vector consists of INLINEFORM0 , INLINEFORM1 , and the lexicon embedding of conjunctions of each clause;
Train the second-level LSTM on the basis of the input embedding vectors and labels of the T2 text samples.
The first-level and second-level LSTM networks consist of the final two-level LSTM.
The proposed two-level LSTM can be applied to texts with arbitrary languages. Word information is required in lexical construction regardless of whether character or word embedding is used. The reason is that the three types of lexicon embeddings are performed at the word level. Therefore, when character embedding is used, the lexicon embedding of each character is the lexicon embedding of the word containing it.
This section shows the evaluation of the proposed methodology in terms of the two-level LSTM network and each part of the lexicon embedding.
We compile three Chinese text corpora from online data for three domains, namely, “hotel", “mobile phone (mobile)", and “travel". All texts are about user reviews. Each text sample collected is first partitioned into clauses according to Chinese tokens. Three clause sets are subsequently obtained from the three text corpora.
The labels “+1", “0.5", and “0" correspond to the three sentiment classes “positive", “neutral", and “negative", respectively. The text data are labeled according to our two-stage labeling strategy.
In the first stage, only one user is invited to label each clause sample as the sentiment orientations for clauses (or sub-sentences) are easy to label.
In the second stage, five users are invited to label each text sample in the three raw data sets. The average score of the five users on each sample is calculated. Samples with average scores located in [0.6, 1] are labeled as “positive". Samples with average scores located in [0, 0.4] are labeled as “negative". Others are labeled as “neutral". The details of the labeling results are shown in Table 1.
All the training and test data and the labels are available online.
In our experiments, the five types of key lexical words introduced in Subsection 3.3.2 are manually constructed. The details of the five types of words are listed in Table 2. The conjunction words are also manually constructed. The number of conjunction words used in the experiments is 169.
In each experimental run, the training set is compiled on the basis of the training data listed in Table 1. The compiling rule is specified before each experimental run. The test data are fixed to facilitate experimental duplication and comparison by other researchers.
In our experiments, three competing algorithms, namely, BOW, CNN, and (conventional) LSTM, are used.
For BOW, term frequency-inverse document frequency is utilized to construct features. Ridge regression BIBREF29 is used as a classifier. For CNN, a three-channel CNN is used. For LSTM, one-layer and two-layer bi-LSTM with attention are adopted, and the results of the network with superior performance are presented. CNN and LSTM are performed on TensorFlow, and default parameter settings are followed.
The key parameters are searched as follows. The embedding dimensions of characters and words are searched in [100, 150, 200, 250, 300]. The parameter INLINEFORM0 in INLINEFORM1 -hot encoding is searched in INLINEFORM2 .
In this subsubsection, each of the three raw data sets (associated with their labels) shown in Table 1 is used. The clause data are not used. In other words, the training data used in this subsubsection are the same as those used in previous studies. For each data corpus, 1000 raw data samples are used as the test data, and the rest are used as the training data. The involved algorithms are detailed as follows.
CNN-C denotes the CNN with (Chinese) character embedding.
CNN-W denotes the CNN with (Chinese) word embedding.
CNN-Lex-C denotes the algorithm which also integrates polar words in CNN which is proposed by Shin et al. BIBREF24 . The (Chinese) character embedding is used.
CNN-Lex-W denotes the algorithm which also integrates polar words in CNN which is proposed by Shin et al. BIBREF24 . The (Chinese) word embedding is used.
Bi-LSTM-C denotes the BI-LSTM with (Chinese) character embedding.
Bi-LSTM-W denotes the Bi-LSTM with (Chinese) word embedding.
Lex-rule denotes the rule-based approach shows in Fig. 1. This approach is unsupervised.
BOW denotes the conventional algorithm which is based of bag-of-words features.
The accuracies of the above algorithms are listed in Table 3. Overall, Bi-LSTM outperforms CNN and BOW. This conclusion is in accordance with the conclusion that RNN performs efficiently against CNN in a broad range of natural language processing (NLP) tasks on the basis of extensive comparative studies BIBREF30 . In addition, CNN-lex outperforms CNN under both character and word embeddings, which suggests that lexicon cues are useful in sentiment analysis. Lex-rule achieves the lowest accuracies on all the three data sets. Considering that the performances of (traditional) CNN, Lex-rule, and BOW are relatively poor, they are not applied in the remaining parts.
In this experimental comparison, the proposed two-level LSTM is evaluated, whereas lexicon embedding is not used anywhere in the network. The primary goal is to test whether the introduced two-stage labeling and the two-level network structure are useful for sentiment analysis.
The raw and clause data listed in Table 1 are used to train the two-level LSTM. Tl-LSTM denotes the two-level LSTM. "R+C" refers to the mixed set of raw and clause data. The test data are still the 1000 samples used in Section 4.3.1 for each corpus. Table 4 shows the classification accuracies. To distinguish these results from those in Table 3, we explicitly add "R+C" after each algorithm in Table 4. In the last line of Table 4, the base results for each corpus from Table 3 are also listed.
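A tiny sketch of how such an "R+C" training set can be pooled is shown below, assuming both labeled sets are lists of (text, label) pairs; the example samples are illustrative only:

```python
import random

# Raw samples carry the second-stage labels; clause samples carry the first-stage labels.
raw_train = [("The service is poor. The taste is good, but the rest is not so bad.", "neutral")]
clause_train = [("The service is poor", "negative"), ("The taste is good", "positive")]

mixed_train = raw_train + clause_train   # the "R+C" training set
random.shuffle(mixed_train)              # the test set remains raw samples only
```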
On all three data corpora, the proposed two-level network (without lexicon embedding) with character embedding, Tl-LSTM-C, outperforms all the other involved algorithms. On the travel and mobile corpora, Tl-LSTM-W outperforms Bi-LSTM-W. The results in Table 4 indicate that the performances of Tl-LSTM on the mixed training and test data (R+C) are better than those of Bi-LSTM. This comparison indicates that the proposed two-level LSTM is effective.
In addition, for the involved algorithms, most results achieved on "R+C" are better than the best results achieved on "R" alone listed in Table 3. This comparison suggests that the introduced two-stage labeling is useful.
The results also show that in the two-level LSTM, character embedding is more effective than word embedding.
In this experimental run, lexicon embedding is used in the proposed two-level LSTM, i.e., INLINEFORM0 Tl-LSTM. Table 5 shows the results. The optimal parameter INLINEFORM1 is about 11.
The performances of Tl-LSTM with lexicon embedding (i.e., INLINEFORM0 Tl-LSTM) are consistently better than those of Tl-LSTM without lexicon embedding (i.e., Tl-LSTM) listed in Table 5. The improved accuracies of INLINEFORM1 Tl-LSTM over Tl-LSTM on the three data corpora are explicitly listed in Table 6.
The experimental evaluation discussed in Subsection 4.3 verifies the effectiveness of the proposed method, INLINEFORM0 Tl-LSTM. Unlike the conventional RNN, INLINEFORM1 Tl-LSTM contains lexicon embedding that consists of new techniques and components, including INLINEFORM2 -hot encoding, embedding for polar words, embedding for POS, and embedding for conjunctions. Therefore, this subsection evaluates the performances of the involved techniques and embeddings separately.
Our INLINEFORM0 -hot encoding differs from one-hot encoding in two aspects. The first aspect is that the nonzero values in one-hot encoding are only equal to 1, whereas the nonzero values in INLINEFORM1 -hot encoding are INLINEFORM2 . The second aspect is that only one element in one-hot encoding is nonzero, whereas n elements in INLINEFORM3 -hot encoding are nonzero.
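A small numpy sketch of the encoding as described here (a one-hot vector scaled by a learnable value and duplicated n times) is given below; the symbols gamma and n, the category count, and the example values are stand-ins for the placeholders in the text:

```python
import numpy as np

def n_hot(category_index, num_categories, gamma=0.5, n=10):
    """One-hot vector scaled by gamma and tiled n times, following the description above."""
    one_hot = np.zeros(num_categories)
    one_hot[category_index] = 1.0
    return np.tile(gamma * one_hot, n)

# Six lexical categories (Pos, Neg, Pri, Sup, Int, Oth); encode "Neg" (index 1).
vec = n_hot(category_index=1, num_categories=6, gamma=0.5, n=10)
print(vec.shape)  # (60,) -- longer, and with softer nonzero values, than a plain one-hot vector
```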
In this experiment, we test whether INLINEFORM0 -hot encoding is useful via two experimental runs. In the first run, the value of INLINEFORM1 is manually set to 0.5 and to 1, without optimization. The parameter INLINEFORM2 in Eq. (6) is set as 15. The classification accuracies vary with the INLINEFORM3 value on all three data corpora. When INLINEFORM4 equals 1, the accuracies are the lowest in most cases, as shown in Fig. 7.
The results shown in Fig. 7 indicate that the value of INLINEFORM0 does affect the performance of the entire network. Consequently, the classical one-hot encoding, which fixes the value of nonzero elements as 1, is ineffective. In our experiments, the learned value of INLINEFORM1 is approximately 0.4.
In the second run, the performances under different INLINEFORM0 (i.e., 1, 5, 10, 15) are tested. Table 7 shows the comparison results. The value of INLINEFORM1 does affect the performance of the entire network, thereby indicating that the introduced INLINEFORM2 -duplicated encoding strategy is effective. In the experiments, as INLINEFORM3 increases, the accuracies first increase and then decrease. The main reason may be that when INLINEFORM4 becomes large, the proportion of lexicon embedding in the input becomes large accordingly. An over-long input feature vector may incur the "curse of dimensionality" and thus weaken the performance of the proposed two-level network.
In this experimental run, we test whether the labeled polar (negative and positive) words affect the performance of the entire method when they are used in lexicon embedding. To this end, we order the polar words according to their frequencies in the training data. The top 0%, 50%, and 100% of the polar words are used. The corresponding classification accuracies are depicted in Fig. 8.
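A short sketch of this frequency-based selection follows, assuming the polar-word dictionary is a set of strings and the training data has been tokenized; names and the toy inputs are illustrative:

```python
from collections import Counter

def top_fraction(polar_words, training_tokens, fraction=0.5):
    """Keep the top `fraction` of the polar-word dictionary, ranked by training-data frequency."""
    counts = Counter(training_tokens)
    ranked = sorted(polar_words, key=lambda w: counts[w], reverse=True)
    keep = round(len(ranked) * fraction)
    return set(ranked[:keep])

print(top_fraction({"good", "bad", "poor"}, ["good", "good", "bad", "so", "poor"], 0.5))
```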
In most cases, the accuracies are the lowest when no polar words are used in the lexicon embedding. When all polar words are used, the proposed network achieves the highest accuracies.
In the experiment, only one user is invited to manually compile the polar-word dictionary for a data corpus. One and a half hours are needed for each data corpus. In our view, manually compiling the polar words for sentiment analysis is worthwhile considering the performance improvement relative to the time consumed.
In this experimental run, we test whether POS cues do play positive roles in the entire model. To this end, we remove POS in the lexicon embedding of the proposed method. The results are shown in Fig. 9.
In most cases, the accuracies with POS embedding are greater than those without POS embedding, thereby indicating that the application of POS to lexicon embedding is useful.
In this experimental run, we test whether conjunction cues do play positive roles in the entire model. To this end, the lexicon embedding for conjunction words is also removed from the proposed method. The results are shown in Fig. 10.
The algorithm with conjunction embedding outperforms that without conjunction embedding consistently, thereby indicating that the application of conjunction to lexicon embedding is useful.
High-quality labels are crucial for learning systems. Nevertheless, texts with mixed sentiments are difficult for humans to label in text sentiment classification. In this study, a new labeling strategy is introduced to partition texts into those with pure and mixed sentiment orientations. These two categories of texts are labeled using different processes. A two-level network is accordingly proposed to utilize the two labeled data in our two-stage labeling strategy. Lexical cues (e.g., polar words, POS, conjunction words) are particularly useful in sentiment analysis. These lexical cues are used in our two-level network, and a new encoding strategy, that is, INLINEFORM0 -hot encoding, is introduced. INLINEFORM1 -hot encoding is motivated by one-hot encoding. However, the former alleviates the drawbacks of the latter. Three Chinese sentiment text data corpora are compiled to verify the effectiveness of the proposed methodology. Our proposed method achieves the highest accuracies on these three data corpora.
The proposed two-level network and lexicon embedding can also be applied to other types of deep neural networks. In our future work, we will extend our main idea into several networks and text mining applications.
The authors wish to thank Zefeng Han, Qing Yin, Lei Yang, Xiaonan Wang, Nan Chen, Rujing Yao, Lihong Guo, Pinglong Zhao for the labeling of the experimental data. | Unanswerable |
2d307b43746be9cedf897adac06d524419b0720b | 2d307b43746be9cedf897adac06d524419b0720b_0 | Q: How long are the datasets?
Text: Introduction
Text is important in many artificial intelligence applications. Among various text mining techniques, sentiment analysis is a key component in applications such as public opinion monitoring and comparative analysis. Sentiment analysis can be divided into three problems according to input texts, namely, sentence, paragraph, and document levels. This study focuses on sentence and paragraph levels.
Text sentiment analysis is usually considered a text classification problem. Almost all existing text classification techniques are applied to text sentiment analysis BIBREF0 . Typical techniques include bag-of-words (BOW)-based BIBREF1 , deep learning-based BIBREF2 , and lexicon-based (or rule-based) methods BIBREF3 .
Although many achievements have been made and sentiment analysis has been successfully used in various commercial applications, its accuracy can be further improved. The construction of a high-accuracy sentiment classification model usually entails the challenging compilation of training sets with numerous samples and sufficiently accurate labels. The reason behind this difficulty is two-fold. First, sentiment is somewhat subjective, and a sample may receive different labels from different users. Second, some texts contain complex sentiment representations, and a single label is difficult to provide. We conduct a statistical analysis of public Chinese sentiment text sets in GitHub. The results show that the average label error is larger than 10%. This error value reflects the degree of difficulty of sentiment labeling.
Privative and interrogative sentences are difficult to classify when deep learning-based methods are applied. Although lexicon-based methods can deal with particular types of privative sentences, their generalization capability is poor.
We address the above issues with a new methodology. First, we introduce a two-stage labeling strategy for sentiment texts. In the first stage, annotators are invited to label a large number of short texts with relatively pure sentiment orientations. Each sample is labeled by only one annotator. In the second stage, a relatively small number of text samples with mixed sentiment orientations are annotated, and each sample is labeled by multiple annotators. Second, we propose a two-level long short-term memory (LSTM) BIBREF4 network to achieve two-level feature representation and classify the sentiment orientations of a text sample to utilize two labeled data sets. Lastly, in the proposed two-level LSTM network, lexicon embedding is leveraged to incorporate linguistic features used in lexicon-based methods.
Three Chinese sentiment data sets are compiled to investigate the performance of the proposed methodology. The experimental results demonstrate the effectiveness of the proposed methods. Our work is new in the following aspects.
The rest of this paper is organized as follows. Section 2 briefly reviews related work. Section 3 describes our methodology. Section 4 reports the experimental results, and Section 5 concludes the study.
Text Sentiment Analysis
Sentiment analysis aims to predict the sentiment polarity of an input text sample. Sentiment polarity can be divided into negative, neutral, and positive in many applications.
Existing sentiment classification methods can be roughly divided into two categories, namely, lexicon-based and machine learning-based methods BIBREF5 . Lexicon-based methods BIBREF6 construct polar and privative word dictionaries. A set of rules for polar and privative words is compiled to judge the sentiment orientation of a text document. This method cannot effectively predict implicit orientations. Machine learning-based methods BIBREF7 utilize a standard binary or multi-category classification approach. Different feature extraction algorithms, including BOW BIBREF8 and part of speech (POS) BIBREF7 , are used. Word embedding and deep neural networks have recently been applied to sentiment analysis, and promising results have been obtained BIBREF9 BIBREF10 .
Lexion-based Sentiment Classification
Lexicon-based methods are actually in implemented in an unsupervised manner. They infer the sentiment categories of input texts on the basis of polar and privative words. The primary advantage of these methods is that they do not require labeled training data. The key of lexicon-based methods is the lexical resource construction, which maps words into a category (positive, negative, neutral, or privative). Senti-WordNet BIBREF11 is a lexical resource for English text sentiment classification. For Chinese texts, Senti-HowNet is usually used.
Fig. 1 characterizes a typical lexicon-based sentiment classification approach. The approach iteratively checks each word in an input sentence from left to right. The weight score of each word is calculated according to the procedure shown in Fig. 1. The final sentiment score is the average score of the words with weight scores. The scores of positive, neutral, and negative sentiments are denoted as “+1",“0", and “-1", respectively. According to the lexicon-based algorithm shown in Fig. 1, the sentiment score of “it is not bad" is 0.25, and the sentiment score of “it is good" is 1. However, the score of “it is not so bad" is -0.75, and this score is definitely wrong. Therefore, machine learning (including feature learning) methodologies have become mainstream in sentiment analysis.
Deep Learning-based Sentiment Classification
Deep learning (including word embedding BIBREF12 ) has been applied to almost all text-related applications, such as translation BIBREF13 , quality assurance BIBREF14 , recommendation BIBREF15 , and categorization BIBREF16 . Popular deep neural networks are divided into convolutional neural networks (CNNs) BIBREF17 and recurrent neural network (RNNs) BIBREF18 BIBREF19 . Both are utilized in sentiment classification BIBREF20 . Kim investigated the use of CNN in sentence sentiment classification and achieved promising results BIBREF2 . LSTM BIBREF21 , a classical type of RNN, is the most popular network used for sentiment classification. A binary-directional LSTM BIBREF22 with an attention mechanism is demonstrated to be effective in sentiment analysis.
Deep learning-based methods rarely utilize the useful resources adopted in lexicon-based methods. Qiao et al. BIBREF23 incorporated lexicon-based cues into the training of an LSTM-based model. Their proposed method relies on a new loss function that considers the relationships between polar or certain types of words (e.g., privative) and those words next to them in input texts. Our study also combines lexical cues into LSTM. Nevertheless, unlike Qiao et al.'s study that implicitly used lexical cues, the present work explicitly uses lexical cues in the LSTM network. Shin et al. BIBREF24 combined the lexicon embeddings of polar words with word embeddings for sentiment classification. The difference between our approach an the method proposed by Shin et al. the is discussed in Section 3.3.5.
Numerous studies on aspect-level sentiment analysis exist BIBREF25 . This problem is different from the sentiment classification investigated in this study.
METHODOLOGY
This section first introduces our two-stage labeling procedure. A two-level LSTM is then proposed. Lexicon embedding is finally leveraged to incorporate lexical cues.
Two-stage Labeling
As stated earlier, sentiment is subjective, and texts usually contain mixed sentiment orientations. Therefore, texts¡¯ sentiment orientations are difficult to label. In our study, three sentiment labels, namely, positive, neutral, and negative, are used. The following sentences are taken as examples.
The service is poor. The taste is good, but the rest is not so bad.
The quality of the phone is good, but the appearance is just so-so.
In user annotation, the labels of these two sentences depend on users. If a user is concerned about service, then the label of S1 may be “negative". By contrast, for another user who does not care about service, the label may be “positive". Similarly, a user may label S2 as “positive" if he cares about quality. Another user may label it as “negative" if the conjunction “but" attracts the user¡¯s attention more. Another user may label it as “neutral" if they are concerned about quality and appearance.
The underlying reason is that sentiment is more subjective than semantics. In related research on subjective categorization, such as visual aesthetics, each sample is usually repeatedly annotated by multiple annotators, and the average label is taken as the final label of the sample. This labeling strategy can also be applied to text sentiment annotation. However, we argue that this strategy is unsuitable for a (relatively) large number of samples. The reason lies in the following two aspects.
Multiple annotators for a large number of data sets require a large budget.
In our practice, annotators claim that their judgment criteria on sentiment become fused on texts with mixed sentiment orientations (e.g., S1 and S2) over time during labeling, and they become bored accordingly.
A two-stage labeling strategy is adopted in this study. In the first stage, each sentence/paragraph is divided into several clauses according to punctuation. The sentiment of each partitioned clause is relatively easy to annotate; therefore, each clause is labeled by only one user. In the second stage, a relatively small-sized sentence/paragraph set is labeled, and each sentence is labeled by all involved annotators. We still take the two sentences, S1 and S2, as examples. S1 and S2 are split into clauses, as shown below.
S1:
S1.1: The service is poor
S1.2: The taste is good
S1.3: but the rest is not so bad.
S2:
S2.1: The quality of the phone is good
S2.2: but the appearance is just so-so.
Each of the above clauses is labeled by only one annotator. The annotation in the first stage is easy to perform; thus, the number of clauses can be larger than the number of sentences used in the second labeling stage.
Two-level LSTM
Given two training data sets (denoted by T1 and T2), a new learning model should be utilized. LSTM is a widely used deep neural network in deep learning-based text classification.
LSTM is a typical RNN model for short-term memory, which can last for a long period of time. An LSTM is applicable to classify, process, and predict time series information with given time lags of unknown size. A common LSTM block is composed of a cell, an input gate, an output gate, and a forget gate. The forward computation of an LSTM block at time INLINEFORM0 or position INLINEFORM1 is as follows BIBREF21 : DISPLAYFORM0
where INLINEFORM0 is the input vector at time INLINEFORM1 (or position INLINEFORM2 ); INLINEFORM3 and INLINEFORM4 are the input vectors of the input unit and input gate, respectively; INLINEFORM5 and INLINEFORM6 are the output and hidden vectors at time INLINEFORM7 , respectively; INLINEFORM8 is the output of the forget gate at time INLINEFORM9 ; INLINEFORM10 is the internal state of the memory cell in an LSTM block at time INLINEFORM11 ; and INLINEFORM12 is the sigmoid active function.
When LSTM is used to classify an input sentence, the hidden vectors of each input vector are summed to form a dense vector that can be considered the feature representation of the input sentence, i.e., DISPLAYFORM0
In many applications, a bi-directional LSTM (bi-LSTM) structure is usually used, as shown in Fig. 2(a). In bi-LSTM, forward and backward information are considered for information at time INLINEFORM0 ; hence, the context is modeled. Bi-LSTM is thus significantly reasonable for text processing tasks. In our two-level LSTM, bi-LSTM is used in each level.
The output hidden state at time INLINEFORM0 of a bi-LSTM block can be described as follows: DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are the corresponding vectors at time INLINEFORM3 in the forward LSTM block; and INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 are the corresponding vectors at time INLINEFORM7 in the backward LSTM block. INLINEFORM8 . When attention is used, the dense feature vector INLINEFORM9 of an input sentence is calculated as follows: DISPLAYFORM0
where INLINEFORM0 is the vector that consists of attention weights. The bi-LSTM with attention is shown in Fig. 2(b).
Our proposed network consists of two levels of LSTM network. In the first level, a bi-LSTM is used and learned on the basis of the first training set T1. This level is a conventional sentiment classification process. The input of this level is a clause, and the input INLINEFORM0 is the embedding of the basic unit of the input texts. The network is shown in Fig. 3(a).
In the second level, a bi-LSTM is also used and learned on the basis of the second training set T2. The input of this level is a sentence or a paragraph. The input INLINEFORM0 consists of two parts. The first part is the feature vector of the INLINEFORM1 -th clause. The feature vector is generated by the first-level network. In other words, the dense feature shown in Fig. 3(a) ( INLINEFORM2 ) is used. The second part is the sentiment score (not predicted label) output by the first-level network. The sentence S1 (The service is poor. The taste is good, but the rest is not so bad.) used in Subsection 3.1 is taken as an illustrative example. S1 contains three clauses. Therefore, the input vector of S1 can be represented by INLINEFORM3
where DISPLAYFORM0
where INLINEFORM0 is the output score of the INLINEFORM1 th clause by the first-level LSTM and INLINEFORM2 is the feature representation of the INLINEFORM3 th clause by the first LSTM. The network of the whole two-level network is shown in Fig. 3(b).
Lexical Embedding
The proposed lexicon embedding is based on INLINEFORM0 -hot encoding. Therefore, INLINEFORM1 -hot encoding is first described.
For categorical data, one-hot encoding is the most widely used encoding strategy when different categories are independent. For example, if one-hot encoding is used to represent three categories, namely, positive, neutral, and negative, the encoding vectors for the three categories are INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , respectively.
In this work, many lexical cues are categorical data, and different categories are independent. These lexical cues can directly be represented by one-hot encoding. The encoded vectors for lexical cues are then concatenated with other vectors, such as character/word embedding. However, one-hot encoding presents two main limitations when the encoded vector is concatenated with other vectors.
The value difference between the elements of one-hot encoded vectors and those of other encoded vectors (e.g., word embedding vectors) may be large. Fig. 4 shows the histogram of the values of the elements of the word embedding vectors. The magnitude of most elements are smaller than 1.
The lengths of one-hot encoded vectors are usually shorter than those of other encoded vectors. Consequently, the proportion of one-hot encoded part is small in the concatenated vectors.
The above two limitations affect the final sentiment analysis performance. To this end, we propose a new encoding strategy. DISPLAYFORM0
where INLINEFORM0 is the INLINEFORM1 -hot encoded vector, INLINEFORM2 is the proportion parameter, INLINEFORM3 is the one-hot encoded vector, and INLINEFORM4 is an INLINEFORM5 -dimensional vector. If INLINEFORM6 and INLINEFORM7 are equal to 1, then INLINEFORM8 -hot encoding is reduced to one-hot encoding. The parameter INLINEFORM9 is applied to increase the length of the final encoded vector.
Most lexicon-based sentiment methods rely on four types of words, namely, positive, negative, neutral, and privative. These words are useful cues for predicting the sentiment labels of input texts. The incorporation of these words should also be useful. A previous study has shown that a typical document comprises approximately 8% of such sentences BIBREF26 . Sentiments expressed in a conditional sentence can be difficult to determine due to the semantic condition. The sentiment polarities of interrogative sentences are also difficult to classify according to our empirical study.
Five types of words, namely, positive (Pos), negative (Neg), privative (Pri), suppositive (Sup), and interrogative (Int), are represented by the proposed encoding method. The rest words, which do not belong to any of the above five types, are named “others (Oth)" instead of “neutral" because some words, such as “the", are unrelated to “sentiment". The value of INLINEFORM0 in Eq. (6) is set as 10. The encoded vectors are as follows. INLINEFORM1
In the proposed INLINEFORM0 -hot embedding, the parameter INLINEFORM1 can be learned during training. The representation of the third clause (“but the rest is not so bad") of S1 in Subsection 3.1 is taken as an illustrative example. The new embedding of each word in this clause is as follows. DISPLAYFORM0
Certain types (e.g., positive, negative, and privative) of words should play more important roles than other words do in texts; therefore, their embeddings are also used in the attention layer. A new LSTM based on our lexicon embedding is proposed, as shown in Fig. 5. The attention layer and final dense vector of the network in Fig. 3(a) are calculated as follows. DISPLAYFORM0
where INLINEFORM0 is the attention weight for the INLINEFORM1 -th input, lt is the lexicon embedding for key lexical words for the INLINEFORM2 -th input, and INLINEFORM3 is the final dense vector. Eq. (2) is used in the first-level LSTM.
POS is usually used as a key cue in sentiment analysis BIBREF27 . To this end, we use additional lexicon embedding. The new lexicon embedding includes several major types of POS, namely, interrogative, exclamatory, and others. This new lexicon embedding is also applied to the attention layer. The motivation lies in that certain types of POS should play important roles in sentiment.
The proposed INLINEFORM0 -hot embedding is still applied to POS types in this study. According to our initial case studies, eight POS types are considered. They are noun, adjective, verb, pronoun, adverb, preposition, accessory, and others. The eight POS types are represented by the proposed INLINEFORM1 -hot encoding. We let INLINEFORM2 in Eq. (6) be 10. The first three POS types are as follows. INLINEFORM3
When POS embedding is used, the attention layer and final outputs of the network in Eq. (3) become DISPLAYFORM0
where INLINEFORM0 is the lexicon embedding for key lexical words for the INLINEFORM1 -th input.
Conjunction words play important roles in sentiment analysis BIBREF28 . For example, conjunctions such as "but" and "moreover" usually indicate the focus of texts and attract readers' attention. Therefore, conjunctions are considered in the input of the second-level LSTM.
Once a set of conjunction words is compiled, INLINEFORM0 -hot embedding is used. In our experiments, the number of conjunction words is 169. Therefore, the parameter INLINEFORM1 in Eq. (2) is set as 1.
When conjunction embedding is used for the second-level layer, the attention layer and final outputs of the network in Fig. 3(b) are calculated as follows. DISPLAYFORM0
where INLINEFORM0 is the attention weight for the INLINEFORM1 -th input clause; INLINEFORM2 is the hidden vector of the second-level LSTM; INLINEFORM3 and INLINEFORM4 are the conjunction embeddings for the first and last words in the INLINEFORM5 -th input clause, respectively; and INLINEFORM6 is the final dense vector used for the final classification.
Shin et al. BIBREF24 also embedded lexical information into sentiment analysis. Three major differences exist between our method and the method proposed by Shin et al. BIBREF24 .
The lexicon embedding proposed by Shin et al. uses one-hot encoding, whereas the proposed method uses a new encoding strategy that can be considered a soft one-hot encoding.
The lexicon embedding proposed by Shin et al. extends the length of raw encoded vectors. However, the extension aims to keep the lengths of lexical and word embeddings equal. Their extension method also relies only on zero padding and is thus different from the proposed method.
Only sentiment words are considered in the lexicon embedding proposed by Shin et al. In contrast, sentiment words, POS, and conjunctions are all considered in our work.
The Learning Procedure
The algorithmic steps of the entire learning procedure for the proposed INLINEFORM0 -hot lexicon embedding-based two-level LSTM (called INLINEFORM1 Tl-LSTM) are shown in Algorithm 1. In Algorithm 1, T1 refers to the training data that consist of clauses and the labels obtained in the first-stage labeling procedure. T2 refers to the training data that consist of sentences and the labels obtained in the second-stage labeling procedure. The structure of INLINEFORM2 Tl-LSTM is presented in Fig. 6.
INLINEFORM0 Tl-LSTM Input: Training sets T1 and T2; dictionary of key lexical words; POS for each word; dictionary of conjunction words; character/word embeddings for each character/word.
Output: A trained two-level LSTM for sentiment classification.
Steps:
Construct the embedding vector for each character (including punctuation) in the clauses in T1. The embeddings include the character/word and lexicon embeddings of each character/word;
Train the first-level LSTM on the basis of the input embedding vectors and labels of the T1 text clauses;
Run the learned first-level LSTM on each clause of the text samples in T2. Record the predicted score INLINEFORM0 and the final dense vector INLINEFORM1 for each clause;
Construct the embedding vector for each clause in the text samples in T2. Each embedding vector consists of INLINEFORM0 , INLINEFORM1 , and the lexicon embedding of conjunctions of each clause;
Train the second-level LSTM on the basis of the input embedding vectors and labels of the T2 text samples.
The first-level and second-level LSTM networks together constitute the final two-level LSTM.
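To make the flow of Algorithm 1 concrete, a high-level sketch follows; every helper function here is a hypothetical stand-in for the embedding construction and LSTM training that the steps above describe, and the (score, dense vector) interface of the first level is assumed:

```python
def train_two_level_lstm(T1, T2, build_clause_features, build_sample_features,
                         train_first_level, train_second_level):
    """Orchestrate the two training stages; all helpers are stand-ins for the steps in Algorithm 1."""
    # Steps 1-2: embed the clauses in T1 and train the first-level bi-LSTM on them.
    clause_inputs = [build_clause_features(clause) for clause, _ in T1]
    first_level = train_first_level(clause_inputs, [label for _, label in T1])

    # Steps 3-4: run the first level on each clause of every T2 sample and build second-level inputs.
    sample_inputs = []
    for clauses, _ in T2:
        per_clause = [first_level(build_clause_features(c)) for c in clauses]  # (score, dense vector) pairs
        sample_inputs.append(build_sample_features(per_clause, clauses))

    # Step 5: train the second-level bi-LSTM on the sentence/paragraph labels.
    second_level = train_second_level(sample_inputs, [label for _, label in T2])
    return first_level, second_level
```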
The proposed two-level LSTM can be applied to texts with arbitrary languages. Word information is required in lexical construction regardless of whether character or word embedding is used. The reason is that the three types of lexicon embeddings are performed at the word level. Therefore, when character embedding is used, the lexicon embedding of each character is the lexicon embedding of the word containing it.
This section shows the evaluation of the proposed methodology in terms of the two-level LSTM network and each part of the lexicon embedding.
We compile three Chinese text corpora from online data for three domains, namely, “hotel", “mobile phone (mobile)", and “travel". All texts are about user reviews. Each text sample collected is first partitioned into clauses according to Chinese tokens. Three clause sets are subsequently obtained from the three text corpora.
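A small sketch of this clause partitioning is given below, assuming punctuation-based splitting consistent with the labeling procedure in Section 3.1; the delimiter set and the example review are illustrative only:

```python
import re

# Split a Chinese review into clauses at common punctuation tokens (a simplified rule).
CLAUSE_DELIMS = r"[，。！？；,.!?;]"

def split_into_clauses(text):
    return [c.strip() for c in re.split(CLAUSE_DELIMS, text) if c.strip()]

print(split_into_clauses("服务很差，味道不错，但其他还不算太糟。"))
```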
The labels “+1", “0.5", and “0" correspond to the three sentiment classes “positive", “neutral", and “negative", respectively. The text data are labeled according to our two-stage labeling strategy.
In the first stage, only one user is invited to label each clause sample as the sentiment orientations for clauses (or sub-sentences) are easy to label.
In the second stage, five users are invited to label each text sample in the three raw data sets. The average score of the five users on each sample is calculated. Samples with average scores located in [0.6, 1] are labeled as “positive". Samples with average scores located in [0, 0.4] are labeled as “negative". Others are labeled as “neutral". The details of the labeling results are shown in Table 1.
All the training and test data and the labels are available online.
In our experiments, the five types of key lexical words introduced in Subsection 3.3.2 are manually constructed. The details of the five types of words are listed in Table 2. The conjunction words are also manually constructed. The number of conjunction words used in the experiments is 169.
In each experimental run, the training set is compiled on the basis of the training data listed in Table 1. The compiling rule is specified before each experimental run. The test data are fixed to facilitate experimental duplication and comparison by other researchers.
In our experiments, three competing algorithms, namely, BOW, CNN, and (conventional) LSTM, are used.
For BOW, term frequency-inverse document frequency (TF-IDF) is utilized to construct features, and ridge regression BIBREF29 is used as the classifier. For CNN, a three-channel CNN is used. For LSTM, one-layer and two-layer bi-LSTMs with attention are adopted, and the results of the better-performing network are presented. CNN and LSTM are implemented in TensorFlow with default parameter settings.
The key parameters are searched as follows. The embedding dimensions of characters and words are searched in [100, 150, 200, 250, 300]. The parameter INLINEFORM0 in INLINEFORM1 -hot encoding is searched in INLINEFORM2 .
In this subsubsection, each of the three raw data sets (associated with their labels) shown in Table 1 is used. The clause data are not used. In other words, the training data used in this subsubsection are the same as those used in previous studies. For each data corpus, 1000 raw data samples are used as the test data, and the rest are used as the training data. The involved algorithms are detailed as follows.
CNN-C denotes the CNN with (Chinese) character embedding.
CNN-W denotes the CNN with (Chinese) word embedding.
CNN-Lex-C denotes the algorithm proposed by Shin et al. BIBREF24 , which also integrates polar words into CNN; (Chinese) character embedding is used.
CNN-Lex-W denotes the algorithm proposed by Shin et al. BIBREF24 , which also integrates polar words into CNN; (Chinese) word embedding is used.
Bi-LSTM-C denotes the BI-LSTM with (Chinese) character embedding.
Bi-LSTM-W denotes the Bi-LSTM with (Chinese) word embedding.
Lex-rule denotes the rule-based approach shown in Fig. 1. This approach is unsupervised.
BOW denotes the conventional algorithm which is based on bag-of-words features.
The accuracies of the above algorithms are listed in Table 3. Overall, Bi-LSTM outperforms CNN and BOW. This observation is in accordance with the finding, drawn from extensive comparative studies, that RNNs perform robustly against CNNs in a broad range of natural language processing (NLP) tasks BIBREF30 . In addition, CNN-Lex outperforms CNN under both character and word embeddings, which suggests that lexicon cues are useful in sentiment analysis. Lex-rule achieves the lowest accuracies on all three data sets. Considering that the performances of (traditional) CNN, Lex-rule, and BOW are relatively poor, these methods are not used in the remaining experiments.
In this experimental comparison, the proposed two-level LSTM is evaluated, whereas lexicon embedding is not used in the entire network. The primary goal is to test whether the introduced two-stage labeling and the two-level network structure are useful for sentiment analysis.
The raw and clause data listed in Table 1 are used to train the two-level LSTM. Tl-LSTM denotes the two-level LSTM. "R+C" refers to the mixed set of raw and clause data. The test data are still the 1000 samples used in Section 4.3.1 for each corpus. Table 4 shows the classification accuracies. To distinguish these results from those in Table 3, we explicitly add "R+C" after each algorithm in Table 4. In the last line of Table 4, the base results for each corpus from Table 3 are also listed.
On all the three data corpora, the proposed two-level network (without lexicon embedding) with character embedding, Tl-LSTM-C, outperforms all the other involved algorithms. On the travel and the mobile corpora, TI-LSTM-W outperforms Bi-LSTM-W. The results in Table 4 indicate that the performances of Tl-LSTM on the mixed training and test data (R+C) are better than those of Bi-LSTM. This comparison indicates that the proposed two-level LSTM is effective.
In addition, for the involved algorithms, most results achieved on "R+C" are better than the best results achieved on "R" alone listed in Table 3. This comparison suggests that the introduced two-stage labeling is useful.
The results also show that in the two-level LSTM, character embedding is more effective than word embedding.
In this experimental run, lexicon embedding is used in the proposed two-level LSTM or INLINEFORM0 Tl-LSTM. Table 5 shows the results. The optimal parameter INLINEFORM1 is about 11.
The performances of TI-LSTM with lexicon embedding (i.e., INLINEFORM0 Tl-LSTM) are consistently better than those of TI-LSTM without lexicon embedding (i.e., Tl-LSTM) listed in Table 5. The improved accuracies of INLINEFORM1 TI-LSTM over Tl-LSTM on the three data corpora are explicitly listed in Table 6.
The experimental evaluation discussed in Subsection 4.3 verifies the effectiveness of the proposed method, INLINEFORM0 Tl-LSTM. Unlike the conventional RNN, INLINEFORM1 Tl-LSTM contains lexicon embedding that consists of new techniques and components, including INLINEFORM2 -hot encoding, embedding for polar words, embedding for POS, and embedding for conjunctions. Therefore, this subsection evaluates the performances of the involved techniques and embeddings separately.
Our INLINEFORM0 -hot encoding differs from one-hot encoding in two aspects. The first aspect is that the nonzero values in one-hot encoding are only equal to 1, whereas the nonzero values in INLINEFORM1 -hot encoding are INLINEFORM2 . The second aspect is that only one element in one-hot encoding is nonzero, whereas n elements in INLINEFORM3 -hot encoding are nonzero.
In this experiment, we test whether INLINEFORM0 -hot encoding is useful in two experimental runs. In the first run, the value of INLINEFORM1 is manually set to 0.5 and 1 in the experimental run without optimization. The parameter INLINEFORM2 in Eq. (6) is set as 15. The classification accuracies vary according to different INLINEFORM3 values on all the three data corpora. When INLINEFORM4 equals 1, the accuracies are the lowest in most cases shown in Fig. 7.
The results shown in Fig. 7 indicate that the value of INLINEFORM0 does affect the performance of the entire network. Consequently, the classical one-hot encoding, which fixes the value of nonzero elements as 1, is ineffective. In our experiments, the learned value of INLINEFORM1 is approximately 0.4.
In the second run, the performances under different INLINEFORM0 (i.e., 1, 5, 10, 15) are tested. Table 7 shows the comparison results. The value of INLINEFORM1 does affect the performance of the entire network, thereby indicating that the introduction of the INLINEFORM2 -duplicated strategy in encoding is effective. In the experiments, when INLINEFORM3 is increasing, the accuracies first increase and then decrease. The main reason may lie in the fact that when INLINEFORM4 becomes large, the proportion of lexicon embedding becomes large accordingly. An over-length input feature vector may incur “curse of dimensionality" and thus weaken the performance of the proposed two-level network.
In this experimental run, we test whether the labeled polar (negative and positive) words do affect the performance of the entire method when they are used in lexicon embedding. To this end, we order the polar words according to their frequencies in the training data. Top 0%, 50%, 100% polar words are used. The corresponding classification accuracies are depicted in Fig. 8.
In most cases, the accuracies are the lowest when no polar words are used in the lexicon embedding. When all polar words are used, the proposed network achieves the highest accuracies.
In the experiment, only one user is invited to manually compile the polar-word dictionary for a data corpus. One and a half hours are needed for each data corpus. In our view, manually compiling the polar words for sentiment analysis is worthwhile considering the performance improvement relative to the time consumed.
In this experimental run, we test whether POS cues do play positive roles in the entire model. To this end, we remove POS in the lexicon embedding of the proposed method. The results are shown in Fig. 9.
In most cases, the accuracies with POS embedding are greater than those without POS embedding, thereby indicating that the application of POS to lexicon embedding is useful.
In this experimental run, we test whether conjunction cues do play positive roles in the entire model. To this end, the lexicon embedding for conjunction words is also removed from the proposed method. The results are shown in Fig. 10.
The algorithm with conjunction embedding outperforms that without conjunction embedding consistently, thereby indicating that the application of conjunction to lexicon embedding is useful.
High-quality labels are crucial for learning systems. Nevertheless, texts with mixed sentiments are difficult for humans to label in text sentiment classification. In this study, a new labeling strategy is introduced to partition texts into those with pure and mixed sentiment orientations. These two categories of texts are labeled using different processes. A two-level network is accordingly proposed to utilize the two labeled data in our two-stage labeling strategy. Lexical cues (e.g., polar words, POS, conjunction words) are particularly useful in sentiment analysis. These lexical cues are used in our two-level network, and a new encoding strategy, that is, INLINEFORM0 -hot encoding, is introduced. INLINEFORM1 -hot encoding is motivated by one-hot encoding. However, the former alleviates the drawbacks of the latter. Three Chinese sentiment text data corpora are compiled to verify the effectiveness of the proposed methodology. Our proposed method achieves the highest accuracies on these three data corpora.
The proposed two-level network and lexicon embedding can also be applied to other types of deep neural networks. In our future work, we will extend our main idea into several networks and text mining applications.
The authors wish to thank Zefeng Han, Qing Yin, Lei Yang, Xiaonan Wang, Nan Chen, Rujing Yao, Lihong Guo, Pinglong Zhao for the labeling of the experimental data. | Travel dataset contains 4100 raw samples, 11291 clauses, Hotel dataset contains 3825 raw samples, 11264 clauses, and the Mobile dataset contains 3483 raw samples and 8118 clauses |
fe90eec1e3cdaa41d2da55864c86f6b6f042a56c | fe90eec1e3cdaa41d2da55864c86f6b6f042a56c_0 | Q: What are the sources of the data?
Text: Introduction
Text is important in many artificial intelligence applications. Among various text mining techniques, sentiment analysis is a key component in applications such as public opinion monitoring and comparative analysis. Sentiment analysis can be divided into three problems according to input texts, namely, sentence, paragraph, and document levels. This study focuses on sentence and paragraph levels.
Text sentiment analysis is usually considered a text classification problem. Almost all existing text classification techniques are applied to text sentiment analysis BIBREF0 . Typical techniques include bag-of-words (BOW)-based BIBREF1 , deep learning-based BIBREF2 , and lexicon-based (or rule-based) methods BIBREF3 .
Although many achievements have been made and sentiment analysis has been successfully used in various commercial applications, its accuracy can be further improved. The construction of a high-accuracy sentiment classification model usually entails the challenging compilation of training sets with numerous samples and sufficiently accurate labels. The reason behind this difficulty is two-fold. First, sentiment is somewhat subjective, and a sample may receive different labels from different users. Second, some texts contain complex sentiment representations, and a single label is difficult to provide. We conduct a statistical analysis of public Chinese sentiment text sets in GitHub. The results show that the average label error is larger than 10%. This error value reflects the degree of difficulty of sentiment labeling.
Privative and interrogative sentences are difficult to classify when deep learning-based methods are applied. Although lexicon-based methods can deal with particular types of privative sentences, their generalization capability is poor.
We address the above issues with a new methodology. First, we introduce a two-stage labeling strategy for sentiment texts. In the first stage, annotators are invited to label a large number of short texts with relatively pure sentiment orientations. Each sample is labeled by only one annotator. In the second stage, a relatively small number of text samples with mixed sentiment orientations are annotated, and each sample is labeled by multiple annotators. Second, we propose a two-level long short-term memory (LSTM) BIBREF4 network to achieve two-level feature representation and classify the sentiment orientations of a text sample to utilize two labeled data sets. Lastly, in the proposed two-level LSTM network, lexicon embedding is leveraged to incorporate linguistic features used in lexicon-based methods.
Three Chinese sentiment data sets are compiled to investigate the performance of the proposed methodology. The experimental results demonstrate the effectiveness of the proposed methods. Our work is new in the following aspects.
The rest of this paper is organized as follows. Section 2 briefly reviews related work. Section 3 describes our methodology. Section 4 reports the experimental results, and Section 5 concludes the study.
Text Sentiment Analysis
Sentiment analysis aims to predict the sentiment polarity of an input text sample. Sentiment polarity can be divided into negative, neutral, and positive in many applications.
Existing sentiment classification methods can be roughly divided into two categories, namely, lexicon-based and machine learning-based methods BIBREF5 . Lexicon-based methods BIBREF6 construct polar and privative word dictionaries. A set of rules for polar and privative words is compiled to judge the sentiment orientation of a text document. This method cannot effectively predict implicit orientations. Machine learning-based methods BIBREF7 utilize a standard binary or multi-category classification approach. Different feature extraction algorithms, including BOW BIBREF8 and part of speech (POS) BIBREF7 , are used. Word embedding and deep neural networks have recently been applied to sentiment analysis, and promising results have been obtained BIBREF9 BIBREF10 .
Lexion-based Sentiment Classification
Lexicon-based methods are actually in implemented in an unsupervised manner. They infer the sentiment categories of input texts on the basis of polar and privative words. The primary advantage of these methods is that they do not require labeled training data. The key of lexicon-based methods is the lexical resource construction, which maps words into a category (positive, negative, neutral, or privative). Senti-WordNet BIBREF11 is a lexical resource for English text sentiment classification. For Chinese texts, Senti-HowNet is usually used.
Fig. 1 characterizes a typical lexicon-based sentiment classification approach. The approach iteratively checks each word in an input sentence from left to right. The weight score of each word is calculated according to the procedure shown in Fig. 1. The final sentiment score is the average score of the words with weight scores. The scores of positive, neutral, and negative sentiments are denoted as “+1",“0", and “-1", respectively. According to the lexicon-based algorithm shown in Fig. 1, the sentiment score of “it is not bad" is 0.25, and the sentiment score of “it is good" is 1. However, the score of “it is not so bad" is -0.75, and this score is definitely wrong. Therefore, machine learning (including feature learning) methodologies have become mainstream in sentiment analysis.
Deep Learning-based Sentiment Classification
Deep learning (including word embedding BIBREF12 ) has been applied to almost all text-related applications, such as translation BIBREF13 , quality assurance BIBREF14 , recommendation BIBREF15 , and categorization BIBREF16 . Popular deep neural networks are divided into convolutional neural networks (CNNs) BIBREF17 and recurrent neural network (RNNs) BIBREF18 BIBREF19 . Both are utilized in sentiment classification BIBREF20 . Kim investigated the use of CNN in sentence sentiment classification and achieved promising results BIBREF2 . LSTM BIBREF21 , a classical type of RNN, is the most popular network used for sentiment classification. A binary-directional LSTM BIBREF22 with an attention mechanism is demonstrated to be effective in sentiment analysis.
Deep learning-based methods rarely utilize the useful resources adopted in lexicon-based methods. Qiao et al. BIBREF23 incorporated lexicon-based cues into the training of an LSTM-based model. Their proposed method relies on a new loss function that considers the relationships between polar or certain types of words (e.g., privative) and those words next to them in input texts. Our study also combines lexical cues into LSTM. Nevertheless, unlike Qiao et al.'s study that implicitly used lexical cues, the present work explicitly uses lexical cues in the LSTM network. Shin et al. BIBREF24 combined the lexicon embeddings of polar words with word embeddings for sentiment classification. The difference between our approach an the method proposed by Shin et al. the is discussed in Section 3.3.5.
Numerous studies on aspect-level sentiment analysis exist BIBREF25 . This problem is different from the sentiment classification investigated in this study.
METHODOLOGY
This section first introduces our two-stage labeling procedure. A two-level LSTM is then proposed. Lexicon embedding is finally leveraged to incorporate lexical cues.
Two-stage Labeling
As stated earlier, sentiment is subjective, and texts usually contain mixed sentiment orientations. Therefore, texts¡¯ sentiment orientations are difficult to label. In our study, three sentiment labels, namely, positive, neutral, and negative, are used. The following sentences are taken as examples.
The service is poor. The taste is good, but the rest is not so bad.
The quality of the phone is good, but the appearance is just so-so.
In user annotation, the labels of these two sentences depend on users. If a user is concerned about service, then the label of S1 may be “negative". By contrast, for another user who does not care about service, the label may be “positive". Similarly, a user may label S2 as “positive" if he cares about quality. Another user may label it as “negative" if the conjunction “but" attracts the user¡¯s attention more. Another user may label it as “neutral" if they are concerned about quality and appearance.
The underlying reason is that sentiment is more subjective than semantics. In related research on subjective categorization, such as visual aesthetics, each sample is usually repeatedly annotated by multiple annotators, and the average label is taken as the final label of the sample. This labeling strategy can also be applied to text sentiment annotation. However, we argue that this strategy is unsuitable for a (relatively) large number of samples. The reason lies in the following two aspects.
Multiple annotators for a large number of data sets require a large budget.
In our practice, annotators claim that their judgment criteria on sentiment become fused on texts with mixed sentiment orientations (e.g., S1 and S2) over time during labeling, and they become bored accordingly.
A two-stage labeling strategy is adopted in this study. In the first stage, each sentence/paragraph is divided into several clauses according to punctuation. The sentiment of each partitioned clause is relatively easy to annotate; therefore, each clause is labeled by only one user. In the second stage, a relatively small-sized sentence/paragraph set is labeled, and each sentence is labeled by all involved annotators. We still take the two sentences, S1 and S2, as examples. S1 and S2 are split into clauses, as shown below.
S1:
S1.1: The service is poor
S1.2: The taste is good
S1.3: but the rest is not so bad.
S2:
S2.1: The quality of the phone is good
S2.2: but the appearance is just so-so.
Each of the above clauses is labeled by only one annotator. The annotation in the first stage is easy to perform; thus, the number of clauses can be larger than the number of sentences used in the second labeling stage.
Two-level LSTM
Given two training data sets (denoted by T1 and T2), a new learning model should be utilized. LSTM is a widely used deep neural network in deep learning-based text classification.
LSTM is a typical RNN model for short-term memory, which can last for a long period of time. An LSTM is applicable to classify, process, and predict time series information with given time lags of unknown size. A common LSTM block is composed of a cell, an input gate, an output gate, and a forget gate. The forward computation of an LSTM block at time INLINEFORM0 or position INLINEFORM1 is as follows BIBREF21 : DISPLAYFORM0
where INLINEFORM0 is the input vector at time INLINEFORM1 (or position INLINEFORM2 ); INLINEFORM3 and INLINEFORM4 are the input vectors of the input unit and input gate, respectively; INLINEFORM5 and INLINEFORM6 are the output and hidden vectors at time INLINEFORM7 , respectively; INLINEFORM8 is the output of the forget gate at time INLINEFORM9 ; INLINEFORM10 is the internal state of the memory cell in an LSTM block at time INLINEFORM11 ; and INLINEFORM12 is the sigmoid active function.
When LSTM is used to classify an input sentence, the hidden vectors of each input vector are summed to form a dense vector that can be considered the feature representation of the input sentence, i.e., DISPLAYFORM0
In many applications, a bi-directional LSTM (bi-LSTM) structure is usually used, as shown in Fig. 2(a). In bi-LSTM, forward and backward information are considered for information at time INLINEFORM0 ; hence, the context is modeled. Bi-LSTM is thus significantly reasonable for text processing tasks. In our two-level LSTM, bi-LSTM is used in each level.
The output hidden state at time INLINEFORM0 of a bi-LSTM block can be described as follows: DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are the corresponding vectors at time INLINEFORM3 in the forward LSTM block; and INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 are the corresponding vectors at time INLINEFORM7 in the backward LSTM block. INLINEFORM8 . When attention is used, the dense feature vector INLINEFORM9 of an input sentence is calculated as follows: DISPLAYFORM0
where INLINEFORM0 is the vector that consists of attention weights. The bi-LSTM with attention is shown in Fig. 2(b).
Our proposed network consists of two levels of LSTM network. In the first level, a bi-LSTM is used and learned on the basis of the first training set T1. This level is a conventional sentiment classification process. The input of this level is a clause, and the input INLINEFORM0 is the embedding of the basic unit of the input texts. The network is shown in Fig. 3(a).
In the second level, a bi-LSTM is also used and learned on the basis of the second training set T2. The input of this level is a sentence or a paragraph. The input INLINEFORM0 consists of two parts. The first part is the feature vector of the INLINEFORM1 -th clause. The feature vector is generated by the first-level network. In other words, the dense feature shown in Fig. 3(a) ( INLINEFORM2 ) is used. The second part is the sentiment score (not predicted label) output by the first-level network. The sentence S1 (The service is poor. The taste is good, but the rest is not so bad.) used in Subsection 3.1 is taken as an illustrative example. S1 contains three clauses. Therefore, the input vector of S1 can be represented by INLINEFORM3
where DISPLAYFORM0
where INLINEFORM0 is the output score of the INLINEFORM1 th clause by the first-level LSTM and INLINEFORM2 is the feature representation of the INLINEFORM3 th clause by the first LSTM. The network of the whole two-level network is shown in Fig. 3(b).
Lexical Embedding
The proposed lexicon embedding is based on INLINEFORM0 -hot encoding. Therefore, INLINEFORM1 -hot encoding is first described.
For categorical data, one-hot encoding is the most widely used encoding strategy when different categories are independent. For example, if one-hot encoding is used to represent three categories, namely, positive, neutral, and negative, the encoding vectors for the three categories are INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , respectively.
In this work, many lexical cues are categorical data, and different categories are independent. These lexical cues can directly be represented by one-hot encoding. The encoded vectors for lexical cues are then concatenated with other vectors, such as character/word embedding. However, one-hot encoding presents two main limitations when the encoded vector is concatenated with other vectors.
The value difference between the elements of one-hot encoded vectors and those of other encoded vectors (e.g., word embedding vectors) may be large. Fig. 4 shows the histogram of the values of the elements of the word embedding vectors. The magnitude of most elements are smaller than 1.
The lengths of one-hot encoded vectors are usually shorter than those of other encoded vectors. Consequently, the proportion of one-hot encoded part is small in the concatenated vectors.
The above two limitations affect the final sentiment analysis performance. To address them, we propose a new encoding strategy. DISPLAYFORM0
where INLINEFORM0 is the INLINEFORM1 -hot encoded vector, INLINEFORM2 is the proportion parameter, INLINEFORM3 is the one-hot encoded vector, and INLINEFORM4 is an INLINEFORM5 -dimensional vector. If INLINEFORM6 and INLINEFORM7 are equal to 1, then INLINEFORM8 -hot encoding is reduced to one-hot encoding. The parameter INLINEFORM9 is applied to increase the length of the final encoded vector.
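A small sketch of this encoding: the one-hot vector is repeated n times and scaled by the proportion parameter p (in the paper p is learned during training; it is fixed here only for illustration).

    import numpy as np

    def n_hot(category_index, num_categories, n=10, p=0.4):
        one_hot = np.zeros(num_categories)
        one_hot[category_index] = 1.0
        # repeat the one-hot vector n times and scale it by p
        return p * np.tile(one_hot, n)

    # e.g., encoding "negative" among {Pos, Neg, Pri, Sup, Int, Oth}
    neg_vector = n_hot(category_index=1, num_categories=6, n=10)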
Most lexicon-based sentiment methods rely on four types of words, namely, positive, negative, neutral, and privative. These words are useful cues for predicting the sentiment labels of input texts, so incorporating them should also be useful here. Sentiments expressed in a conditional sentence can be difficult to determine due to the semantic condition, and a previous study has shown that such sentences make up approximately 8% of a typical document BIBREF26 . The sentiment polarities of interrogative sentences are also difficult to classify according to our empirical study.
Five types of words, namely, positive (Pos), negative (Neg), privative (Pri), suppositive (Sup), and interrogative (Int), are represented by the proposed encoding method. The remaining words, which do not belong to any of the above five types, are named “others (Oth)" instead of “neutral" because some words, such as “the", are unrelated to “sentiment". The value of INLINEFORM0 in Eq. (6) is set as 10. The encoded vectors are as follows. INLINEFORM1
In the proposed INLINEFORM0 -hot embedding, the parameter INLINEFORM1 can be learned during training. The representation of the third clause (“but the rest is not so bad") of S1 in Subsection 3.1 is taken as an illustrative example. The new embedding of each word in this clause is as follows. DISPLAYFORM0
Certain types (e.g., positive, negative, and privative) of words should play more important roles than other words do in texts; therefore, their embeddings are also used in the attention layer. A new LSTM based on our lexicon embedding is proposed, as shown in Fig. 5. The attention layer and final dense vector of the network in Fig. 3(a) are calculated as follows. DISPLAYFORM0
where INLINEFORM0 is the attention weight for the INLINEFORM1 -th input, lt is the lexicon embedding for key lexical words for the INLINEFORM2 -th input, and INLINEFORM3 is the final dense vector. Eq. (2) is used in the first-level LSTM.
POS is usually used as a key cue in sentiment analysis BIBREF27 . To this end, we use additional lexicon embedding. The new lexicon embedding includes several major types of POS, namely, interrogative, exclamatory, and others. This new lexicon embedding is also applied to the attention layer. The motivation is that certain POS types should play important roles in expressing sentiment.
The proposed INLINEFORM0 -hot embedding is still applied to POS types in this study. According to our initial case studies, eight POS types are considered. They are noun, adjective, verb, pronoun, adverb, preposition, accessory, and others. The eight POS types are represented by the proposed INLINEFORM1 -hot encoding. We let INLINEFORM2 in Eq. (6) be 10. The first three POS types are as follows. INLINEFORM3
When POS embedding is used, the attention layer and final outputs of the network in Eq. (3) become DISPLAYFORM0
where INLINEFORM0 is the lexicon embedding for key lexical words for the INLINEFORM1 -th input.
Conjunction words play important roles in sentiment analysis BIBREF28 . For example, conjunctions such as “but" and “moreover" usually indicate the focus of texts and attract readers' attention. Therefore, conjunctions are considered in the input of the second-level LSTM.
Once a set of conjunction words is compiled, INLINEFORM0 -hot embedding is used. In our experiments, the number of conjunction words is 169. Therefore, the parameter INLINEFORM1 in Eq. (2) is set as 1.
When conjunction embedding is used for the second-level layer, the attention layer and final outputs of the network in Fig. 3(b) are calculated as follows. DISPLAYFORM0
where INLINEFORM0 is the attention weight for the INLINEFORM1 -th input clause; INLINEFORM2 is the hidden vector of the second-level LSTM; INLINEFORM3 and INLINEFORM4 are the conjunction embeddings for the first and last words in the INLINEFORM5 -th input clause, respectively; and INLINEFORM6 is the final dense vector used for the final classification.
Shin et al. BIBREF24 also embedded lexical information into sentiment analysis. Three major differences exist between our method and the method proposed by Shin et al. BIBREF24 .
The lexicon embedding proposed by Shin et al. uses one-hot encoding, whereas the proposed method uses a new encoding strategy that can be considered a soft one-hot encoding.
The lexicon embedding proposed by Shin et al. extends the length of raw encoded vectors. However, the extension aims to keep the lengths of lexical and word embeddings equal. Their extension method also only relies on zero padding and is thus different from the proposed method.
Only sentimental words are considered in the lexicon embedding proposed by Shin et al. On the contrary, sentimental words, POS, and conjunctions are considered in our work.
The Learning Procedure
The algorithmic steps of the entire learning procedure for the proposed INLINEFORM0 -hot lexicon embedding-based two-level LSTM (called INLINEFORM1 Tl-LSTM) are shown in Algorithm 1. In Algorithm 1, T1 refers to the training data that consist of clauses and the labels obtained in the first-stage labeling procedure. T2 refers to the training data that consist of sentences and the labels obtained in the second-stage labeling procedure. The structure of INLINEFORM2 Tl-LSTM is presented in Fig. 6.
INLINEFORM0 Tl-LSTM Input: Training sets T1 and T2; dictionary of key lexical words; POS for each word; dictionary of conjunction words; character/word embeddings for each character/word.
Output: A trained two-level LSTM for sentiment classification.
Steps:
Construct the embedding vector for each character (including punctuation) in the clauses in T1. The embeddings include the character/word and lexicon embeddings of each character/word;
Train the first-level LSTM on the basis of the input embedding vectors and labels of the T1 text clauses;
Run the learned first-level LSTM on each clause of the text samples in T2. Record the predicted score INLINEFORM0 and the final dense vector INLINEFORM1 for each clause;
Construct the embedding vector for each clause in the text samples in T2. Each embedding vector consists of INLINEFORM0 , INLINEFORM1 , and the lexicon embedding of conjunctions of each clause;
Train the second-level LSTM on the basis of the input embedding vectors and labels of the T2 text samples.
The first-level and second-level LSTM networks together constitute the final two-level LSTM.
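The steps above might be organized in code roughly as follows. This is a high-level sketch of our own; embed_clause, embed_with_conjunctions, and train_bilstm are placeholder callables, not functions from the paper, and the clause/score interfaces are assumptions.

    def train_two_level_lstm(T1, T2, embed_clause, embed_with_conjunctions, train_bilstm):
        # T1: list of (clause, label); T2: list of (sentence, label).
        # Steps 1-2: train the first-level bi-LSTM on labeled clauses.
        X1 = [embed_clause(clause) for clause, _ in T1]
        y1 = [label for _, label in T1]
        first_level = train_bilstm(X1, y1)

        # Steps 3-4: run the first level on every clause of each sample in T2.
        X2, y2 = [], []
        for sentence, label in T2:
            clause_outputs = [first_level.predict(embed_clause(c)) for c in sentence.clauses]
            X2.append(embed_with_conjunctions(clause_outputs, sentence))
            y2.append(label)

        # Step 5: train the second-level bi-LSTM on the sentence-level inputs.
        second_level = train_bilstm(X2, y2)
        return first_level, second_level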
The proposed two-level LSTM can be applied to texts with arbitrary languages. Word information is required in lexical construction regardless of whether character or word embedding is used. The reason is that the three types of lexicon embeddings are performed at the word level. Therefore, when character embedding is used, the lexicon embedding of each character is the lexicon embedding of the word containing it.
This section shows the evaluation of the proposed methodology in terms of the two-level LSTM network and each part of the lexicon embedding.
We compile three Chinese text corpora from online data for three domains, namely, “hotel", “mobile phone (mobile)", and “travel". All texts are about user reviews. Each text sample collected is first partitioned into clauses according to Chinese tokens. Three clause sets are subsequently obtained from the three text corpora.
The labels “+1", “0.5", and “0" correspond to the three sentiment classes “positive", “neutral", and “negative", respectively. The text data are labeled according to our two-stage labeling strategy.
In the first stage, only one user is invited to label each clause sample as the sentiment orientations for clauses (or sub-sentences) are easy to label.
In the second stage, five users are invited to label each text sample in the three raw data sets. The average score of the five users on each sample is calculated. Samples with average scores located in [0.6, 1] are labeled as “positive". Samples with average scores located in [0, 0.4] are labeled as “negative". Others are labeled as “neutral". The details of the labeling results are shown in Table 1.
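The second-stage aggregation rule can be expressed as a small helper (our own illustration of the thresholds above):

    def aggregate_label(scores):
        # Map the average of the five annotators' scores (each in {0, 0.5, 1}) to a class.
        avg = sum(scores) / len(scores)
        if avg >= 0.6:
            return "positive"
        if avg <= 0.4:
            return "negative"
        return "neutral"

    assert aggregate_label([1, 1, 0.5, 1, 0.5]) == "positive"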
All the training and test data and the labels are available online.
In our experiments, the five types of key lexical words introduced in Subsection 3.3.2 are manually constructed. The details of the five types of words are listed in Table 2. The conjunction words are also manually constructed. The number of conjunction words used in the experiments is 169.
In each experimental run, the training set is compiled on the basis of the training data listed in Table 1. The compiling rule is specified before each experimental run. The test data are fixed to facilitate experimental duplication and comparison by other researchers.
In our experiments, three competing algorithms, namely, BOW, CNN, and (conventional) LSTM, are used.
For BOW, term frequency-inverse document frequency is utilized to construct features. Ridge regression BIBREF29 is used as a classifier. For CNN, a three-channel CNN is used. For LSTM, one-layer and two-layer bi-LSTM with attention are adopted, and the results of the network with superior performance are presented. CNN and LSTM are performed on TensorFlow, and default parameter settings are followed.
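For reference, the BOW baseline could be reproduced along the following lines with scikit-learn; the character n-gram setting is our own assumption for Chinese text, as the paper does not specify the exact toolkit or tokenization used for this baseline.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import RidgeClassifier
    from sklearn.pipeline import make_pipeline

    def bow_baseline(train_texts, train_labels, test_texts):
        model = make_pipeline(
            TfidfVectorizer(analyzer="char", ngram_range=(1, 2)),  # character n-grams for Chinese
            RidgeClassifier(),
        )
        model.fit(train_texts, train_labels)
        return model.predict(test_texts)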
The key parameters are searched as follows. The embedding dimensions of characters and words are searched in [100, 150, 200, 250, 300]. The parameter INLINEFORM0 in INLINEFORM1 -hot encoding is searched in INLINEFORM2 .
In this subsubsection, each of the three raw data sets (associated with their labels) shown in Table 1 is used. The clause data are not used. In other words, the training data used in this subsubsection are the same as those used in previous studies. For each data corpus, 1000 raw data samples are used as the test data, and the rest are used as the training data. The involved algorithms are detailed as follows.
CNN-C denotes the CNN with (Chinese) character embedding.
CNN-W denotes the CNN with (Chinese) word embedding.
CNN-Lex-C denotes the algorithm proposed by Shin et al. BIBREF24 , which also integrates polar words into the CNN. The (Chinese) character embedding is used.
CNN-Lex-W denotes the algorithm proposed by Shin et al. BIBREF24 , which also integrates polar words into the CNN. The (Chinese) word embedding is used.
Bi-LSTM-C denotes the Bi-LSTM with (Chinese) character embedding.
Bi-LSTM-W denotes the Bi-LSTM with (Chinese) word embedding.
Lex-rule denotes the rule-based approach shown in Fig. 1. This approach is unsupervised.
BOW denotes the conventional algorithm which is based on bag-of-words features.
The accuracies of the above algorithms are listed in Table 3. Overall, Bi-LSTM outperforms CNN and BOW. This conclusion is in accordance with the finding from extensive comparative studies that RNN performs well against CNN in a broad range of natural language processing (NLP) tasks BIBREF30 . In addition, CNN-Lex outperforms CNN under both character and word embeddings, which suggests that lexicon cues are useful in sentiment analysis. Lex-rule achieves the lowest accuracies on all the three data sets. Considering that the performances of (traditional) CNN, Lex-rule, and BOW are relatively poor, they are not included in the remaining experiments.
In this experimental comparison, the proposed two-level LSTM is evaluated, whereas lexicon embedding is not used in the entire network. The primary goal is to test whether the introduced two-stage labeling and the two-level network structure are useful for sentiment analysis.
The raw and clause data listed in Table 1 are used to perform the two-level LSTM. Tl-LSTM denotes the two-level LSTM. “R+C" refers to the mixed data of raw and clause data. The test data are still the 1000 samples used in Section 4.3.1 for each corpus. Table 4 shows the classification accuracies. To ensure that the results differ from those in Table 3, we explicitly add “R+C" after each algorithm in Table 4. In the last line of Table 4, the base results for each corpus in Table 3 are also listed.
On all the three data corpora, the proposed two-level network (without lexicon embedding) with character embedding, Tl-LSTM-C, outperforms all the other involved algorithms. On the travel and the mobile corpora, Tl-LSTM-W outperforms Bi-LSTM-W. The results in Table 4 indicate that the performances of Tl-LSTM on the mixed training and test data (R+C) are better than those of Bi-LSTM. This comparison indicates that the proposed two-level LSTM is effective.
In addition, for the involved algorithms, most results achieved on “R+C" are better than the best results achieved only on “R" listed in Table 3. This comparison suggests that the introduced two-stage labeling is useful.
The results also show that in the two-level LSTM, character embedding is more effective than word embedding.
In this experimental run, lexicon embedding is used in the proposed two-level LSTM or INLINEFORM0 Tl-LSTM. Table 5 shows the results. The optimal parameter INLINEFORM1 is about 11.
The performances of the two-level LSTM with lexicon embedding (i.e., INLINEFORM0 Tl-LSTM) are consistently better than those of the two-level LSTM without lexicon embedding (i.e., Tl-LSTM) listed in Table 5. The improved accuracies of INLINEFORM1 Tl-LSTM over Tl-LSTM on the three data corpora are explicitly listed in Table 6.
The experimental evaluation discussed in Subsection 4.3 verifies the effectiveness of the proposed method, INLINEFORM0 Tl-LSTM. Unlike the conventional RNN, INLINEFORM1 Tl-LSTM contains lexicon embedding that consists of new techniques and components, including INLINEFORM2 -hot encoding, embedding for polar words, embedding for POS, and embedding for conjunctions. Therefore, this subsection evaluates the performances of the involved technique and embeddings separately.
Our INLINEFORM0 -hot encoding differs from one-hot encoding in two aspects. The first aspect is that the nonzero values in one-hot encoding are only equal to 1, whereas the nonzero values in INLINEFORM1 -hot encoding are INLINEFORM2 . The second aspect is that only one element in one-hot encoding is nonzero, whereas n elements in INLINEFORM3 -hot encoding are nonzero.
In this experiment, we test whether INLINEFORM0 -hot encoding is useful in two experimental runs. In the first run, the value of INLINEFORM1 is manually set to 0.5 and 1 without optimization. The parameter INLINEFORM2 in Eq. (6) is set as 15. The classification accuracies vary according to different INLINEFORM3 values on all the three data corpora. When INLINEFORM4 equals 1, the accuracies are the lowest in most cases, as shown in Fig. 7.
The results shown in Fig. 7 indicate that the value of INLINEFORM0 does affect the performance of the entire network. Consequently, the classical one-hot encoding, which fixes the value of nonzero elements as 1, is ineffective. In our experiments, the learned value of INLINEFORM1 is approximately 0.4.
In the second run, the performances under different INLINEFORM0 (i.e., 1, 5, 10, 15) are tested. Table 7 shows the comparison results. The value of INLINEFORM1 does affect the performance of the entire network, thereby indicating that the introduction of the INLINEFORM2 -duplicated strategy in encoding is effective. In the experiments, when INLINEFORM3 is increasing, the accuracies first increase and then decrease. The main reason may lie in the fact that when INLINEFORM4 becomes large, the proportion of lexicon embedding becomes large accordingly. An over-length input feature vector may incur “curse of dimensionality" and thus weaken the performance of the proposed two-level network.
In this experimental run, we test whether the labeled polar (negative and positive) words do affect the performance of the entire method when they are used in lexicon embedding. To this end, we order the polar words according to their frequencies in the training data. The top 0%, 50%, and 100% of the polar words are used. The corresponding classification accuracies are depicted in Fig. 8.
In most cases, the accuracies are the lowest when no polar words are used in the lexicon embedding. When all polar words are used, the proposed network achieves the highest accuracies.
In the experiment, only one user is invited to manually compile the dictionary for a data corpus. One and a half hours are needed for each data corpus. In our view, manually compiling the polar words for sentiment analysis is worthwhile considering the performance improvement relative to the time consumed.
In this experimental run, we test whether POS cues do play positive roles in the entire model. To this end, we remove POS in the lexicon embedding of the proposed method. The results are shown in Fig. 9.
In most cases, the accuracies with POS embedding are greater than those without POS embedding, thereby indicating that the application of POS to lexicon embedding is useful.
In this experimental run, we test whether conjunction cues do play positive roles in the entire model. To this end, the lexicon embedding for conjunction words is also removed from the proposed method. The results are shown in Fig. 10.
The algorithm with conjunction embedding outperforms that without conjunction embedding consistently, thereby indicating that the application of conjunction to lexicon embedding is useful.
High-quality labels are crucial for learning systems. Nevertheless, texts with mixed sentiments are difficult for humans to label in text sentiment classification. In this study, a new labeling strategy is introduced to partition texts into those with pure and mixed sentiment orientations. These two categories of texts are labeled using different processes. A two-level network is accordingly proposed to utilize the two labeled data in our two-stage labeling strategy. Lexical cues (e.g., polar words, POS, conjunction words) are particularly useful in sentiment analysis. These lexical cues are used in our two-level network, and a new encoding strategy, that is, INLINEFORM0 -hot encoding, is introduced. INLINEFORM1 -hot encoding is motivated by one-hot encoding. However, the former alleviates the drawbacks of the latter. Three Chinese sentiment text data corpora are compiled to verify the effectiveness of the proposed methodology. Our proposed method achieves the highest accuracies on these three data corpora.
The proposed two-level network and lexicon embedding can also be applied to other types of deep neural networks. In our future work, we will extend our main idea into several networks and text mining applications.
The authors wish to thank Zefeng Han, Qing Yin, Lei Yang, Xiaonan Wang, Nan Chen, Rujing Yao, Lihong Guo, Pinglong Zhao for the labeling of the experimental data. | User reviews written in Chinese collected online for hotel, mobile phone, and travel domains |
9d5df9022cc9eb04b9f5c5a9d8308a332ebdf50c | 9d5df9022cc9eb04b9f5c5a9d8308a332ebdf50c_0 | Q: What is the new labeling strategy?
Text: Introduction
Text is important in many artificial intelligence applications. Among various text mining techniques, sentiment analysis is a key component in applications such as public opinion monitoring and comparative analysis. Sentiment analysis can be divided into three problems according to input texts, namely, sentence, paragraph, and document levels. This study focuses on sentence and paragraph levels.
Text sentiment analysis is usually considered a text classification problem. Almost all existing text classification techniques are applied to text sentiment analysis BIBREF0 . Typical techniques include bag-of-words (BOW)-based BIBREF1 , deep learning-based BIBREF2 , and lexicon-based (or rule-based) methods BIBREF3 .
Although many achievements have been made and sentiment analysis has been successfully used in various commercial applications, its accuracy can be further improved. The construction of a high-accuracy sentiment classification model usually entails the challenging compilation of training sets with numerous samples and sufficiently accurate labels. The reason behind this difficulty is two-fold. First, sentiment is somewhat subjective, and a sample may receive different labels from different users. Second, some texts contain complex sentiment representations, and a single label is difficult to provide. We conduct a statistical analysis of public Chinese sentiment text sets in GitHub. The results show that the average label error is larger than 10%. This error value reflects the degree of difficulty of sentiment labeling.
Privative and interrogative sentences are difficult to classify when deep learning-based methods are applied. Although lexicon-based methods can deal with particular types of privative sentences, their generalization capability is poor.
We address the above issues with a new methodology. First, we introduce a two-stage labeling strategy for sentiment texts. In the first stage, annotators are invited to label a large number of short texts with relatively pure sentiment orientations. Each sample is labeled by only one annotator. In the second stage, a relatively small number of text samples with mixed sentiment orientations are annotated, and each sample is labeled by multiple annotators. Second, we propose a two-level long short-term memory (LSTM) BIBREF4 network to achieve two-level feature representation and classify the sentiment orientations of a text sample to utilize two labeled data sets. Lastly, in the proposed two-level LSTM network, lexicon embedding is leveraged to incorporate linguistic features used in lexicon-based methods.
Three Chinese sentiment data sets are compiled to investigate the performance of the proposed methodology. The experimental results demonstrate the effectiveness of the proposed methods. Our work is new in the following aspects.
The rest of this paper is organized as follows. Section 2 briefly reviews related work. Section 3 describes our methodology. Section 4 reports the experimental results, and Section 5 concludes the study.
Text Sentiment Analysis
Sentiment analysis aims to predict the sentiment polarity of an input text sample. Sentiment polarity can be divided into negative, neutral, and positive in many applications.
Existing sentiment classification methods can be roughly divided into two categories, namely, lexicon-based and machine learning-based methods BIBREF5 . Lexicon-based methods BIBREF6 construct polar and privative word dictionaries. A set of rules for polar and privative words is compiled to judge the sentiment orientation of a text document. This method cannot effectively predict implicit orientations. Machine learning-based methods BIBREF7 utilize a standard binary or multi-category classification approach. Different feature extraction algorithms, including BOW BIBREF8 and part of speech (POS) BIBREF7 , are used. Word embedding and deep neural networks have recently been applied to sentiment analysis, and promising results have been obtained BIBREF9 BIBREF10 .
Lexicon-based Sentiment Classification
Lexicon-based methods are actually implemented in an unsupervised manner. They infer the sentiment categories of input texts on the basis of polar and privative words. The primary advantage of these methods is that they do not require labeled training data. The key to lexicon-based methods is the construction of the lexical resource, which maps words into a category (positive, negative, neutral, or privative). Senti-WordNet BIBREF11 is a lexical resource for English text sentiment classification. For Chinese texts, Senti-HowNet is usually used.
Fig. 1 characterizes a typical lexicon-based sentiment classification approach. The approach iteratively checks each word in an input sentence from left to right. The weight score of each word is calculated according to the procedure shown in Fig. 1. The final sentiment score is the average score of the words with weight scores. The scores of positive, neutral, and negative sentiments are denoted as “+1",“0", and “-1", respectively. According to the lexicon-based algorithm shown in Fig. 1, the sentiment score of “it is not bad" is 0.25, and the sentiment score of “it is good" is 1. However, the score of “it is not so bad" is -0.75, and this score is definitely wrong. Therefore, machine learning (including feature learning) methodologies have become mainstream in sentiment analysis.
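A much-simplified scorer in the same spirit is sketched below; the word lists are illustrative, and the sketch does not reproduce the exact weighting procedure of Fig. 1 (which is what yields the 0.25 and -0.75 scores above).

    POSITIVE = {"good"}
    NEGATIVE = {"poor", "bad"}
    PRIVATIVE = {"not"}

    def rule_score(words):
        scores, sign = [], 1
        for w in words:
            if w in PRIVATIVE:
                sign = -sign              # a privative word flips the next polar word
            elif w in POSITIVE:
                scores.append(sign * 1)
                sign = 1
            elif w in NEGATIVE:
                scores.append(sign * -1)
                sign = 1
        return sum(scores) / len(scores) if scores else 0.0

    print(rule_score("it is not so bad".split()))  # 1.0 in this simplified version: "not" flips "bad"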
Deep Learning-based Sentiment Classification
Deep learning (including word embedding BIBREF12 ) has been applied to almost all text-related applications, such as translation BIBREF13 , quality assurance BIBREF14 , recommendation BIBREF15 , and categorization BIBREF16 . Popular deep neural networks are divided into convolutional neural networks (CNNs) BIBREF17 and recurrent neural networks (RNNs) BIBREF18 BIBREF19 . Both are utilized in sentiment classification BIBREF20 . Kim investigated the use of CNN in sentence sentiment classification and achieved promising results BIBREF2 . LSTM BIBREF21 , a classical type of RNN, is the most popular network used for sentiment classification. A bi-directional LSTM BIBREF22 with an attention mechanism is demonstrated to be effective in sentiment analysis.
Deep learning-based methods rarely utilize the useful resources adopted in lexicon-based methods. Qiao et al. BIBREF23 incorporated lexicon-based cues into the training of an LSTM-based model. Their proposed method relies on a new loss function that considers the relationships between polar or certain types of words (e.g., privative) and the words next to them in input texts. Our study also combines lexical cues into LSTM. Nevertheless, unlike Qiao et al.'s study that implicitly used lexical cues, the present work explicitly uses lexical cues in the LSTM network. Shin et al. BIBREF24 combined the lexicon embeddings of polar words with word embeddings for sentiment classification. The difference between our approach and the method proposed by Shin et al. is discussed in Section 3.3.5.
Numerous studies on aspect-level sentiment analysis exist BIBREF25 . This problem is different from the sentiment classification investigated in this study.
METHODOLOGY
This section first introduces our two-stage labeling procedure. A two-level LSTM is then proposed. Lexicon embedding is finally leveraged to incorporate lexical cues.
Two-stage Labeling
As stated earlier, sentiment is subjective, and texts usually contain mixed sentiment orientations. Therefore, texts' sentiment orientations are difficult to label. In our study, three sentiment labels, namely, positive, neutral, and negative, are used. The following sentences are taken as examples.
The service is poor. The taste is good, but the rest is not so bad.
The quality of the phone is good, but the appearance is just so-so.
In user annotation, the labels of these two sentences depend on users. If a user is concerned about service, then the label of S1 may be “negative". By contrast, for another user who does not care about service, the label may be “positive". Similarly, a user may label S2 as “positive" if he cares about quality. Another user may label it as “negative" if the conjunction “but" attracts the user's attention more. Another user may label it as “neutral" if they are concerned about quality and appearance.
The underlying reason is that sentiment is more subjective than semantics. In related research on subjective categorization, such as visual aesthetics, each sample is usually repeatedly annotated by multiple annotators, and the average label is taken as the final label of the sample. This labeling strategy can also be applied to text sentiment annotation. However, we argue that this strategy is unsuitable for a (relatively) large number of samples. The reason lies in the following two aspects.
Multiple annotators for a large number of data sets require a large budget.
In our practice, annotators claim that their judgment criteria on sentiment become confused on texts with mixed sentiment orientations (e.g., S1 and S2) over time during labeling, and they become bored accordingly.
A two-stage labeling strategy is adopted in this study. In the first stage, each sentence/paragraph is divided into several clauses according to punctuation. The sentiment of each partitioned clause is relatively easy to annotate; therefore, each clause is labeled by only one user. In the second stage, a relatively small-sized sentence/paragraph set is labeled, and each sentence is labeled by all involved annotators. We still take the two sentences, S1 and S2, as examples. S1 and S2 are split into clauses, as shown below.
S1:
S1.1: The service is poor
S1.2: The taste is good
S1.3: but the rest is not so bad.
S2:
S2.1: The quality of the phone is good
S2.2: but the appearance is just so-so.
Each of the above clauses is labeled by only one annotator. The annotation in the first stage is easy to perform; thus, the number of clauses can be larger than the number of sentences used in the second labeling stage.
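Splitting a sample into clauses for the first labeling stage can be done with a punctuation-based rule such as the following sketch; the exact delimiter set is our own choice.

    import re

    CLAUSE_DELIMITERS = r"[,,。.!!??;;]"  # ASCII and full-width Chinese punctuation

    def split_into_clauses(text):
        parts = re.split(CLAUSE_DELIMITERS, text)
        return [p.strip() for p in parts if p.strip()]

    print(split_into_clauses("The service is poor. The taste is good, but the rest is not so bad."))
    # ['The service is poor', 'The taste is good', 'but the rest is not so bad']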
Two-level LSTM
Given two training data sets (denoted by T1 and T2), a new learning model should be utilized. LSTM is a widely used deep neural network in deep learning-based text classification.
LSTM is a typical RNN model that maintains short-term memory over long periods of time. An LSTM is applicable to classifying, processing, and predicting time series information with time lags of unknown size. A common LSTM block is composed of a cell, an input gate, an output gate, and a forget gate. The forward computation of an LSTM block at time INLINEFORM0 or position INLINEFORM1 is as follows BIBREF21 : DISPLAYFORM0
where INLINEFORM0 is the input vector at time INLINEFORM1 (or position INLINEFORM2 ); INLINEFORM3 and INLINEFORM4 are the input vectors of the input unit and input gate, respectively; INLINEFORM5 and INLINEFORM6 are the output and hidden vectors at time INLINEFORM7 , respectively; INLINEFORM8 is the output of the forget gate at time INLINEFORM9 ; INLINEFORM10 is the internal state of the memory cell in an LSTM block at time INLINEFORM11 ; and INLINEFORM12 is the sigmoid active function.
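For reference, a single forward step of a standard LSTM block as described above might look as follows in NumPy; the stacked weight layout is our own choice, and the bias terms are folded into a single vector for brevity.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x_t, h_prev, c_prev, W, U, b):
        # W: (4*d, input dim), U: (4*d, d), b: (4*d,) hold the stacked parameters
        # of the input, forget, and output gates and the cell candidate.
        d = h_prev.shape[0]
        z = W @ x_t + U @ h_prev + b
        i = sigmoid(z[0:d])          # input gate
        f = sigmoid(z[d:2 * d])      # forget gate
        o = sigmoid(z[2 * d:3 * d])  # output gate
        g = np.tanh(z[3 * d:4 * d])  # cell candidate
        c_t = f * c_prev + i * g     # internal state of the memory cell
        h_t = o * np.tanh(c_t)       # hidden/output vector
        return h_t, c_t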
When LSTM is used to classify an input sentence, the hidden vectors corresponding to each input vector are summed to form a dense vector that can be considered the feature representation of the input sentence, i.e., DISPLAYFORM0
In many applications, a bi-directional LSTM (bi-LSTM) structure is usually used, as shown in Fig. 2(a). In bi-LSTM, both forward and backward information are considered at time INLINEFORM0 ; hence, the context is modeled. Bi-LSTM is therefore well suited to text processing tasks. In our two-level LSTM, bi-LSTM is used in each level.
The output hidden state at time INLINEFORM0 of a bi-LSTM block can be described as follows: DISPLAYFORM0
where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are the corresponding vectors at time INLINEFORM3 in the forward LSTM block; and INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 are the corresponding vectors at time INLINEFORM7 in the backward LSTM block. INLINEFORM8 . When attention is used, the dense feature vector INLINEFORM9 of an input sentence is calculated as follows: DISPLAYFORM0
where INLINEFORM0 is the vector that consists of attention weights. The bi-LSTM with attention is shown in Fig. 2(b).
Our proposed network consists of two levels of LSTM network. In the first level, a bi-LSTM is used and learned on the basis of the first training set T1. This level is a conventional sentiment classification process. The input of this level is a clause, and the input INLINEFORM0 is the embedding of the basic unit of the input texts. The network is shown in Fig. 3(a).
In the second level, a bi-LSTM is also used and learned on the basis of the second training set T2. The input of this level is a sentence or a paragraph. The input INLINEFORM0 consists of two parts. The first part is the feature vector of the INLINEFORM1 -th clause. The feature vector is generated by the first-level network. In other words, the dense feature shown in Fig. 3(a) ( INLINEFORM2 ) is used. The second part is the sentiment score (not predicted label) output by the first-level network. The sentence S1 (The service is poor. The taste is good, but the rest is not so bad.) used in Subsection 3.1 is taken as an illustrative example. S1 contains three clauses. Therefore, the input vector of S1 can be represented by INLINEFORM3
where DISPLAYFORM0
where INLINEFORM0 is the output score of the INLINEFORM1 th clause by the first-level LSTM and INLINEFORM2 is the feature representation of the INLINEFORM3 th clause by the first LSTM. The network of the whole two-level network is shown in Fig. 3(b).
Lexical Embedding
The proposed lexicon embedding is based on INLINEFORM0 -hot encoding. Therefore, INLINEFORM1 -hot encoding is first described.
For categorical data, one-hot encoding is the most widely used encoding strategy when different categories are independent. For example, if one-hot encoding is used to represent three categories, namely, positive, neutral, and negative, the encoding vectors for the three categories are INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 , respectively.
In this work, many lexical cues are categorical data, and different categories are independent. These lexical cues can directly be represented by one-hot encoding. The encoded vectors for lexical cues are then concatenated with other vectors, such as character/word embedding. However, one-hot encoding presents two main limitations when the encoded vector is concatenated with other vectors.
The value difference between the elements of one-hot encoded vectors and those of other encoded vectors (e.g., word embedding vectors) may be large. Fig. 4 shows the histogram of the values of the elements of the word embedding vectors. The magnitudes of most elements are smaller than 1.
The lengths of one-hot encoded vectors are usually shorter than those of other encoded vectors. Consequently, the proportion of one-hot encoded part is small in the concatenated vectors.
The above two limitations affect the final sentiment analysis performance. To address them, we propose a new encoding strategy. DISPLAYFORM0
where INLINEFORM0 is the INLINEFORM1 -hot encoded vector, INLINEFORM2 is the proportion parameter, INLINEFORM3 is the one-hot encoded vector, and INLINEFORM4 is an INLINEFORM5 -dimensional vector. If INLINEFORM6 and INLINEFORM7 are equal to 1, then INLINEFORM8 -hot encoding is reduced to one-hot encoding. The parameter INLINEFORM9 is applied to increase the length of the final encoded vector.
Most lexicon-based sentiment methods rely on four types of words, namely, positive, negative, neutral, and privative. These words are useful cues for predicting the sentiment labels of input texts, so incorporating them should also be useful here. Sentiments expressed in a conditional sentence can be difficult to determine due to the semantic condition, and a previous study has shown that such sentences make up approximately 8% of a typical document BIBREF26 . The sentiment polarities of interrogative sentences are also difficult to classify according to our empirical study.
Five types of words, namely, positive (Pos), negative (Neg), privative (Pri), suppositive (Sup), and interrogative (Int), are represented by the proposed encoding method. The remaining words, which do not belong to any of the above five types, are named “others (Oth)" instead of “neutral" because some words, such as “the", are unrelated to “sentiment". The value of INLINEFORM0 in Eq. (6) is set as 10. The encoded vectors are as follows. INLINEFORM1
In the proposed INLINEFORM0 -hot embedding, the parameter INLINEFORM1 can be learned during training. The representation of the third clause (“but the rest is not so bad") of S1 in Subsection 3.1 is taken as an illustrative example. The new embedding of each word in this clause is as follows. DISPLAYFORM0
Certain types (e.g., positive, negative, and privative) of words should play more important roles than other words do in texts; therefore, their embeddings are also used in the attention layer. A new LSTM based on our lexicon embedding is proposed, as shown in Fig. 5. The attention layer and final dense vector of the network in Fig. 3(a) are calculated as follows. DISPLAYFORM0
where INLINEFORM0 is the attention weight for the INLINEFORM1 -th input, lt is the lexicon embedding for key lexical words for the INLINEFORM2 -th input, and INLINEFORM3 is the final dense vector. Eq. (2) is used in the first-level LSTM.
POS is usually used as a key cue in sentiment analysis BIBREF27 . To this end, we use additional lexicon embedding. The new lexicon embedding includes several major types of POS, namely, interrogative, exclamatory, and others. This new lexicon embedding is also applied to the attention layer. The motivation is that certain POS types should play important roles in expressing sentiment.
The proposed INLINEFORM0 -hot embedding is still applied to POS types in this study. According to our initial case studies, eight POS types are considered. They are noun, adjective, verb, pronoun, adverb, preposition, accessory, and others. The eight POS types are represented by the proposed INLINEFORM1 -hot encoding. We let INLINEFORM2 in Eq. (6) be 10. The first three POS types are as follows. INLINEFORM3
When POS embedding is used, the attention layer and final outputs of the network in Eq. (3) become DISPLAYFORM0
where INLINEFORM0 is the lexicon embedding for key lexical words for the INLINEFORM1 -th input.
Conjunction words play important roles in sentiment analysis BIBREF28 . For example, conjunctions such as “but" and “moreover" usually indicate the focus of texts and attract readers' attention. Therefore, conjunctions are considered in the input of the second-level LSTM.
Once a set of conjunction words is compiled, INLINEFORM0 -hot embedding is used. In our experiments, the number of conjunction words is 169. Therefore, the parameter INLINEFORM1 in Eq. (2) is set as 1.
When conjunction embedding is used for the second-level layer, the attention layer and final outputs of the network in Fig. 3(b) are calculated as follows. DISPLAYFORM0
where INLINEFORM0 is the attention weight for the INLINEFORM1 -th input clause; INLINEFORM2 is the hidden vector of the second-level LSTM; INLINEFORM3 and INLINEFORM4 are the conjunction embeddings for the first and last words in the INLINEFORM5 -th input clause, respectively; and INLINEFORM6 is the final dense vector used for the final classification.
Shin et al. BIBREF24 also embedded lexical information into sentiment analysis. Three major differences exist between our method and the method proposed by Shin et al. BIBREF24 .
The lexicon embedding proposed by Shin et al. uses one-hot encoding, whereas the proposed method uses a new encoding strategy that can be considered a soft one-hot encoding.
The lexicon embedding proposed by Shin et al. extends the length of raw encoded vectors. However, the extension aims to keep the lengths of lexical and word embeddings equal. Their extension method also only relies on zero padding and is thus different from the proposed method.
Only sentimental words are considered in the lexicon embedding proposed by Shin et al. On the contrary, sentimental words, POS, and conjunctions are considered in our work.
The Learning Procedure
The algorithmic steps of the entire learning procedure for the proposed INLINEFORM0 -hot lexicon embedding-based two-level LSTM (called INLINEFORM1 Tl-LSTM) are shown in Algorithm 1. In Algorithm 1, T1 refers to the training data that consist of clauses and the labels obtained in the first-stage labeling procedure. T2 refers to the training data that consist of sentences and the labels obtained in the second-stage labeling procedure. The structure of INLINEFORM2 Tl-LSTM is presented in Fig. 6.
INLINEFORM0 Tl-LSTM Input: Training sets T1 and T2; dictionary of key lexical words; POS for each word; dictionary of conjunction words; character/word embeddings for each character/word.
Output: A trained two-level LSTM for sentiment classification.
Steps:
Construct the embedding vector for each character (including punctuation) in the clauses in T1. The embeddings include the character/word and lexicon embeddings of each character/word;
Train the first-level LSTM on the basis of the input embedding vectors and labels of the T1 text clauses;
Run the learned first-level LSTM on each clause of the text samples in T2. Record the predicted score INLINEFORM0 and the final dense vector INLINEFORM1 for each clause;
Construct the embedding vector for each clause in the text samples in T2. Each embedding vector consists of INLINEFORM0 , INLINEFORM1 , and the lexicon embedding of conjunctions of each clause;
Train the second-level LSTM on the basis of the input embedding vectors and labels of the T2 text samples.
The first-level and second-level LSTM networks together constitute the final two-level LSTM.
The proposed two-level LSTM can be applied to texts with arbitrary languages. Word information is required in lexical construction regardless of whether character or word embedding is used. The reason is that the three types of lexicon embeddings are performed at the word level. Therefore, when character embedding is used, the lexicon embedding of each character is the lexicon embedding of the word containing it.
This section shows the evaluation of the proposed methodology in terms of the two-level LSTM network and each part of the lexicon embedding.
We compile three Chinese text corpora from online data for three domains, namely, “hotel", “mobile phone (mobile)", and “travel". All texts are about user reviews. Each text sample collected is first partitioned into clauses according to Chinese tokens. Three clause sets are subsequently obtained from the three text corpora.
The labels “+1", “0.5", and “0" correspond to the three sentiment classes “positive", “neutral", and “negative", respectively. The text data are labeled according to our two-stage labeling strategy.
In the first stage, only one user is invited to label each clause sample as the sentiment orientations for clauses (or sub-sentences) are easy to label.
In the second stage, five users are invited to label each text sample in the three raw data sets. The average score of the five users on each sample is calculated. Samples with average scores located in [0.6, 1] are labeled as “positive". Samples with average scores located in [0, 0.4] are labeled as “negative". Others are labeled as “neutral". The details of the labeling results are shown in Table 1.
All the training and test data and the labels are available online.
In our experiments, the five types of key lexical words introduced in Subsection 3.3.2 are manually constructed. The details of the five types of words are listed in Table 2. The conjunction words are also manually constructed. The number of conjunction words used in the experiments is 169.
In each experimental run, the training set is compiled on the basis of the training data listed in Table 1. The compiling rule is specified before each experimental run. The test data are fixed to facilitate experimental duplication and comparison by other researchers.
In our experiments, three competing algorithms, namely, BOW, CNN, and (conventional) LSTM, are used.
For BOW, term frequency-inverse document frequency is utilized to construct features. Ridge regression BIBREF29 is used as a classifier. For CNN, a three-channel CNN is used. For LSTM, one-layer and two-layer bi-LSTM with attention are adopted, and the results of the network with superior performance are presented. CNN and LSTM are performed on TensorFlow, and default parameter settings are followed.
The key parameters are searched as follows. The embedding dimensions of characters and words are searched in [100, 150, 200, 250, 300]. The parameter INLINEFORM0 in INLINEFORM1 -hot encoding is searched in INLINEFORM2 .
In this subsubsection, each of the three raw data sets (associated with their labels) shown in Table 1 is used. The clause data are not used. In other words, the training data used in this subsubsection are the same as those used in previous studies. For each data corpus, 1000 raw data samples are used as the test data, and the rest are used as the training data. The involved algorithms are detailed as follows.
CNN-C denotes the CNN with (Chinese) character embedding.
CNN-W denotes the CNN with (Chinese) word embedding.
CNN-Lex-C denotes the algorithm proposed by Shin et al. BIBREF24 , which also integrates polar words into the CNN. The (Chinese) character embedding is used.
CNN-Lex-W denotes the algorithm proposed by Shin et al. BIBREF24 , which also integrates polar words into the CNN. The (Chinese) word embedding is used.
Bi-LSTM-C denotes the Bi-LSTM with (Chinese) character embedding.
Bi-LSTM-W denotes the Bi-LSTM with (Chinese) word embedding.
Lex-rule denotes the rule-based approach shown in Fig. 1. This approach is unsupervised.
BOW denotes the conventional algorithm which is based on bag-of-words features.
The accuracies of the above algorithms are listed in Table 3. Overall, Bi-LSTM outperforms CNN and BOW. This conclusion is in accordance with the finding from extensive comparative studies that RNN performs well against CNN in a broad range of natural language processing (NLP) tasks BIBREF30 . In addition, CNN-Lex outperforms CNN under both character and word embeddings, which suggests that lexicon cues are useful in sentiment analysis. Lex-rule achieves the lowest accuracies on all the three data sets. Considering that the performances of (traditional) CNN, Lex-rule, and BOW are relatively poor, they are not included in the remaining experiments.
In this experimental comparison, the proposed two-level LSTM is evaluated, whereas lexicon embedding is not used in the entire network. The primary goal is to test whether the introduced two-stage labeling and the two-level network structure are useful for sentiment analysis.
The raw and clause data listed in Table 1 are used to perform the two-level LSTM. Tl-LSTM denotes the two-level LSTM. “R+C" refers to the mixed data of raw and clause data. The test data are still the 1000 samples used in Section 4.3.1 for each corpus. Table 4 shows the classification accuracies. To ensure that the results differ from those in Table 3, we explicitly add “R+C" after each algorithm in Table 4. In the last line of Table 4, the base results for each corpus in Table 3 are also listed.
On all the three data corpora, the proposed two-level network (without lexicon embedding) with character embedding, Tl-LSTM-C, outperforms all the other involved algorithms. On the travel and the mobile corpora, Tl-LSTM-W outperforms Bi-LSTM-W. The results in Table 4 indicate that the performances of Tl-LSTM on the mixed training and test data (R+C) are better than those of Bi-LSTM. This comparison indicates that the proposed two-level LSTM is effective.
In addition, for the involved algorithms, most results achieved on “R+C" are better than the best results achieved only on “R" listed in Table 3. This comparison suggests that the introduced two-stage labeling is useful.
The results also show that in the two-level LSTM, character embedding is more effective than word embedding.
In this experimental run, lexicon embedding is used in the proposed two-level LSTM or INLINEFORM0 Tl-LSTM. Table 5 shows the results. The optimal parameter INLINEFORM1 is about 11.
The performances of the two-level LSTM with lexicon embedding (i.e., INLINEFORM0 Tl-LSTM) are consistently better than those of the two-level LSTM without lexicon embedding (i.e., Tl-LSTM) listed in Table 5. The improved accuracies of INLINEFORM1 Tl-LSTM over Tl-LSTM on the three data corpora are explicitly listed in Table 6.
The experimental evaluation discussed in Subsection 4.3 verifies the effectiveness of the proposed method, INLINEFORM0 Tl-LSTM. Unlike the conventional RNN, INLINEFORM1 Tl-LSTM contains lexicon embedding that consists of new techniques and components, including INLINEFORM2 -hot encoding, embedding for polar words, embedding for POS, and embedding for conjunctions. Therefore, this subsection evaluates the performances of the involved technique and embeddings separately.
Our INLINEFORM0 -hot encoding differs from one-hot encoding in two aspects. The first aspect is that the nonzero values in one-hot encoding are only equal to 1, whereas the nonzero values in INLINEFORM1 -hot encoding are INLINEFORM2 . The second aspect is that only one element in one-hot encoding is nonzero, whereas n elements in INLINEFORM3 -hot encoding are nonzero.
In this experiment, we test whether INLINEFORM0 -hot encoding is useful in two experimental runs. In the first run, the value of INLINEFORM1 is manually set to 0.5 and 1 without optimization. The parameter INLINEFORM2 in Eq. (6) is set as 15. The classification accuracies vary according to different INLINEFORM3 values on all the three data corpora. When INLINEFORM4 equals 1, the accuracies are the lowest in most cases, as shown in Fig. 7.
The results shown in Fig. 7 indicate that the value of INLINEFORM0 does affect the performance of the entire network. Consequently, the classical one-hot encoding, which fixes the value of nonzero elements as 1, is ineffective. In our experiments, the learned value of INLINEFORM1 is approximately 0.4.
In the second run, the performances under different INLINEFORM0 (i.e., 1, 5, 10, 15) are tested. Table 7 shows the comparison results. The value of INLINEFORM1 does affect the performance of the entire network, thereby indicating that the introduction of the INLINEFORM2 -duplicated strategy in encoding is effective. In the experiments, when INLINEFORM3 is increasing, the accuracies first increase and then decrease. The main reason may lie in the fact that when INLINEFORM4 becomes large, the proportion of lexicon embedding becomes large accordingly. An over-length input feature vector may incur “curse of dimensionality" and thus weaken the performance of the proposed two-level network.
In this experimental run, we test whether the labeled polar (negative and positive) words do affect the performance of the entire method when they are used in lexicon embedding. To this end, we order the polar words according to their frequencies in the training data. The top 0%, 50%, and 100% of the polar words are used. The corresponding classification accuracies are depicted in Fig. 8.
In most cases, the accuracies are the lowest when no polar words are used in the lexicon embedding. When all polar words are used, the proposed network achieves the highest accuracies.
In the experiment, only one user is invited to manually compile the dictionary for a data corpus. One and a half hours are needed for each data corpus. In our view, manually compiling the polar words for sentiment analysis is worthwhile considering the performance improvement relative to the time consumed.
In this experimental run, we test whether POS cues do play positive roles in the entire model. To this end, we remove POS in the lexicon embedding of the proposed method. The results are shown in Fig. 9.
In most cases, the accuracies with POS embedding are greater than those without POS embedding, thereby indicating that the application of POS to lexicon embedding is useful.
In this experimental run, we test whether conjunction cues do play positive roles in the entire model. To this end, the lexicon embedding for conjunction words is also removed from the proposed method. The results are shown in Fig. 10.
The algorithm with conjunction embedding outperforms that without conjunction embedding consistently, thereby indicating that the application of conjunction to lexicon embedding is useful.
High-quality labels are crucial for learning systems. Nevertheless, texts with mixed sentiments are difficult for humans to label in text sentiment classification. In this study, a new labeling strategy is introduced to partition texts into those with pure and mixed sentiment orientations. These two categories of texts are labeled using different processes. A two-level network is accordingly proposed to utilize the two labeled data in our two-stage labeling strategy. Lexical cues (e.g., polar words, POS, conjunction words) are particularly useful in sentiment analysis. These lexical cues are used in our two-level network, and a new encoding strategy, that is, INLINEFORM0 -hot encoding, is introduced. INLINEFORM1 -hot encoding is motivated by one-hot encoding. However, the former alleviates the drawbacks of the latter. Three Chinese sentiment text data corpora are compiled to verify the effectiveness of the proposed methodology. Our proposed method achieves the highest accuracies on these three data corpora.
The proposed two-level network and lexicon embedding can also be applied to other types of deep neural networks. In our future work, we will extend our main idea into several networks and text mining applications.
The authors wish to thank Zefeng Han, Qing Yin, Lei Yang, Xiaonan Wang, Nan Chen, Rujing Yao, Lihong Guo, Pinglong Zhao for the labeling of the experimental data. | They use a two-stage labeling strategy where in the first stage single annotators label a large number of short texts with relatively pure sentiment orientations and in the second stage multiple annotators label few text samples with mixed sentiment orientations |
dbf606cb6fc1d070418cc25e38ae57bbbb7087a0 | dbf606cb6fc1d070418cc25e38ae57bbbb7087a0_0 | Q: Which future direction in NLG are discussed?
Text: Introduction
Unsupervised pre-training has sparked intense research interest in the natural language processing (NLP) community. This technology provides a promising way to exploit linguistic information from large-scale unlabelled textual data, which can serve as auxiliary prior knowledge to benefit a wide range of NLP applications. In the literature, language modeling (LM) is a prevalent task for pre-training, where the target words are predicted conditioned on a given context. Therefore, it is intuitive to employ the pre-trained LMs for natural language generation (NLG), as the pre-training objective naturally accords with the goal of NLG. However, revolutionary improvements are only observed in the field of natural language understanding (NLU).
The primary factor that impedes the progress of unsupervised pre-training in NLG is the idiosyncratic nature of text generation: Basically, we do not write words from scratch, but instead based on particular context, e.g., the source language sentences for translation, the dialog histories for response generation, and the visual scenes for image captioning, among others. In unsupervised pre-training, the task-specific context is not available, which leads to a discrepancy between pre-training and training in the target task. More precisely, the challenges posed by the discrepancy can be reflected in two aspects: First, the diverse context makes it intractable to design a universal representation extractor as in the case of NLU, and the pre-trained language generators may have to modify their inner structures to deal with the task-specific context. Second, the mismatch in data distribution and objective between the two training stages might result in the performance on the pre-training tasks being compromised during fine-tuning, which is dubbed the catastrophic forgetting problem BIBREF0.
In response to the above challenges, two lines of work are proposed by resorting to architecture-based and strategy-based solutions, respectively. Architecture-based methods either try to induce task-specific architecture during pre-training (task-specific methods), or aim at building a general pre-training architecture to fit all downstream tasks (task-agnostic methods). Strategy-based methods depart from the pre-training stage, seeking to take advantage of the pre-trained models during the process of target task learning. The approaches include fine-tuning schedules that elaborately design the control of learning rates for optimization, proxy tasks that leverage labeled data to help the pre-trained model better fit the target data distribution, and knowledge distillation approaches that ditch the paradigm of initialization with pre-trained parameters by adopting the pre-trained model as a teacher network.
The remainder of this review is organized as follows: In Section SECREF2, we will introduce the background knowledge about unsupervised pre-training for NLU, followed by a sketch of how the pre-trained models are employed through parameter initialization for NLG in Section SECREF3. In Section SECREF4, we will describe the architecture-based methods, and the strategy-based methods are presented in Section SECREF5. Section SECREF6 provides some in-depth discussions, and Section SECREF7 concludes this review.
Background: Unsupervised Pre-training for NLU
Learning fine-grained language representations is a perennial topic in natural language understanding. In retrospect, compelling evidence suggests that good representations can be learned through unsupervised pre-training.
Early work focused on word-level representations BIBREF1, BIBREF2, which encode each word independently. For sentence-level representations, there are roughly two kinds of pre-training objectives, namely discriminative pre-training and generative pre-training. Discriminative pre-training distinguishes context sentence(s) for a given sentence from non-context sentence(s) BIBREF3, BIBREF4, with the aim of capturing inter-sentence relationships. Generative pre-training follows the language model paradigm:

$\max _{\theta } \; \sum _{t=1}^{T} \log P(x_{t} \mid C; \theta )$
where $x_{t}$ is the $t^{th}$ word in the textual sequence to generate, $T$ indicates sequence length, $\theta $ stands for learnable parameters, and $C$ is the context information, which is defined by the pre-training objective. ELMo BIBREF5 and GPT (short for Generative Pre-training) BIBREF6 adopt bi-directional LSTM BIBREF8 and uni-directional Transformer BIBREF7 language models, respectively. In this case, the context is defined as $x_{1:t-1}$ or $x_{t+1:T}$. BERT BIBREF3 is trained with a novel masked language model (MLM), which is a non-autoregressive form of generation. Specifically, MLM randomly replaces a fixed proportion of tokens in each sentence with a special [MASK] token or a random token, which results in a corrupted sentence $X_{\text{mask}}$, and predicts each replaced token based on the same context $X_{\text{mask}}$. To alleviate the inconsistency with target tasks caused by the introduction of the [MASK] token, XLNet BIBREF9 introduces a permutation-based language model, which conducts autoregressive language modeling over all possible permutations of the original word sequence. This gives rise to a context $C=X_{\mathbf {z}_{1:t-1}}$, where $\mathbf {z}$ is a certain permutation of $[1,2, \ldots , T]$, according to the definitions in BIBREF9. BIBREF10 and BIBREF11 pre-trained an encoder-decoder framework to reconstruct the input sentence and the surrounding sentence, respectively, and the encoded input sentence is thereby included in the context $C$.
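As a concrete illustration of the MLM corruption step, the following minimal Python sketch applies BERT's published 80/10/10 masking recipe to a list of tokens; the vocabulary, masking rate and token strings are illustrative placeholders rather than the settings of any particular system.

```python
import random

MASK = "[MASK]"
VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "ran"]  # placeholder vocabulary

def corrupt_for_mlm(tokens, mask_prob=0.15, seed=None):
    """Return (corrupted_tokens, targets); targets is None at unmasked positions."""
    rng = random.Random(seed)
    corrupted, targets = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            targets.append(tok)                      # the original token is the prediction target
            r = rng.random()
            if r < 0.8:
                corrupted.append(MASK)               # 80%: replace with [MASK]
            elif r < 0.9:
                corrupted.append(rng.choice(VOCAB))  # 10%: replace with a random token
            else:
                corrupted.append(tok)                # 10%: keep the original token
        else:
            corrupted.append(tok)
            targets.append(None)
    return corrupted, targets

print(corrupt_for_mlm("the cat sat on the mat".split(), seed=0))
```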
The sentence representations learned by LMs can be used to perform many NLU tasks by adding a simple linear classifier. Despite being trained with a language modeling objective, the pre-trained representations have successfully pushed the state-of-the-art on multiple benchmarks.
Unsupervised Pre-training and Parameter Initialization for NLG
NLG systems are usually built with an encoder-decoder framework, where the encoder reads the context information and the decoder generates the target text from the encoded vectorial representations. A direct way to utilize the pre-trained models is to initialize part of the encoder (when dealing with textual context) and/or the decoder with pre-trained parameters. For the encoder, pre-training is expected to provide better sentence representations, as we discussed in Section SECREF2. For the decoder, the intuition is to endow the model with some rudimentary ability for text generation.
BIBREF12 employed BERT as the encoder for abstractive text summarization, with some additional techniques to help integrate the BERT-initialized encoder with the randomly initialized decoder, which we will explicate in Section SECREF12. GPT-2 BIBREF13 inherited the left-to-right LM pre-training objective from GPT and extended the application to NLG, where the pre-trained LM directly serves as the language generator, with some special symbols to identify task-specific contexts. In the case of zero-shot task transfer, preliminary experiments showed that straightforward adaptation of GPT-2 compares unfavorably with other unsupervised baselines.
BIBREF14 is among the first attempts to investigate unsupervised pre-training for sequence to sequence (Seq2Seq) learning. They used pre-trained LSTM-based LMs to initialize the first layer of the encoder and the decoder, which act as representation extractors. An additional LSTM layer, which is randomly initialized, is then added on top of the pre-trained LMs to build the Seq2Seq framework. To make use of the text generation ability of LMs, the output softmax layer of the decoder LM is also retained. Some recent endeavours BIBREF15, BIBREF16 explored multiple combinations of GPT- and BERT-based models to initialize the encoder and the decoder, respectively. Although remarkable results are observed, the separately pre-trained LMs are still inconsistent with the Seq2Seq framework.
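The following PyTorch sketch illustrates the general initialization recipe with toy modules; the module names and sizes are hypothetical, and loading the whole LM (rather than only its first layer, as in BIBREF14) is a simplification for illustration.

```python
import torch.nn as nn

# Toy stand-ins for a pre-trained LM and a Seq2Seq model (sizes are hypothetical).
class TinyLM(nn.Module):
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)        # the LM softmax layer

class TinySeq2Seq(nn.Module):
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.encoder = TinyLM(vocab, dim)       # to be initialized from the pre-trained LM
        self.decoder = TinyLM(vocab, dim)       # its output layer is retained for generation
        self.bridge = nn.Linear(dim, dim)       # task-specific layer, stays randomly initialized

pretrained_lm = TinyLM()                        # pretend this was trained on unlabelled text
model = TinySeq2Seq()
# Copy every parameter that exists in both modules; anything else keeps its random init.
model.encoder.load_state_dict(pretrained_lm.state_dict(), strict=False)
model.decoder.load_state_dict(pretrained_lm.state_dict(), strict=False)
```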
Architecture-based Methods ::: Inducing Task-Specific Architecture in Pre-training
Separately initializing the encoder and the decoder with LMs neglects the interaction between the two modules at the pre-training stage, which is sub-optimal. For NLG tasks that can be modeled as Seq2Seq learning, it is feasible to jointly pre-train the encoder and the decoder. Existing approaches to this end can be categorized into three variants: denoising autoencoders (DAEs), conditional masked language models (CMLMs) and sequence to sequence language models (Seq2Seq LMs).
Architecture-based Methods ::: Inducing Task-Specific Architecture in Pre-training ::: Denoising Autoencoder
An intuitive way to conduct unsupervised Seq2Seq learning is to train an autoencoder (AE) based on the encoder-decoder framework. Different from AEs, DAEs take a corrupted sentence as input and reconstruct the original sentence. The advantage is that the corrupted input forces the decoder to extract relevant information from the source side for text generation. To obtain the corrupted sentence, BIBREF17 designed three noising functions: shuffle, delete and replace (the left plot of Figure FIGREF4 gives an illustration), each of which is controlled by a pre-defined probability distribution. To be more specific, each token in the raw sequence is assigned a new index based on a Gaussian distribution $N(0, \sigma )$; the delete and replace operations on a token are determined by a Bernoulli distribution $B(p)$ with a Beta distribution as prior. The three functions are applied to the raw sequences in random order.
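A minimal sketch of the three noising functions is given below; it replaces the Beta priors with fixed Bernoulli probabilities and uses a placeholder replacement vocabulary, so it should be read as an illustration of the idea rather than the exact corruption procedure of BIBREF17.

```python
import random

def noise(tokens, sigma=1.0, p_delete=0.1, p_replace=0.1, vocab=("<unk>",), seed=None):
    """Apply shuffle, delete and replace corruptions to a token list, in random order."""
    rng = random.Random(seed)

    def shuffle(seq):
        # Re-rank each token by its index plus Gaussian noise N(0, sigma).
        keys = [i + rng.gauss(0.0, sigma) for i in range(len(seq))]
        return [tok for _, tok in sorted(zip(keys, seq))]

    def delete(seq):
        return [tok for tok in seq if rng.random() >= p_delete]

    def replace(seq):
        return [rng.choice(vocab) if rng.random() < p_replace else tok for tok in seq]

    ops = [shuffle, delete, replace]
    rng.shuffle(ops)                      # the three functions are applied in random order
    for op in ops:
        tokens = op(tokens)
    return tokens

print(noise("unsupervised pre-training for text generation".split(), seed=0))
```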
Architecture-based Methods ::: Inducing Task-Specific Architecture in Pre-training ::: Conditional Masked Language Model
CMLM BIBREF18 extends the single-model MLM proposed by BIBREF3 to the encoder-decoder setting, where the masked text sequence is read by the encoder, and the decoder only reconstructs the masked tokens, in contrast to the entire sequence in DAEs. As the middle plot of Figure FIGREF4 shows, CMLM masks consecutive tokens, and the unmasked tokens on the encoder side are masked when being fed to the decoder. Following the notations in BIBREF18, let us assume that the tokens with indices from $u$ to $v$ are masked from the raw sentence $X$, which results in $X^{\backslash u: v}$, and $X^{u: v}$ denotes the decoder input. Then, when predicting each masked token $x_{t}$ ($u \le t \le v$), the context is $X^{u: v}_{<t}$ and $X^{\backslash u: v}$. The underlying motivation, as BIBREF18 argued, is to force the encoder to understand the meaning of the unmasked tokens, which is achieved by the encoder-side masks, and to encourage the decoder to refer to the source information rather than the leftward target tokens, which is achieved by the decoder-side masks.
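The input construction can be sketched as follows; this hypothetical helper only builds the encoder and decoder inputs for a single masked span and ignores special tokens and the autoregressive shift that the actual CMLM training would apply.

```python
MASK = "[MASK]"

def cmlm_inputs(tokens, u, v):
    """Build encoder/decoder inputs for a masked span tokens[u:v+1] (0-indexed, inclusive)."""
    masked_span = tokens[u:v + 1]
    # Encoder reads the sentence with the span masked out: X^{\u:v}
    encoder_input = tokens[:u] + [MASK] * len(masked_span) + tokens[v + 1:]
    # Decoder sees only the span; everything outside it is masked: X^{u:v}
    decoder_input = [MASK] * u + masked_span + [MASK] * (len(tokens) - v - 1)
    targets = masked_span            # the decoder only reconstructs the masked tokens
    return encoder_input, decoder_input, targets

enc, dec, tgt = cmlm_inputs("we study unsupervised pre-training for generation".split(), 2, 4)
print(enc, dec, tgt, sep="\n")
```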
Architecture-based Methods ::: Inducing Task-Specific Architecture in Pre-training ::: Sequence to Sequence Language Model
Seq2Seq LM BIBREF19 performs Seq2Seq modeling using a single Transformer model, with the concatenation of the source sentence and the target sentence as input. To simulate Seq2Seq learning with encoder-decoder frameworks, the attention span of each target token is constrained to the source tokens and the leftward target tokens, which is achieved by self-attention masks (see the right plot of Figure FIGREF4). In this way, the ability to extract language representations and the ability to generate text are integrated into a single model. It is worth mentioning that Seq2Seq LM does not auto-regressively generate the target sentence, but instead predicts masked tokens based on the contexts controlled by the self-attention masks. In other words, Seq2Seq LM still belongs to the family of MLMs. Apart from Seq2Seq LM, BIBREF19 also explored uni-directional LM and bi-directional LM structures to perform the MLM-based cloze task, and incorporated the three kinds of LMs to build the final pre-training objective.
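One possible construction of such a self-attention mask is sketched below (True means the row position may attend to the column position); the handling of special tokens in BIBREF19 is omitted, so this is an illustration of the masking pattern rather than a faithful reimplementation.

```python
import torch

def seq2seq_lm_mask(src_len, tgt_len):
    """Boolean mask over the concatenated [source; target] sequence (True = may attend)."""
    total = src_len + tgt_len
    mask = torch.zeros(total, total, dtype=torch.bool)
    mask[:, :src_len] = True                          # every position sees all source tokens
    causal = torch.ones(tgt_len, tgt_len).tril().bool()
    mask[src_len:, src_len:] = causal                 # target positions see leftward targets only
    mask[:src_len, src_len:] = False                  # source positions never see the target
    return mask

print(seq2seq_lm_mask(3, 4).int())
```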
Architecture-based Methods ::: Encoder-Agnostic Architectures for Adaptation
Although the Seq2Seq-based pre-training methods exhibit strong performance, they are confined to text-to-text generation. In order to encompass more diverse contexts, some research efforts began to investigate encoder-agnostic pre-training architectures BIBREF22, BIBREF23. Context Attention and Pseudo Self-Attention are two typical variants presented by BIBREF23, which differ in the way that the task-specific context is injected (see Figure FIGREF11). Context Attention takes the form of a standard Transformer decoder, with the layer that attends to the encoder outputs being randomly initialized. Pseudo Self-Attention considers the context vectors and the previous-layer decoder outputs as an integral input, and the attended results are computed as follows:
where $C \in \mathbb {R}^{|C| \times d_{c}}$ and $Y \in \mathbb {R}^{|Y| \times d_{y}}$ are the context vectors and the representations of the target textual sequence, respectively. The linear transformation matrices $W^{c}_{k}, W^{c}_{v} \in \mathbb {R}^{d_{c} \times d_{model}}$ with respect to $C$ are added to project the context into the self-attention space, and $W_{q}, W^{y}_{k}, W^{y}_{v} \in \mathbb {R}^{d_{y} \times d_{model}}$ are part of the pre-trained model.
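The following single-head PyTorch sketch conveys the key idea of treating the projected context as extra keys and values inside self-attention; the scaling factor, the omission of multi-head projections and of the causal mask within the target are simplifications, and this should not be taken as the exact formulation of BIBREF23.

```python
import torch

def pseudo_self_attention(C, Y, W_ck, W_cv, W_q, W_yk, W_yv):
    """Single-head pseudo self-attention: context keys/values are prepended to the
    target's own keys/values, so the context is handled by the self-attention block."""
    q = Y @ W_q                                   # queries come from the target side only
    k = torch.cat([C @ W_ck, Y @ W_yk], dim=0)    # keys:   [context; target]
    v = torch.cat([C @ W_cv, Y @ W_yv], dim=0)    # values: [context; target]
    scores = q @ k.t() / k.size(-1) ** 0.5        # standard scaled dot-product (assumed)
    return torch.softmax(scores, dim=-1) @ v

d_c, d_y, d_model = 16, 24, 32
C, Y = torch.randn(5, d_c), torch.randn(7, d_y)   # |C| = 5 context vectors, |Y| = 7 target states
W_ck, W_cv = torch.randn(d_c, d_model), torch.randn(d_c, d_model)   # newly added projections
W_q, W_yk, W_yv = (torch.randn(d_y, d_model) for _ in range(3))     # from the pre-trained model
out = pseudo_self_attention(C, Y, W_ck, W_cv, W_q, W_yk, W_yv)
print(out.shape)
```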
Apart from the performance on target tasks, an alternative metric to gauge the quality of encoder-agnostic architectures is the degree to which the pre-trained parameters have to change in order to inject the task-specific context. BIBREF23 compared the parameter changes of Context Attention and Pseudo Self-Attention in the feed-forward layer, and discovered that Pseudo Self-Attention is more robust under this evaluation.
Strategy-based Methods ::: Fine-tuning Schedules for Adaptation
When the pre-trained model is only a part of the target task system, fine-tuning requires joint learning of components initialized in different fashions, which can make the training process unstable. The pre-trained model may also suffer from an aggravated catastrophic forgetting problem, as it has to coordinate with other components during fine-tuning BIBREF24, BIBREF25. From the perspective of optimization, it is unreasonable to schedule the pre-trained components and the newly-introduced components with the same learning rate, considering that the former already possess some unique knowledge. A common assumption is that the pre-trained parameters should be updated at a slower learning rate and with smoother decay BIBREF12, BIBREF25. The rationale behind such a setting is that fine-tuning with a more accurate gradient can prevent the pre-trained parameters from deviating too far away from the original point, while the newly-introduced components need to quickly converge to the target parameter space. To this end, BIBREF12 adopted two Adam optimizers with different learning rates for the pre-trained encoder and the randomly initialized decoder. The learning rates are scheduled as in BIBREF7 with different warm-up steps:

$lr_{\operatorname{Enc/Dec}} = \tilde{lr}_{\operatorname{Enc/Dec}} \cdot \min \big ( step^{-0.5},\; step \cdot {warmup}_{\operatorname{Enc/Dec}}^{-1.5} \big )$
where ${warmup}_{\operatorname{Enc/Dec}}$ and $\tilde{lr}_{\operatorname{Enc/Dec}}$ determine the speed of the learning rate changes and the maximum learning rates, respectively.
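In PyTorch, such a two-optimizer schedule can be sketched as follows; the models, learning rates and warm-up steps are illustrative placeholders, not the values used by BIBREF12.

```python
import torch
import torch.nn as nn

model_enc = nn.Linear(8, 8)     # stands in for the pre-trained encoder
model_dec = nn.Linear(8, 8)     # stands in for the randomly initialized decoder

def noam_lambda(warmup):
    # lr multiplier ~ min(step^-0.5, step * warmup^-1.5), as in the Transformer schedule
    return lambda step: min((step + 1) ** -0.5, (step + 1) * warmup ** -1.5)

opt_enc = torch.optim.Adam(model_enc.parameters(), lr=2e-3)   # slower, longer warm-up
opt_dec = torch.optim.Adam(model_dec.parameters(), lr=1e-1)   # faster, shorter warm-up
sched_enc = torch.optim.lr_scheduler.LambdaLR(opt_enc, noam_lambda(warmup=20000))
sched_dec = torch.optim.lr_scheduler.LambdaLR(opt_dec, noam_lambda(warmup=10000))

for step in range(3):              # inside the real training loop, after computing gradients:
    opt_enc.step(); opt_dec.step()
    sched_enc.step(); sched_dec.step()
```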
Strategy-based Methods ::: Proxy Tasks for Adaptation
Large-scale unlabelled data provides generic linguistic knowledge, but the target tasks have unique data distributions and objectives. An effective way to bridge this gap is to introduce proxy tasks that make moderate changes to the pre-training objectives while taking the labeled data into account BIBREF15, BIBREF20. Translation Language Modeling (TLM) BIBREF15 is a special generalization of MLM in the cross-lingual situation. It leverages parallel machine translation corpora for further training of the LMs that are pre-trained on monolingual corpora. Specifically, the source language sentence and the corresponding target language sentence are fed to the model in parallel, with random tokens from each language being masked to perform the cloze-style prediction as in MLM. Different from monolingual MLM, TLM encourages word predictions to rely on the interdependence between the two languages, so the sentence representations learned from separate languages can be well aligned.
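A minimal sketch of the TLM input construction is given below; the separator token and masking probability are illustrative choices, and details such as language and position embeddings are omitted.

```python
import random

def tlm_example(src_tokens, tgt_tokens, mask_prob=0.15, seed=None):
    """Concatenate a translation pair and mask random tokens on both sides (TLM-style)."""
    rng = random.Random(seed)
    pair = src_tokens + ["[/s]"] + tgt_tokens       # "[/s]" is an illustrative separator
    corrupted, targets = [], []
    for tok in pair:
        if tok != "[/s]" and rng.random() < mask_prob:
            corrupted.append("[MASK]")
            targets.append(tok)                     # predicted using context from both languages
        else:
            corrupted.append(tok)
            targets.append(None)
    return corrupted, targets

print(tlm_example("we like music".split(), "nous aimons la musique".split(), seed=1))
```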
For some particular NLG tasks, existing proxy tasks designed under the supervised setup can also work with unsupervised pre-training models. For instance, in neural text summarization, the combination of extractive and abstractive objectives can generate better summaries BIBREF26, BIBREF27. Inspired by this, BIBREF12 introduced extractive summarization as a proxy task to fine-tune the pre-trained BERT, before adopting it as the abstractive summarization encoder. Compared with the original BERT features, the representations learned from extractive summarization contain more task-specific information, therefore conveying the meaning of source texts better.
Strategy-based Methods ::: Knowledge Distillation for Adaptation
The aforementioned methods are diverse in implementation, but share the common idea of employing the pre-trained models through parameter initialization. An alternative way to exploit the pre-trained models is using the knowledge distillation technique BIBREF28. Knowledge distillation is a special form of training, where a student network learns from the supervision signals produced by a teacher network.
Taking BERT as an example, the pre-trained MLM contains global information, which can teach the autoregressive Seq2Seq models to “see from the future” BIBREF20. In practice, the probability distribution predicted by BERT is regarded as a soft label to compute a cross-entropy loss function:
where $X$, $Y$ and $Y^{masked}$ are the source sequence, the raw target sequence and the masked target sequence, respectively. $\mathcal {V}$ denotes the output vocabulary. $\theta $ indicates the parameters of the student network (Seq2Seq), which are learnable, and $\phi $ indicates the BERT parameters, which are fixed. In this way, the knowledge from unsupervised pre-training can be flexibly transferred to the target tasks, dispensing with the size and architecture limitations.
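The soft-label loss can be sketched in PyTorch as follows; the teacher logits are assumed to come from a frozen BERT run on the masked target, and the reduction over positions is an illustrative choice rather than the exact formulation of BIBREF20.

```python
import torch
import torch.nn.functional as F

def bert_kd_loss(student_logits, teacher_logits):
    """Cross-entropy between the fixed teacher distribution and the student prediction,
    summed over the vocabulary and averaged over target positions."""
    with torch.no_grad():
        soft_labels = F.softmax(teacher_logits, dim=-1)    # BERT's predictions, kept fixed
    log_probs = F.log_softmax(student_logits, dim=-1)      # Seq2Seq decoder predictions
    return -(soft_labels * log_probs).sum(dim=-1).mean()

T, V = 6, 100                                  # target length and vocabulary size (illustrative)
student_logits = torch.randn(T, V, requires_grad=True)
teacher_logits = torch.randn(T, V)             # would come from BERT applied to Y^{masked} and X
loss = bert_kd_loss(student_logits, teacher_logits)
loss.backward()
```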
The supervision can also be derived from the hidden representations BIBREF25, with a mean-squared-error (MSE) distillation loss:

$\mathcal {L}_{\mathrm {mse}} = \big \Vert h_{m}^{\mathrm {teacher}} - h_{n}^{\mathrm {student}} \big \Vert _{2}^{2}$
where $m$ and $n$ are hyper-parameters denoting the layer subscripts. Compared with the probability soft labels, the representation distillation method requires the Seq2Seq model to have the same hidden size as BERT, which is a stricter constraint.
Combining the knowledge distillation loss and the standard generative loss for Seq2Seq learning gives rise to the final objective to optimize:

$\mathcal {L} = \alpha \, \mathcal {L}_{\mathrm {distill}} + (1 - \alpha ) \, \mathcal {L}_{\mathrm {gen}}$
where $\alpha $ is the weighting term that balances the contribution of the two kinds of loss functions.
Discussions ::: The Relationship between Architecture- and Strategy-based Methods
We have analysed two major challenges faced by the application of unsupervised pre-training to NLG (see Section SECREF1). On this basis, we introduced existing methodologies from the architecture and strategy perspectives. The architecture-based methods are mainly proposed in response to the first challenge. Since the architecture of the pre-trained model has a significant effect on the downstream task (when the pre-trained parameters are used for initialization), architectural choices have to be planned in advance to narrow the discrepancy between pre-training and training on the target tasks. This motivation has shown great effectiveness on the Seq2Seq framework BIBREF17, BIBREF18, BIBREF19. The strategy-based methods focus on the second challenge. They take a post-processing point of view, with the aim to make the best of the pre-trained model at the target-task training stage. It is noteworthy that the challenges are not inherently independent, and the two types of methods can actually complement each other. For example, the fine-tuning schedules can alleviate the negative effects caused by the modification of pre-trained structures, and the catastrophic forgetting problem can also be addressed by devising a general task-agnostic architecture.
Discussions ::: Experimental Phenomena
Existing research on unsupervised pre-training for NLG is conducted on various tasks for different purposes. Probing into the assorted empirical results may help us discover some interesting phenomena:
The advantage of pre-training gradually diminishes with the increase of labeled data BIBREF14, BIBREF17, BIBREF18.
Fixed representations yield better results than fine-tuning in some cases BIBREF24.
Overall, pre-training the Seq2Seq encoder outperforms pre-training the decoder BIBREF24, BIBREF17, BIBREF15, BIBREF16.
The first two phenomena attest to the catastrophic forgetting theory. Thanks to the access to large-scale unlabeled corpora, unsupervised pre-training is able to excel in zero/low-shot settings, while the pre-trained models achieve only marginal gains when abundant labeled data is available. This can be explained by the high quality of the dataset and the capacity of the task-specific models, which leave little room for improvement. Nonetheless, the increased supervision from labeled data can also degrade the performance on the pre-training tasks. By fixing the pre-trained parameters, the learned representations are not affected by the numerous iterations of training on the target task, which makes them work better without fine-tuning.
The third phenomenon is somewhat counter-intuitive, as the generative pre-training objectives are more similar to the decoder's function. There is no unanimous theory to explain why the encoder is the more important element to pre-train. But this discovery suggests that the pre-trained LMs are more robust when acting as representation extractors, while they are more sensitive to the change of context when acting as conditional language generators.
Discussions ::: Future Directions
The diversity of NLG applications poses challenges on the employment of unsupervised pre-training, yet it also raises more scientific questions for us to explore. In terms of the future development of this technology, we emphasize the importance of answering four questions: 1) How to introduce unsupervised pre-training into NLG tasks with cross-modal context? 2) How to design a generic pre-training algorithm to fit a wide range of NLG tasks? 3) How to reduce the computing resources required for large-scale pre-training? 4) What aspect of knowledge do the pre-trained models provide for better language generation?
NLG tasks can be defined by the context features and mapping functions. The introduction of cross-lingual textual features BIBREF15 and task-specific Seq2Seq architectures BIBREF18, BIBREF17, BIBREF19 in the pre-training stage has successfully boosted the performance on text-to-text generation. For NLG tasks concerning multiple modalities, it is conceivable that pre-training methods could also benefit from the joint consideration of cross-modal features. For example, in the vision-and-language field, the learning of cross-modal representations has proven to be highly effective BIBREF29, BIBREF30, but such representations can not yet be extracted from unpaired images and texts for image-grounded text generation, to the best of our knowledge.
In NLU, it is possible to pre-train one model to obtain language representations once and for all. As for NLG, a task-agnostic pre-training algorithm should transcend the purpose of representation learning and consider the general ability for language generation. The notion of “encoder-agnostic adaptation” BIBREF23 makes a preliminary step towards this direction, but still falls far short of the performance of its NLU counterparts BIBREF5, BIBREF3, BIBREF6, BIBREF9.
Due to the colossal scale of the pre-training corpora, a large number of parameters is essential to achieve favorable performance. As a result, pre-training for NLG systems usually requires at least 8 GPU cards BIBREF19, BIBREF18, BIBREF15, and the resulting model size also hinders real-world applications. To alleviate the memory consumption problem, existing work resorted to knowledge distillation, which transfers the knowledge from a large teacher network to a small student network BIBREF31, BIBREF32, or to parameter reduction techniques that prune the model size in a more direct way BIBREF33. However, the research context is limited to NLU scenarios, and similar endeavours are necessary for NLG applications.
Another important branch of research on unsupervised pre-training in NLP tries to explain what kind of knowledge can be learned from pre-training. Related work has been done on the basis of both language understanding BIBREF34, BIBREF35 and generation BIBREF36. Specifically, BIBREF36 analysed the characteristics of texts generated from a pre-trained GPT-2 by evaluating them over a wide spectrum of metrics. We argue that a deeper understanding of the way in which unsupervised pre-training contributes to better text generation, and of the intrinsic mechanisms of the pre-trained models, is also crucial to future work.
Conclusion
Unsupervised pre-training has defined the state of the art on a variety of NLP tasks. However, in the field of NLG, the diversity of context information is still impeding the application of unsupervised pre-training. The major challenges lie in designing model architectures to cater for the assorted context, and in retaining the general knowledge learned from pre-training. In this review, we survey the recent unsupervised methods that utilize large-scale corpora for NLG purposes, with a highlight on those aiming at facilitating the integration of pre-trained models with downstream tasks. We propose to classify them into architecture- and strategy-based methods, followed by detailed introductions and discussions of their pros and cons. Based on the comparison of these methods and analyses of some informative experimental results from previous publications, we summarize some scientific questions that have not yet been well understood, and suggest that future work pay attention to these questions. | 1) How to introduce unsupervised pre-training into NLG tasks with cross-modal context?, 2) How to design a generic pre-training algorithm to fit a wide range of NLG tasks?, 3) How to reduce the computing resources required for large-scale pre-training?, 4) What aspect of knowledge do the pre-trained models provide for better language generation? |
9651fbd887439bf12590244c75e714f15f50f73d | 9651fbd887439bf12590244c75e714f15f50f73d_0 | Q: What experimental phenomena are presented?
Text: (identical to the preceding article) | The advantage of pre-training gradually diminishes with the increase of labeled data, Fixed representations yield better results than fine-tuning in some cases, pre-training the Seq2Seq encoder outperforms pre-training the decoder |
1fd969f53bc714d9b5e6604a7780cbd6b12fd616 | 1fd969f53bc714d9b5e6604a7780cbd6b12fd616_0 | Q: How do strategy-based methods handle obstacles in NLG?
Text: Introduction
Unsupervised pre-training has sparked a sensational research interest in the natural language processing (NLP) community. This technology provides a promising way to exploit linguistic information from large-scale unlabelled textual data, which can serve as an auxiliary prior knowledge to benefit a wide range of NLP applications. In the literature, language modeling (LM) is a prevalent task for pre-training, where the target words are predicted conditioned on a given context. Therefore, it is intuitive to employ the pre-trained LMs for natural language generation, as the pre-training objective naturally accords with the goal of NLG. However, revolutionary improvements are only observed in the field of NLU.
The primary factor that impedes the progress of unsupervised pre-training in NLG is an idiosyncratic nature of text generation: Basically, we do not write words from scratch, but instead based on particular context, e.g., the source language sentences for translation, the dialog histories for response generation, and the visual scenes for image captioning, among others. In unsupervised pre-training, the task-specific context is not available, which leads to a discrepancy between pre-training and training in the target task. More precisely, the challenges posed by the discrepancy can be reflected in two aspects: First, the diverse context makes it intractable to design a universal representation extractor as in the case of NLU, and the pre-trained language generators may have to modify their inner structures to deal with the task-specific context. Second, the mismatch in data distribution and objective between the two training stages might result in the performance on the pre-training tasks being compromised during fine-tuning, which is dubbed as the catastrophic forgetting problem BIBREF0.
In response to the above challenges, two lines of work are proposed by resorting to architecture-based and strategy-based solutions, respectively. Architecture-based methods either try to induce task-specific architecture during pre-training (task-specific methods), or aim at building a general pre-training architecture to fit all downstream tasks (task-agnostic methods). Strategy-based methods depart from the pre-training stage, seeking to take advantage of the pre-trained models during the process of target task learning. The approaches include fine-tuning schedules that elaborately design the control of learning rates for optimization, proxy tasks that leverage labeled data to help the pre-trained model better fit the target data distribution, and knowledge distillation approaches that ditch the paradigm of initialization with pre-trained parameters by adopting the pre-trained model as a teacher network.
The remainder of this review is organized as follows: In Section SECREF2, we will introduce the background knowledge about unsupervised pre-training for NLU, followed by a sketch of how the pre-trained models are employed through parameter initialization for NLG in Section SECREF3. In Section SECREF4, we will describe the architecture-based methods, and the strategy-based methods are presented in Section SECREF5. Section SECREF6 provides some in-depth discussions, and Section SECREF7 concludes this review.
Background: Unsupervised Pre-training for NLU
Learning fine-grained language representations is a perennial topic in natural language understanding. In restrospect, compelling evidences suggest that good representations can be learned through unsupervised pre-training.
Early work focused on word-level representations BIBREF1, BIBREF2, which encodes each word independently. For sentence-level representations, there are roughly two kinds of pre-training objectives, namely discriminative pre-training and generative pre-training. Discriminative pre-training distinguishes context sentence(s) for a given sentence from non-context sentence(s) BIBREF3, BIBREF4, with the aim to capture inter-sentence relationships. Generative pre-training follows the language model paradigm:
where $x_{t}$ is the $t^{th}$ word in the textual sequence to generate, $T$ indicates sequence length, $\theta $ stands for learnable parameters, and $C$ is the context information, which is defined by the pre-training objective. ELMo BIBREF5 and GPT (short for Generative Pre-training) BIBREF6 adopt uni-directional Transformer BIBREF7 and bi-directional LSTM BIBREF8 language models, respectively. In this case, the context is defined as $x_{1:t}$ or $x_{t+1:T}$. BERT BIBREF3 is trained with a novel masked language model (MLM), which is a non-autoregressive way of generation. Specifically, MLM randomly replaces a fixed proportion of tokens in each sentence with a special [MASK] token or a random token, which results in a corrupted sentence $X_{\text{mask}}$, and predicts each replaced token based on the same context $X_{\text{mask}}$. To alleviate the inconsistency with target tasks caused by the introduction of [MASK] token, XLNet BIBREF9 introduces permutation-based language model, which conducts autoregressive language modeling over all possible permutations of the original word sequence. This gives rise to a context $C=X_{\mathbf {z}_{1:t-1}}$, where $\mathbf {z}$ is a certain permutation of $[1,2, \ldots , T]$, according to the definitions in BIBREF9. BIBREF10 and BIBREF11 pre-trained an encoder-decoder framework to reconstruct the input sentence and the surrounding sentence, respectively, and the encoded input sentence thereby is included in the context $C$.
The sentence representations learned by LMs can be used to perform many NLU tasks by adding a simple linear classifier. Despite the objective of language modeling, the pre-trained representations and have successfuly pushed the state-of-the-art on multiple benchmarks .
Unsupervised Pre-training and Parameter Initialization for NLG
NLG systems are usually built with an encoder-decoder framework, where the encoder reads the context information and the decoder generates the target text from the encoded vectorial representations. A direct way to utilize the pre-trained models is to initialize part of the encoder (when dealing with textual context) and/or the decoder with pre-trained parameters. For the encoder, pre-training is expected to provide better sentence representations, as we discussed in Section SECREF2. For the decoder, the intuition is to endow the model with some rudimentary ability for text generation.
BIBREF12 employed BERT as the encoder for abstractive text summarization, with some additional techniques to help integrate the BERT-initialized encoder with the randomly initialized decoder, which we will explicate in Section SECREF12. GPT-2 BIBREF13 inherited the left-to-right LM pre-training objective from GPT and extended the application to NLG, where the pre-trained LM directly serves as the language generator, with some special symbols to identify task-specific contexts. In the case of zero-shot task transfer, preliminary experiments showed that straightforward adaption of GPT-2 compares unfavorably with other unsupervised baselines.
BIBREF14 is among the first attempts to investigate unsupervised pre-training for sequence to sequence (Seq2Seq) learning. They used pre-trained LSTM-based LMs to initialize the first layer of the encoder and the decoder, which act as representation extractors. An additional LSTM layer, which is randomly initialized, is then added on top of the pre-trained LMs to build the Seq2Seq framework. To make use of the text generation ability of LMs, the output softmax layer of the decoder LM is also retained. Some recent endeavours BIBREF15, BIBREF16 explored multiple combinations of GPT- and BERT-based models to initialize the encoder and the decoder, respectively. Although remarkable results are observed, the separately pre-trained LMs are still inconsistent with the Seq2Seq framework.
Architecture-based Methods ::: Inducing Task-Specific Architecture in Pre-training
Separately initializing the encoder and the decoder with LMs neglects the interaction between the two modules at the pre-training stage, which is sub-optimal. For NLG tasks that can be modeled as Seq2Seq learning, it is feasible to jointly pre-train the encoder and the decoder. Existing approaches for this sake can be categorized into three variants: Denoising autoencoders (DAEs), conditional masked language models (CMLMs) and sequence to sequence language models (Seq2Seq LMs).
Architecture-based Methods ::: Inducing Task-Specific Architecture in Pre-training ::: Denoising Autoencoder
An intuitive way to conduct unsupervised Seq2Seq learning is to train an autoencoder (AE) based on the encoder-decoder framework. Different from AEs, DAEs take a corrupted sentence as input and reconstruct the original sentence. The advantage is that the corrupted input will force the decoder to extract relevant information from the source side for text generation. To obtain the corrupted sentence, BIBREF17 designed three noising functions: shuffle, delete and replace (the left plot of Figure FIGREF4 gives an illustration), each of which is controlled by a pre-defined probability distribution. To be more specific, each token in the raw sequence is assigned a new index based on a Gaussian distribution $N(0, \sigma )$; the delete and replace operations on a token are determined by a Bernoulli distribution $B(p)$ with a Beta distribution as prior. The three functions are applied to the raw sequences in random order.
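As a concrete illustration, the following minimal Python sketch implements the three noising functions as described above; the parameter values and function names are illustrative assumptions (the Beta prior over the Bernoulli parameter is omitted for brevity), not the exact implementation of BIBREF17.
import numpy as np
def shuffle(tokens, sigma=0.5):
    # Assign each token a new index: its position plus Gaussian noise N(0, sigma), then reorder.
    keys = [i + np.random.normal(0.0, sigma) for i in range(len(tokens))]
    return [tok for _, tok in sorted(zip(keys, tokens))]
def delete(tokens, p=0.1):
    # Drop each token independently with Bernoulli probability p.
    return [tok for tok in tokens if np.random.rand() >= p]
def replace(tokens, p=0.1, vocab=("the", "of", "and")):
    # Substitute each token with a random vocabulary item with probability p.
    return [np.random.choice(vocab) if np.random.rand() < p else tok for tok in tokens]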
Architecture-based Methods ::: Inducing Task-Specific Architecture in Pre-training ::: Conditional Masked Language Model
CMLM BIBREF18 extends the single-model MLM proposed by BIBREF3 to the encoder-decoder setting, where the masked text sequence is read by the encoder, and the decoder only reconstructs the masked tokens, in contrast to the entire sequence in DAEs. As the middle plot of Figure FIGREF4 shows, CMLM masks consecutive tokens, and the unmasked tokens on the encoder side are masked when being fed to the decoder. Following the notations in BIBREF18, let us assume that the tokens with index from $u$ to $v$ are masked from the raw sentence $X$, which results in $X^{\backslash u: v}$, and $X^{u: v}$ denotes the decoder input. Then, when predicting each masked token $x_{t}$ ($u \le t \le v$), the context is $X^{u: v}_{<t}$ and $X^{\backslash u: v}$. The underlying motivation, as BIBREF18 argued, is to force the encoder to understand the meaning of the unmasked tokens, which is achieved by encoder-side masks, and to encourage the decoder to refer to the source information rather than the leftward target tokens, which is achieved by decoder-side masks.
Architecture-based Methods ::: Inducing Task-Specific Architecture in Pre-training ::: Sequence to Sequence Language Model
Seq2Seq LM BIBREF19 performs Seq2Seq modeling using a single Transformer model, with the concatenation of source sentence and target sentence as input. To simulate Seq2Seq learning with encoder-decoder frameworks, the attention span of each target token is constrained to the source tokens and the leftward target tokens, which is achieved by self-attention masks (see the right plot of Figure FIGREF4). In this way, the abilities to extract language representations and to generate text are integrated into a single model. It is worth mentioning that Seq2Seq LM does not auto-regressively generate the target sentence, but instead predicts masked tokens based on the contexts controlled by self-attention masks. In other words, Seq2Seq LM still belongs to the family of MLMs. Apart from Seq2Seq LM, BIBREF19 also explored uni-directional LM and bi-directional LM structures to perform the MLM-based cloze task, and incorporated the three kinds of LMs to build the final pre-training objective.
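To make the masking scheme concrete, here is a small Python sketch of how such a self-attention mask over the concatenated source and target could be constructed; details such as whether source positions may attend to target positions, or whether a token attends to itself, vary across implementations and are assumptions here.
import numpy as np
def seq2seq_lm_mask(src_len, tgt_len):
    # mask[i, j] == True means position i may attend to position j
    # over the concatenated [source ; target] input.
    total = src_len + tgt_len
    mask = np.zeros((total, total), dtype=bool)
    mask[:src_len, :src_len] = True          # source attends within the source (fully visible)
    for i in range(src_len, total):
        mask[i, :src_len] = True             # target attends to all source tokens
        mask[i, src_len:i + 1] = True        # ... and to leftward target tokens (incl. itself)
    return mask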
Architecture-based Methods ::: Encoder-Agnostic Architectures for Adaptation
Although the Seq2Seq-based pre-training methods exhibit strong performance, they are confined to text-to-text generation. In order to encompass more diverse contexts, some studies began to investigate encoder-agnostic pre-training architectures BIBREF22, BIBREF23. Context Attention and Pseudo Self-Attention are two typical variants presented by BIBREF23, which differ in the way that the task-specific context is injected (see Figure FIGREF11). Context Attention takes the form of a standard Transformer decoder, with the layer that attends to the encoder outputs being randomly initialized. Pseudo Self-Attention considers the context vectors and the previous-layer decoder outputs as an integral input, and the attended results are computed as follows:
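(One plausible formulation, assuming standard scaled dot-product attention, with $[\cdot \, ; \, \cdot ]$ denoting row-wise concatenation; the scaling factor is an assumption.) $\operatorname{PSA}(C, Y)=\operatorname{softmax}\left( \frac{(Y W_{q}) \left[ C W^{c}_{k} ; Y W^{y}_{k} \right]^{\top }}{\sqrt{d_{model}}} \right) \left[ C W^{c}_{v} ; Y W^{y}_{v} \right]$,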
where $C \in \mathbb {R}^{|C| \times d_{c}}$ and $Y \in \mathbb {R}^{|Y| \times d_{y}}$ are the context vectors and representations of the target textual sequence, respectively. The linear transformation matrices $W^{c}_{k}, W^{c}_{v} \in \mathbb {R}^{d_{c} \times d_{model}}$ with respect to $C$ are added to project the context to the self-attention space, and $W_{q}, W^{y}_{k}, W^{y}_{v} \in \mathbb {R}^{d_{y} \times d_{model}}$ are part of the pre-trained model.
Apart from the performance on target tasks, an alternative metric to gauge the quality of encoder-agnostic architectures is the degree to which the pre-trained parameters have to change in order to inject the task-specific context. BIBREF23 compared the parameter changes of Context Attention and Pseudo Self-Attention in the feed-forward layer, and discovered that Pseudo Self-Attention is more robust under this evaluation.
Strategy-based Methods ::: Fine-tuning Schedules for Adaption
When the pre-trained model is only a part of the target task system, fine-tuning requires joint learning of components initialized in different fashions, which can make the training process unstable. The pre-trained model may also suffer from an aggravated catastrophic forgetting problem, as it has to coordinate with other components during fine-tuning BIBREF24, BIBREF25. From the perspective of optimization, it is unreasonable to schedule the pre-trained components and the newly-introduced components with the same learning rate, considering that the former already possess some unique knowledge. A common assumption is that the pre-trained parameters should be updated at a slower learning rate and with smoother decay BIBREF12, BIBREF25. The rationale behind such a setting is that fine-tuning with more accurate gradients can prevent the pre-trained parameters from deviating too far away from the original point, while the newly-introduced components need to quickly converge to the target parameter space. To this end, BIBREF12 adopted two Adam optimizers with different learning rates for the pre-trained encoder and the randomly initialized decoder. The learning rates are scheduled as in BIBREF7 with different warm-up steps:
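(A plausible reconstruction of the schedule, following the inverse-square-root form of BIBREF7; the exact functional form is an assumption.) $lr_{\operatorname{Enc/Dec}} = \tilde{l}r_{\operatorname{Enc/Dec}} \cdot \min \left( step^{-0.5},\; step \cdot {warmup}_{\operatorname{Enc/Dec}}^{-1.5} \right)$,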
where ${warmup}_{\operatorname{Enc/Dec}}$ and $\tilde{l}r_{\operatorname{Enc/Dec}}$ determine the speed of learning rate changes and the max learning rates, respectively.
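A minimal PyTorch sketch of this two-optimizer setup is given below; the module names, maximum learning rates, and warm-up values are illustrative assumptions rather than the exact configuration of BIBREF12.
import torch
def scheduled_lr(step, max_lr, warmup):
    # Inverse-square-root schedule with linear warm-up, capped by max_lr.
    step = max(step, 1)
    return max_lr * min(step ** -0.5, step * warmup ** -1.5)
encoder = torch.nn.Linear(16, 16)   # stands in for the pre-trained encoder
decoder = torch.nn.Linear(16, 16)   # stands in for the randomly initialized decoder
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1.0)
opt_dec = torch.optim.Adam(decoder.parameters(), lr=1.0)
for step in range(1, 101):
    for opt, max_lr, warmup in ((opt_enc, 2e-3, 20000), (opt_dec, 1e-1, 10000)):
        for group in opt.param_groups:
            group["lr"] = scheduled_lr(step, max_lr, warmup)
    # ... compute loss, loss.backward(), opt_enc.step(), opt_dec.step(), zero_grad() ...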
Strategy-based Methods ::: Proxy Tasks for Adaption
Large-scale unlabelled data provides generic linguistic knowledge, but the target tasks have unique data distributions and objectives. An effective way to bridge this gap is to introduce proxy tasks with moderate changes to the pre-training objectives, which at the same time take the labeled data into account BIBREF15, BIBREF20. Translation Language Modeling (TLM) BIBREF15 is a special generalization of MLM to the cross-lingual situation. It leverages parallel machine translation corpora for further training of the LMs that are pre-trained on monolingual corpora. Specifically, the source language sentence and the corresponding target language sentence are fed to the model in parallel, with random tokens from each language being masked to perform the cloze-style prediction as in MLM. Different from monolingual MLM, TLM encourages word predictions to rely on the interdependence between the two languages, therefore the sentence representations learned from separate languages can be well aligned.
For some particular NLG tasks, existing proxy tasks designed under the supervised setup can also work with unsupervised pre-training models. For instance, in neural text summarization, the combination of extractive and abstractive objectives can generate better summaries BIBREF26, BIBREF27. Inspired by this, BIBREF12 introduced extractive summarization as a proxy task to fine-tune the pre-trained BERT, before adopting it as the abstractive summarization encoder. Compared with the original BERT features, the representations learned from extractive summarization contain more task-specific information, therefore conveying the meaning of source texts better.
Strategy-based Methods ::: Knowledge Distillation for Adaption
The aforementioned methods are diverse in implementation, but share the common idea of employing the pre-trained models through parameter initialization. An alternative way to exploit the pre-trained models is using the knowledge distillation technique BIBREF28. Knowledge distillation is a special form of training, where a student network learns from the supervision signals produced by a teacher network.
Taking BERT as an example, the pre-trained MLM contains global information, which can teach the autoregressive Seq2Seq models to “see from the future” BIBREF20. In practice, the probability distribution predicted by BERT is regarded as a soft label to compute the cross-entropy loss function :
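(One plausible form of this loss, using the notation explained in the next sentence; the exact formulation is an assumption.) $\mathcal {L}_{\mathrm {ce}}(\theta ) = -\sum _{t=1}^{|Y|} \sum _{w \in \mathcal {V}} P\left(y_{t}=w \mid Y^{masked}, X ; \phi \right) \log P\left(y_{t}=w \mid y_{<t}, X ; \theta \right)$,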
where $X$, $Y$ and $Y^{masked}$ are the source sequence, the raw target sequence and the masked target sequence, respectively. $\mathcal {V}$ denotes the output vocabulary. $\theta $ indicates the parameters of the student network (Seq2Seq), which are learnable, and $\phi $ indicates the BERT parameters, which are fixed. In this way, the knowledge from unsupervised pre-training can be flexibly transferred to the target tasks, dispensing with the size and architecture limitations.
The supervision can also be derived from the hidden representations BIBREF25, with a mean-squared-error (MSE) distillation loss:
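(A plausible form, where $h^{\mathrm {BERT}}_{m}$ and $h^{\mathrm {S2S}}_{n}$ are assumed to denote hidden states of a BERT layer and a Seq2Seq layer, respectively.) $\mathcal {L}_{\mathrm {mse}} = \big \Vert h^{\mathrm {BERT}}_{m} - h^{\mathrm {S2S}}_{n} \big \Vert _{2}^{2}$,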
where $m$ and $n$ are hyper-parameters denoting the layer subscripts. Compared with the probability soft labels, the representation distillation method requires the Seq2Seq model to have the same hidden size as BERT, which is a stricter constraint.
Combining the knowledge distillation loss and the standard generative loss for Seq2Seq learning gives rise to the final objective to optimize:
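(One plausible combination, assuming a convex weighting.) $\mathcal {L}(\theta ) = \alpha \, \mathcal {L}_{\mathrm {KD}}(\theta ) + (1 - \alpha ) \, \mathcal {L}_{\mathrm {gen}}(\theta )$,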
where $\alpha $ is the weighting term that balances the contribution of the two kinds of loss functions.
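The following short PyTorch sketch shows how such a soft-label distillation term could be combined with the generative loss; the function and tensor names are illustrative assumptions.
import torch
import torch.nn.functional as F
def distill_objective(student_logits, teacher_probs, gen_loss, alpha=0.5):
    # Soft-label cross-entropy between the fixed teacher (e.g., BERT) distribution
    # and the student Seq2Seq model's predictive distribution.
    log_q = F.log_softmax(student_logits, dim=-1)
    kd_loss = -(teacher_probs * log_q).sum(dim=-1).mean()
    # Weighted combination with the standard generative (negative log-likelihood) loss.
    return alpha * kd_loss + (1.0 - alpha) * gen_loss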
Discussions ::: The Relationship between Architecture- and Strategy-based Methods
We have analysed two major challenges faced by the application of unsupervised pre-training to NLG (see Section SECREF1). On this basis, we introduced existing methodologies from the architecture and strategy considerations. The architecture-based methods are mainly proposed in response to the first challenge. Since the architecture of the pre-trained model has a significant effect on the downstream task (when the pre-trained parameters are used for initialization), architectures have to be designed in advance to narrow the discrepancy between pre-training and training on target tasks. This motivation has shown great effectiveness on the Seq2Seq framework BIBREF17, BIBREF18, BIBREF19. The strategy-based methods focus on the second challenge. They take a post-processing point of view, with the aim to make the best of the pre-trained model at the target task training stage. It is noteworthy that the challenges are not inherently independent, and the two types of methods can actually complement each other. For example, the fine-tuning schedules can alleviate the negative effects caused by the modification of pre-trained structures, and the catastrophic forgetting problem can also be addressed by devising a general task-agnostic architecture.
Discussions ::: Experimental Phenomena
Existing research on unsupervised pre-training for NLG is conducted on various tasks for different purposes. Probing into the assorted empirical results may help us discover some interesting phenomena:
The advantage of pre-training gradually diminishes with the increase of labeled data BIBREF14, BIBREF17, BIBREF18.
Fixed representations yield better results than fine-tuning in some cases BIBREF24.
Overall, pre-training the Seq2Seq encoder outperforms pre-training the decoder BIBREF24, BIBREF17, BIBREF15, BIBREF16.
The first two phenomena attest to the catastrophic forgetting theory. Thanks to the access to large-scale unlabeled corpora, unsupervised pre-training is able to excel at zero/low-shot settings, while the pre-trained models can achieve only marginal gains when abundant labeled data is available. This can be explained by the high quality of the dataset and the capacity of the task-specific models, which leave little space for improvement. Nonetheless, the increased supervision from labeled data can also influence the performance on pre-training tasks. By fixing the pre-trained parameters, the learned representations will not be affected by the numerous iterations of training on the target task, which makes them work better without fine-tuning.
The third phenomenon is somewhat counter-intuitive, as the generative pre-training objectives are more similar to the decoder's function. There is no unanimous theory to explain why the encoder is the more important element to pre-train. But this discovery suggests that the pre-trained LMs are more robust when acting as representation extractors, while they are more sensitive to the change of context when acting as conditional language generators.
Discussions ::: Future Directions
The diversity of NLG applications poses challenges on the employment of unsupervised pre-training, yet it also raises more scientific questions for us to explore. In terms of the future development of this technology, we emphasize the importance of answering four questions: 1) How to introduce unsupervised pre-training into NLG tasks with cross-modal context? 2) How to design a generic pre-training algorithm to fit a wide range of NLG tasks? 3) How to reduce the computing resources required for large-scale pre-training? 4) What aspect of knowledge do the pre-trained models provide for better language generation?
NLG tasks can be defined by the context features and mapping functions. The introduction of cross-lingual textual features BIBREF15 and task-specific Seq2Seq architectures BIBREF18, BIBREF17, BIBREF19 in the pre-training stage has successfully boosted the performance on text-to-text generation. For NLG tasks concerning multiple modalities, it is conceivable that pre-training methods could also benefit from the joint consideration of cross-modal features. For example, in the vision-and-language field, the learning of cross-modal representations has proven to be highly effective BIBREF29, BIBREF30, but such representations can not yet be extracted from unpaired images and texts for image-grounded text generation, to the best of our knowledge.
In NLU, it is possible to pre-train one model to obtain language representations once and for all. As for NLG, a task-agnostic pre-training algorithm should transcend the purpose of representation learning, and consider the general ability for language generation. The notion of “encoder-agnostic adaption” BIBREF23 makes a preliminary step towards this direction, but still remains far from approaching the equivalent performance as its NLU counterparts BIBREF5, BIBREF3, BIBREF6, BIBREF9.
Due to the colossal scale of the pre-training corpora, a large number of parameters is essential to achieve favorable performance. As a result, pre-training for NLG systems usually requires at least 8 GPU cards BIBREF19, BIBREF18, BIBREF15, which also hinders real-world applications. To alleviate the memory consumption problem, existing work resorted to knowledge distillation to transfer the knowledge from a large teacher network to a small student network BIBREF31, BIBREF32, or to parameter reduction techniques to prune the model size in a more direct way BIBREF33. However, the research context is limited to NLU scenarios, and similar endeavours are necessary for NLG applications.
Another important branch of research on unsupervised pre-training in NLP tries to explain what kind of knowledge can be learned from pre-training. Related work has been done on the basis of both language understanding BIBREF34, BIBREF35 and generation BIBREF36. Specially, BIBREF36 analysed the characteristics of texts generated from a pre-trained GPT-2 by evaluating them over a wide spectrum of metrics. We argue that a deeper understanding of the way in which unsupervised pre-training contributes to better text generation, and of the intrinsic mechanisms of the pre-trained models, is also crucial to future work.
Conclusion
Unsupervised pre-training has defined the state of the art on a variety of NLP tasks. However, in the field of NLG, the diversity of context information is still impeding the application of unsupervised pre-training. The major challenges exist in designing model architectures to cater for the assorted context, and in retaining the general knowledge learned from pre-training. In this review, we survey the recent unsupervised methods to utilize large-scale corpora for NLG purposes, with a highlight on those aiming at facilitating the integration of pre-trained models with downstream tasks. We propose to classify them into architecture- and strategy-based methods, followed by detailed introductions and discussions of their pros and cons. Based on the comparison of these methods and analyses of some informative experimental results from previous publications, we summarize some scientific questions that have not yet been well understood, and suggest that future work pay attention to these questions. | fine-tuning schedules that elaborately design the control of learning rates for optimization, proxy tasks that leverage labeled data to help the pre-trained model better fit the target data distribution, knowledge distillation approaches that ditch the paradigm of initialization with pre-trained parameters by adopting the pre-trained model as a teacher network |
cd37ad149d500e1c7d2de9de1f4bae8dcc443a72 | cd37ad149d500e1c7d2de9de1f4bae8dcc443a72_0 | Q: How do architecture-based methods handle obstacles in NLG?
Text: Introduction
Unsupervised pre-training has sparked a sensational research interest in the natural language processing (NLP) community. This technology provides a promising way to exploit linguistic information from large-scale unlabelled textual data, which can serve as an auxiliary prior knowledge to benefit a wide range of NLP applications. In the literature, language modeling (LM) is a prevalent task for pre-training, where the target words are predicted conditioned on a given context. Therefore, it is intuitive to employ the pre-trained LMs for natural language generation, as the pre-training objective naturally accords with the goal of NLG. However, revolutionary improvements are only observed in the field of NLU.
The primary factor that impedes the progress of unsupervised pre-training in NLG is an idiosyncratic nature of text generation: Basically, we do not write words from scratch, but instead based on particular context, e.g., the source language sentences for translation, the dialog histories for response generation, and the visual scenes for image captioning, among others. In unsupervised pre-training, the task-specific context is not available, which leads to a discrepancy between pre-training and training in the target task. More precisely, the challenges posed by the discrepancy can be reflected in two aspects: First, the diverse context makes it intractable to design a universal representation extractor as in the case of NLU, and the pre-trained language generators may have to modify their inner structures to deal with the task-specific context. Second, the mismatch in data distribution and objective between the two training stages might result in the performance on the pre-training tasks being compromised during fine-tuning, which is dubbed as the catastrophic forgetting problem BIBREF0.
In response to the above challenges, two lines of work are proposed by resorting to architecture-based and strategy-based solutions, respectively. Architecture-based methods either try to induce task-specific architecture during pre-training (task-specific methods), or aim at building a general pre-training architecture to fit all downstream tasks (task-agnostic methods). Strategy-based methods depart from the pre-training stage, seeking to take advantage of the pre-trained models during the process of target task learning. The approaches include fine-tuning schedules that elaborately design the control of learning rates for optimization, proxy tasks that leverage labeled data to help the pre-trained model better fit the target data distribution, and knowledge distillation approaches that ditch the paradigm of initialization with pre-trained parameters by adopting the pre-trained model as a teacher network.
The remainder of this review is organized as follows: In Section SECREF2, we will introduce the background knowledge about unsupervised pre-training for NLU, followed by a sketch of how the pre-trained models are employed through parameter initialization for NLG in Section SECREF3. In Section SECREF4, we will describe the architecture-based methods, and the strategy-based methods are presented in Section SECREF5. Section SECREF6 provides some in-depth discussions, and Section SECREF7 concludes this review.
Background: Unsupervised Pre-training for NLU
Learning fine-grained language representations is a perennial topic in natural language understanding. In restrospect, compelling evidences suggest that good representations can be learned through unsupervised pre-training.
Early work focused on word-level representations BIBREF1, BIBREF2, which encode each word independently. For sentence-level representations, there are roughly two kinds of pre-training objectives, namely discriminative pre-training and generative pre-training. Discriminative pre-training distinguishes context sentence(s) for a given sentence from non-context sentence(s) BIBREF3, BIBREF4, with the aim to capture inter-sentence relationships. Generative pre-training follows the language model paradigm: $\mathcal {L}_{\mathrm {LM}}(\theta ) = \sum _{t=1}^{T} \log P(x_{t} \mid C ; \theta )$,
where $x_{t}$ is the $t^{th}$ word in the textual sequence to generate, $T$ indicates sequence length, $\theta $ stands for learnable parameters, and $C$ is the context information, which is defined by the pre-training objective. ELMo BIBREF5 and GPT (short for Generative Pre-training) BIBREF6 adopt bi-directional LSTM BIBREF8 and uni-directional Transformer BIBREF7 language models, respectively. In this case, the context is defined as $x_{1:t-1}$ or $x_{t+1:T}$. BERT BIBREF3 is trained with a novel masked language model (MLM), which is a non-autoregressive form of generation. Specifically, MLM randomly replaces a fixed proportion of tokens in each sentence with a special [MASK] token or a random token, which results in a corrupted sentence $X_{\text{mask}}$, and predicts each replaced token based on the same context $X_{\text{mask}}$. To alleviate the inconsistency with target tasks caused by the introduction of the [MASK] token, XLNet BIBREF9 introduces a permutation-based language model, which conducts autoregressive language modeling over all possible permutations of the original word sequence. This gives rise to a context $C=X_{\mathbf {z}_{1:t-1}}$, where $\mathbf {z}$ is a certain permutation of $[1,2, \ldots , T]$, according to the definitions in BIBREF9. BIBREF10 and BIBREF11 pre-trained an encoder-decoder framework to reconstruct the input sentence and the surrounding sentence, respectively, so the encoded input sentence is thereby included in the context $C$.
The sentence representations learned by LMs can be used to perform many NLU tasks by adding a simple linear classifier. Despite being trained with a language modeling objective, the pre-trained representations have successfully pushed the state of the art on multiple benchmarks.
Unsupervised Pre-training and Parameter Initialization for NLG
NLG systems are usually built with an encoder-decoder framework, where the encoder reads the context information and the decoder generates the target text from the encoded vectorial representations. A direct way to utilize the pre-trained models is to initialize part of the encoder (when dealing with textual context) and/or the decoder with pre-trained parameters. For the encoder, pre-training is expected to provide better sentence representations, as we discussed in Section SECREF2. For the decoder, the intuition is to endow the model with some rudimentary ability for text generation.
BIBREF12 employed BERT as the encoder for abstractive text summarization, with some additional techniques to help integrate the BERT-initialized encoder with the randomly initialized decoder, which we will explicate in Section SECREF12. GPT-2 BIBREF13 inherited the left-to-right LM pre-training objective from GPT and extended the application to NLG, where the pre-trained LM directly serves as the language generator, with some special symbols to identify task-specific contexts. In the case of zero-shot task transfer, preliminary experiments showed that straightforward adaption of GPT-2 compares unfavorably with other unsupervised baselines.
BIBREF14 is among the first attempts to investigate unsupervised pre-training for sequence to sequence (Seq2Seq) learning. They used pre-trained LSTM-based LMs to initialize the first layer of the encoder and the decoder, which act as representation extractors. An additional LSTM layer, which is randomly initialized, is then added on top of the pre-trained LMs to build the Seq2Seq framework. To make use of the text generation ability of LMs, the output softmax layer of the decoder LM is also retained. Some recent endeavours BIBREF15, BIBREF16 explored multiple combinations of GPT- and BERT-based models to initialize the encoder and the decoder, respectively. Although remarkable results are observed, the separately pre-trained LMs are still inconsistent with the Seq2Seq framework.
Architecture-based Methods ::: Inducing Task-Specific Architecture in Pre-training
Separately initializing the encoder and the decoder with LMs neglects the interaction between the two modules at the pre-training stage, which is sub-optimal. For NLG tasks that can be modeled as Seq2Seq learning, it is feasible to jointly pre-train the encoder and the decoder. Existing approaches for this sake can be categorized into three variants: Denoising autoencoders (DAEs), conditional masked language models (CMLMs) and sequence to sequence language models (Seq2Seq LMs).
Architecture-based Methods ::: Inducing Task-Specific Architecture in Pre-training ::: Denoising Autoencoder
An intuitive way to conduct unsupervised Seq2Seq learning is to train an autoencoder (AE) based on the encoder-decoder framework. Different from AEs, DAEs take a corrupted sentence as input and reconstruct the original sentence. The advantage is that the corrupted input will force the decoder to extract relevant information from the source side for text generation. To obtain the corrupted sentence, BIBREF17 designed three noising functions: shuffle, delete and replace (the left plot of Figure FIGREF4 gives an illustration), each of which is controlled by a pre-defined probability distribution. To be more specific, each token in the raw sequence is assigned a new index based on a Gaussian distribution $N(0, \sigma )$; the delete and replace operations on a token are determined by a Bernoulli distribution $B(p)$ with a Beta distribution as prior. The three functions are applied to the raw sequences in random order.
Architecture-based Methods ::: Inducing Task-Specific Architecture in Pre-training ::: Conditional Masked Language Model
CMLM BIBREF18 extends the single-model MLM proposed by BIBREF3 to the encoder-decoder setting, where the masked text sequence is read by the encoder, and the decoder only reconstructs the masked tokens, in contrast to the entire sequence in DAEs. As the middle plot of Figure FIGREF4 shows, CMLM masks consecutive tokens, and the unmasked tokens on the encoder side are masked when being fed to the decoder. Following the notations in BIBREF18, let us assume that the tokens with index from $u$ to $v$ are masked from the raw sentence $X$, which results in $X^{\backslash u: v}$, and $X^{u: v}$ denotes the decoder input. Then, when predicting each masked token $x_{t}$ ($u \le t \le v$), the context is $X^{u: v}_{<t}$ and $X^{\backslash u: v}$. The underlying motivation, as BIBREF18 argued, is to force the encoder to understand the meaning of the unmasked tokens, which is achieved by encoder-side masks, and to encourage the decoder to refer to the source information rather than the leftward target tokens, which is achieved by decoder-side masks.
Architecture-based Methods ::: Inducing Task-Specific Architecture in Pre-training ::: Sequence to Sequence Language Model
Seq2Seq LM BIBREF19 performs Seq2Seq modeling using a single Transformer model, with the concatenation of source sentence and target sentence as input. To simulate Seq2Seq learning with encoder-decoder frameworks, the attention span of each target token is constrained to the source tokens and the leftward target tokens, which is achieved by self-attention masks (see the right plot of Figure FIGREF4). In this way, the abilities to extract language representations and to generate text are integrated into a single model. It is worth mentioning that Seq2Seq LM does not auto-regressively generate the target sentence, but instead predicts masked tokens based on the contexts controlled by self-attention masks. In other words, Seq2Seq LM still belongs to the family of MLMs. Apart from Seq2Seq LM, BIBREF19 also explored uni-directional LM and bi-directional LM structures to perform the MLM-based cloze task, and incorporated the three kinds of LMs to build the final pre-training objective.
Architecture-based Methods ::: Encoder-Agnostic Architectures for Adaptation
Although the Seq2Seq-based pre-training methods exhibit strong performance, they are confined to text-to-text generation. In order to encompass more diverse contexts, some studies began to investigate encoder-agnostic pre-training architectures BIBREF22, BIBREF23. Context Attention and Pseudo Self-Attention are two typical variants presented by BIBREF23, which differ in the way that the task-specific context is injected (see Figure FIGREF11). Context Attention takes the form of a standard Transformer decoder, with the layer that attends to the encoder outputs being randomly initialized. Pseudo Self-Attention considers the context vectors and the previous-layer decoder outputs as an integral input, and the attended results are computed as follows:
where $C \in \mathbb {R}^{|C| \times d_{c}}$ and $Y \in \mathbb {R}^{|Y| \times d_{y}}$ are the context vectors and representations of the target textual sequence, respectively. The linear transformation matrices $W^{c}_{k}, W^{c}_{v} \in \mathbb {R}^{d_{c} \times d_{model}}$ with respect to $C$ are added to project the context to the self-attention space, and $W_{q}, W^{y}_{k}, W^{y}_{v} \in \mathbb {R}^{d_{y} \times d_{model}}$ are part of the pre-trained model.
Apart from the performance on target tasks, an alternative metric to gauge the quality of encoder-agnostic architectures is the degree to which the pre-trained parameters have to change in order to inject the task-specific context. BIBREF23 compared the parameter changes of Context Attention and Pseudo Self-Attention in the feed-forward layer, and discovered that Pseudo Self-Attention is more robust under this evaluation.
Strategy-based Methods ::: Fine-tuning Schedules for Adaption
When the pre-trained model is only a part of the target task system, fine-tuning requires joint learning of components initialized in different fashions, which can make the training process unstable. The pre-trained model may also suffer from an aggravated catastrophic forgetting problem, as it has to coordinate with other components during fine-tuning BIBREF24, BIBREF25. From the perspective of optimization, it is unreasonable to schedule the pre-trained components and the newly-introduced components with the same learning rate, considering that the former already possess some unique knowledge. A common assumption is that the pre-trained parameters should be updated at a slower learning rate and with smoother decay BIBREF12, BIBREF25. The rationale behind such a setting is that fine-tuning with more accurate gradients can prevent the pre-trained parameters from deviating too far away from the original point, while the newly-introduced components need to quickly converge to the target parameter space. To this end, BIBREF12 adopted two Adam optimizers with different learning rates for the pre-trained encoder and the randomly initialized decoder. The learning rates are scheduled as in BIBREF7 with different warm-up steps:
where ${warmup}_{\operatorname{Enc/Dec}}$ and $\tilde{l}r_{\operatorname{Enc/Dec}}$ determine the speed of learning rate changes and the max learning rates, respectively.
Strategy-based Methods ::: Proxy Tasks for Adaption
Large-scale unlabelled data provides generic linguistic knowledge, but the target tasks have unique data distributions and objectives. An effective way to bridge this gap is to introduce proxy tasks with moderate changes to the pre-training objectives, which at the same time take the labeled data into account BIBREF15, BIBREF20. Translation Language Modeling (TLM) BIBREF15 is a special generalization of MLM to the cross-lingual situation. It leverages parallel machine translation corpora for further training of the LMs that are pre-trained on monolingual corpora. Specifically, the source language sentence and the corresponding target language sentence are fed to the model in parallel, with random tokens from each language being masked to perform the cloze-style prediction as in MLM. Different from monolingual MLM, TLM encourages word predictions to rely on the interdependence between the two languages, therefore the sentence representations learned from separate languages can be well aligned.
For some particular NLG tasks, existing proxy tasks designed under the supervised setup can also work with unsupervised pre-training models. For instance, in neural text summarization, the combination of extractive and abstractive objectives can generate better summaries BIBREF26, BIBREF27. Inspired by this, BIBREF12 introduced extractive summarization as a proxy task to fine-tune the pre-trained BERT, before adopting it as the abstractive summarization encoder. Compared with the original BERT features, the representations learned from extractive summarization contain more task-specific information, therefore conveying the meaning of source texts better.
Strategy-based Methods ::: Knowledge Distillation for Adaption
The aforementioned methods are diverse in implementation, but share the common idea of employing the pre-trained models through parameter initialization. An alternative way to exploit the pre-trained models is using the knowledge distillation technique BIBREF28. Knowledge distillation is a special form of training, where a student network learns from the supervision signals produced by a teacher network.
Taking BERT as an example, the pre-trained MLM contains global information, which can teach the autoregressive Seq2Seq models to “see from the future” BIBREF20. In practice, the probability distribution predicted by BERT is regarded as a soft label to compute the cross-entropy loss function :
where $X$, $Y$ and $Y^{masked}$ are the source sequence, the raw target sequence and the masked target sequence, respectively. $\mathcal {V}$ denotes the output vocabulary. $\theta $ indicates the parameters of the student network (Seq2Seq), which are learnable, and $\phi $ indicates the BERT parameters, which are fixed. In this way, the knowledge from unsupervised pre-training can be flexibly transferred to the target tasks, dispensing with the size and architecture limitations.
The supervision can also be derived from the hidden representations BIBREF25, with a mean-squared-error (MSE) distillation loss:
where $m$ and $n$ are hyper-parameters denoting the layer subscripts. Compared with the probability soft labels, the representation distillation method requires the Seq2Seq model to have the same hidden size as BERT, which is a stricter constraint.
Combining the knowledge distillation loss and the standard generative loss for Seq2Seq learning gives rise to the final objective to optimize:
where $\alpha $ is the weighting term that balances the contribution of the two kinds of loss functions.
Discussions ::: The Relationship between Architecture- and Strategy-based Methods
We have analysed two major challenges faced by the application of unsupervised pre-training to NLG (see Section SECREF1). On this basis, we introduced existing methodologies from the architecture and strategy considerations. The architecture-based methods are mainly proposed in response to the first challenge. Since the architecture of the pre-trained model has a significant effect on the downstream task (when the pre-trained parameters are used for initialization), architectures have to be designed in advance to narrow the discrepancy between pre-training and training on target tasks. This motivation has shown great effectiveness on the Seq2Seq framework BIBREF17, BIBREF18, BIBREF19. The strategy-based methods focus on the second challenge. They take a post-processing point of view, with the aim to make the best of the pre-trained model at the target task training stage. It is noteworthy that the challenges are not inherently independent, and the two types of methods can actually complement each other. For example, the fine-tuning schedules can alleviate the negative effects caused by the modification of pre-trained structures, and the catastrophic forgetting problem can also be addressed by devising a general task-agnostic architecture.
Discussions ::: Experimental Phenomena
Existing research on unsupervised pre-training for NLG is conducted on various tasks for different purposes. Probing into the assorted empirical results may help us discover some interesting phenomena:
The advantage of pre-training gradually diminishes with the increase of labeled data BIBREF14, BIBREF17, BIBREF18.
Fixed representations yield better results than fine-tuning in some cases BIBREF24.
Overall, pre-training the Seq2Seq encoder outperforms pre-training the decoder BIBREF24, BIBREF17, BIBREF15, BIBREF16.
The first two phenomena attest to the catastrophic forgetting theory. Thanks to the access to large-scale unlabeled corpora, unsupervised pre-training is able to excel at zero/low-shot settings, while the pre-trained models can achieve only marginal gains when abundant labeled data is available. This can be explained by the high quality of the dataset and the capacity of the task-specific models, which leave little space for improvement. Nonetheless, the increased supervision from labeled data can also influence the performance on pre-training tasks. By fixing the pre-trained parameters, the learned representations will not be affected by the numerous iterations of training on the target task, which makes them work better without fine-tuning.
The third phenomenon is somewhat counter-intuitive, as the generative pre-training objectives are more similar to the decoder's function. There is no unanimous theory to explain why the encoder is the more important element to pre-train. But this discovery suggests that the pre-trained LMs are more robust when acting as representation extractors, while they are more sensitive to the change of context when acting as conditional language generators.
Discussions ::: Future Directions
The diversity of NLG applications poses challenges on the employment of unsupervised pre-training, yet it also raises more scientific questions for us to explore. In terms of the future development of this technology, we emphasize the importance of answering four questions: 1) How to introduce unsupervised pre-training into NLG tasks with cross-modal context? 2) How to design a generic pre-training algorithm to fit a wide range of NLG tasks? 3) How to reduce the computing resources required for large-scale pre-training? 4) What aspect of knowledge do the pre-trained models provide for better language generation?
NLG tasks can be defined by the context features and mapping functions. The introduction of cross-lingual textual features BIBREF15 and task-specific Seq2Seq architectures BIBREF18, BIBREF17, BIBREF19 in the pre-training stage has successfully boosted the performance on text-to-text generation. For NLG tasks concerning multiple modalities, it is conceivable that pre-training methods could also benefit from the joint consideration of cross-modal features. For example, in the vision-and-language field, the learning of cross-modal representations has proven to be highly effective BIBREF29, BIBREF30, but such representations can not yet be extracted from unpaired images and texts for image-grounded text generation, to the best of our knowledge.
In NLU, it is possible to pre-train one model to obtain language representations once and for all. As for NLG, a task-agnostic pre-training algorithm should transcend the purpose of representation learning, and consider the general ability for language generation. The notion of “encoder-agnostic adaption” BIBREF23 makes a preliminary step towards this direction, but still remains far from approaching the equivalent performance as its NLU counterparts BIBREF5, BIBREF3, BIBREF6, BIBREF9.
Due to the colossal scale of the pre-training corpora, a large number of parameters is essential to achieve favorable performance. As a result, pre-training for NLG systems usually requires at least 8 GPU cards BIBREF19, BIBREF18, BIBREF15, which also hinders real-world applications. To alleviate the memory consumption problem, existing work resorted to knowledge distillation to transfer the knowledge from a large teacher network to a small student network BIBREF31, BIBREF32, or to parameter reduction techniques to prune the model size in a more direct way BIBREF33. However, the research context is limited to NLU scenarios, and similar endeavours are necessary for NLG applications.
Another important branch of research on unsupervised pre-training in NLP tries to explain what kind of knowledge can be learned from pre-training. Related work has been done on the basis of both language understanding BIBREF34, BIBREF35 and generation BIBREF36. Specially, BIBREF36 analysed the characteristics of texts generated from a pre-trained GPT-2 by evaluating them over a wide spectrum of metrics. We argue that a deeper understanding of the way in which unsupervised pre-training contributes to better text generation, and of the intrinsic mechanisms of the pre-trained models, is also crucial to future work.
Conclusion
Unsupervised pre-training has defined the state of the art on a variety of NLP tasks. However, in the field of NLG, the diversity of context information is still impeding the application of unsupervised pre-training. The major challenges exist in designing model architectures to cater for the assorted context, and in retaining the general knowledge learned from pre-training. In this review, we survey the recent unsupervised methods to utilize large-scale corpora for NLG purposes, with a highlight on those aiming at facilitating the integration of pre-trained models with downstream tasks. We propose to classify them into architecture- and strategy-based methods, followed by detailed introductions and discussions of their pros and cons. Based on the comparison of these methods and analyses of some informative experimental results from previous publications, we summarize some scientific questions that have not yet been well understood, and suggest that future work pay attention to these questions. | task-specific architecture during pre-training (task-specific methods), aim at building a general pre-training architecture to fit all downstream tasks (task-agnostic methods) |
14eb2b89ba39e56c52954058b6b799a49d1b74bf | 14eb2b89ba39e56c52954058b6b799a49d1b74bf_0 | Q: How are their changes evaluated?
Text: Introduction
There is no shortage of services that are marketed as natural language understanding (nlu) solutions for use in chatbots, digital personal assistants, or spoken dialogue systems (sds). Recently, Braun2017 systematically evaluated several such services, including Microsoft LUIS, IBM Watson Conversation, API.ai, wit.ai, Amazon Lex, and RASA BIBREF0 . More recently, Liu2019b evaluated LUIS, Watson, RASA, and DialogFlow using some established benchmarks. Some nlu services work better than others in certain tasks and domains with a perhaps surprising pattern: RASA, the only fully open-source nlu service among those evaluated, consistently performs on par with the commercial services.
Though these services yield state-of-the-art performance on a handful of nlu tasks, one drawback to sds and robotics researchers is the fact that all of these nlu solutions process input at the utterance level; none of them process incrementally at the word level. Yet, research has shown that humans comprehend utterances as they unfold BIBREF1. Moreover, when a listener feels they are missing some crucial information mid-utterance, they can interject with a clarification request, so as to ensure they and the speaker are maintaining common ground BIBREF2. Users who interact with sdss perceive incremental systems as being more natural than traditional, turn-based systems BIBREF3, BIBREF4, BIBREF5; such systems offer a more human-like experience BIBREF6 and are more satisfying to interact with than non-incremental systems BIBREF7. Users even prefer interacting with an incremental sds when the system is less accurate or requires filled pauses while replying BIBREF8, or operates in a limited domain, as long as there is incremental feedback BIBREF9.
In this paper, we report our recent efforts in making the RASA nlu pipeline process incrementally. We explain briefly the RASA framework and pipeline, explain how we altered the RASA framework and individual components (including a new component which we added) to allow it to process incrementally, then we explain how we evaluated the system to ensure that RASA works as intended and how researchers can leverage this tool.
The RASA NLU Pipeline
RASA consists of nlu and core modules, the latter of which is akin to a dialogue manager; our focus here is on the nlu. The nlu itself is further modularized as pipelines which define how user utterances are processed, for example an utterance can pass through a tokenizer, named entity recognizer, then an intent classifier before producing a distribution over possible dialogue acts or intents. The pipeline and the training data are authorable (following a markdown representation; json format can also be used for the training data) allowing users to easily setup and run experiments in any domain as a standalone nlu component or as a module in a sds or chatbot. Importantly, RASA has provisions for authoring new components as well as altering existing ones.
Figure FIGREF7 shows a schematic of a pipeline for three components. The context (i.e., training data) is passed to Component A which performs its training, then persists a trained model for that component. Then the data is passed through Component A as input for Component B which also trains and persists, and so on for Component C. During runtime, the persisted models are loaded into memory and together form the nlu module.
Incrementalizing RASA
Our approach to making RASA incremental follows the incremental unit (iu) framework Schlangen2011 as has been done in previous work for dialogue processing toolkits BIBREF10 . We treat each module in RASA as an iu processing module and specifically make use of the ADD and REVOKE iu operations; for example, ADD when a new word is typed or recognized by a speech recognizer, and REVOKE if that word is identified as having been erroneously recognized in light of new information.
By default, RASA components expect full utterances, not single words. In addition to the challenge of making components in the nlu pipeline process word-by-word, we encounter another important problem: there is no ready-made signal for the end of an utterance. To solve this, we added functionality to signal the end of an utterance; this signal can be triggered by any component, including the speech recognizer where it has traditionally originated via endpointing. With this flexibility, any component (or set of components) can make a more informed decision about when an utterance is complete (e.g., if a user is uttering installments, endpointing may occur, but the intent behind the user's installments is not yet complete; the decision as to when an utterance is complete can be made by the nlu or dialogue manager).
Training RASA nlu proceeds as explained above (i.e., non-incrementally). For runtime, processing incrementally through the RASA pipeline is challenging because each component must have provisions for handling word-level input and must be able to handle ADD and REVOKE iu operations. Each component in a pipeline, for example, as depicted in Figure FIGREF7, must operate in lock-step with the others: a word is ADDed to Component A, which begins processing immediately, then ADDs its processing result to Component B; Component B processes and passes output to Component C, all before the next word is produced for Component A.
Incrementalizing RASA Components
We now explain how we altered specific RASA components to make them work incrementally.
The Message class in RASA nlu is the main message bus between components in the pipeline. Message follows a blackboard approach to passing information between components. For example, in a pipeline containing a tokenizer, intent classifier, and entity extractor, each of the components would store the tokens, intent class, and entities in the Message object, respectively. Our modifications to Message were minimal; we simply used it to store ius and corresponding edit types (i.e., ADD or REVOKE).
In order to incrementalize RASA nlu, we extended the base Component to make an addition of a new component, IncrementalComponent. A user who defines their own IncrementalComponent understands the difference in functionality, notably in the parse method. At runtime, a non-incremental component expects a full utterance, whereas an incremental one expects only a single iu. Because non-incremental components expect the entire utterance, they have no need to save any internal state across process calls, and can clear any internal data at the end of the method. However, with incremental components, that workflow changes; each call to process must maintain its internal state, so that it can be updated as it receives new ius. Moreover, IncrementalComponents additionally have a new_utterance method. In non-incremental systems, the call to process implicitly signals that the utterance has been completed, and there is no need to store internal data across process calls, whereas incremental systems lose that signal as a result. The new_utterance method acts as that signal.
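To make this concrete, a minimal sketch of what an IncrementalComponent subclass might look like is shown below; the import path and method signatures follow the RASA Component interface described above, but the exact details are assumptions.
from rasa_nlu.components import Component
class IncrementalComponent(Component):
    # Receives one iu per process call and must keep internal state across calls;
    # new_utterance signals that the current utterance is complete.
    def new_utterance(self):
        raise NotImplementedError("subclasses clear their internal state here")
    def process(self, message, **kwargs):
        # message carries the latest (word, edit_type) iu rather than a full utterance
        raise NotImplementedError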
The Interpreter class in RASA nlu is the main interface between user input (e.g., asr) and the series of components in the pipeline. On training, the Interpreter prepares the training data, and serially calls train on each of the components in the pipeline. Similarly, to process input, one uses the Interpreter’s parse method, where the Interpreter prepares the input (i.e., the ongoing utterance) and serially calls process on the components in the pipeline (analogous to left buffer updates in the iu framework). As a result of its design, we were able to leverage the Interpreter class for incremental processing, notably because of its use of a persistent Message object as a bus of communication between Components.
As with our implementation of the IncrementalComponent class, we created the IncrementalInterpreter. The IncrementalInterpreter class adds two new methods:
new_utterance
parse_incremental
The new_utterance method is fairly straightforward; it clears RASA nlu’s internal Message object that is shared between components, and calls each IncrementalComponent in the pipeline’s new_utterance method, signaling that the utterance has been completed, and for each component to clear their internal states. The parse_incremental method takes the iu from the calling input (e.g., asr), and appends it to a list of previous ius being stored in the Message object. After the iu has been added to the Message, the IncrementalInterpreter calls each component’s process method, where they can operate on the newest iu. This was intentionally designed to be generalizable, so that future incremental components can use different formats or edit types for their respective iu framework implementation.
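A schematic version of these two methods could look as follows; the import paths and attribute names (pipeline, message, iu_list) are chosen for illustration and are assumptions rather than the released API.
from rasa_nlu.model import Interpreter
from rasa_nlu.training_data import Message
class IncrementalInterpreter(Interpreter):
    def new_utterance(self):
        # Reset the shared Message object and notify every component.
        self.message = Message("")
        for component in self.pipeline:
            component.new_utterance()
    def parse_incremental(self, iu):
        # iu is e.g. ("word", "add") or ("word", "revoke").
        ius = self.message.get("iu_list") or []
        self.message.set("iu_list", ius + [iu])
        for component in self.pipeline:
            component.process(self.message)
        return self.message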
Incremental Intent Recognizer Components
With the incremental framework in place, we further developed a sample incremental component to test the functionality of our changes. For this, we used the Simple Incremental Update Model (sium) described in BIBREF11 . This model is a generative factored joint distribution, which uses a simple Bayesian update as new words are added. At each iu, a distribution of intents and entities are generated with confidence scores, and the intent can be classified at each step as the output with the highest confidence value. Entities on the other hand, can be extracted if their confidence exceeds a predetermined threshold.
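As an illustration of the update step only (sium proper is a factored joint distribution over intents and entities; the naive word-given-intent table and variable names here are simplifying assumptions), the intent belief can be maintained as follows.
class SiumIntentState:
    def __init__(self, intents, p_word_given_intent, smoothing=1e-6):
        self.prior = {i: 1.0 / len(intents) for i in intents}
        self.p_word_given_intent = p_word_given_intent   # estimated from training data
        self.smoothing = smoothing
        self.belief = dict(self.prior)
    def add_word(self, word):
        # Bayesian update: P(intent | w_1..w_t) is proportional to P(w_t | intent) * P(intent | w_1..w_t-1)
        for intent in self.belief:
            self.belief[intent] *= self.p_word_given_intent[intent].get(word, self.smoothing)
        total = sum(self.belief.values())
        self.belief = {i: p / total for i, p in self.belief.items()}
        return max(self.belief, key=self.belief.get)      # current top intent
    def new_utterance(self):
        self.belief = dict(self.prior)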
Following khouzaimi-laroche-lefevre:2014:W14-43, we incrementalized RASA's existing Tensorflow Embedding component for intent recognition as an incremental component. The pipeline consists of a whitespace tokenizer, scikit-learn Conditional Random Field (crf) entity extractor, Bag-of-Words featurizer, and lastly, a TensorFlow Neural Network for intent classification. To start with incrementalizing, we modified the whitespace tokenizer to work on word-level increments, rather than the entire utterance. For the crf entity extractor, we modified it to update the entities up to that point in the utterance with each process call, and then modified the Bag-of-Words featurizer to update its embeddings with each process call by vectorizing the individual word in the iu, and summing that vector with the existing embeddings. At each word iu increment, we treat the entire utterance prefix to that point as a full utterance as input to the Tensorflow Embeddings component, which returns a distribution over intents. This process is repeated until all words in the utterance have been added to the prefix. In this way, the component differs from sium in that it doesn't update its internal state; rather, it treats each prefix as a full utterance (i.e., so-called restart-incrementality).
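In contrast to the update-incremental sium component, the restart-incremental behaviour amounts to re-parsing each growing prefix with the standard pipeline, roughly as sketched here (interpreter stands in for a trained RASA Interpreter; the function name is illustrative).
def restart_incremental_parse(interpreter, words):
    results = []
    for t in range(1, len(words) + 1):
        prefix = " ".join(words[:t])
        results.append(interpreter.parse(prefix))   # full pipeline run on the prefix
    return results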
Experiment
In this section, we explain a simple experiment we conducted to evaluate our work in incrementalizing RASA by using the update-incremental sium and restart-incremental tensorflow-embedding modules in a known nlu task.
Data, Task, Metrics
To evaluate the performance of our approach, we used a subset of the SNIPS BIBREF12 dataset, which is readily available in RASA nlu format. Our training data consisted of 700 utterances, across 7 different intents (AddToPlaylist, BookRestaurant, GetWeather, PlayMusic, RateBook, SearchCreativeWork, and SearchScreeningEvent). In order to test our implementation of incremental components, we initially benchmarked their non-incremental counterparts, and used that as a baseline for the incremental versions (to treat the sium component as non-incremental, we simply applied all words in each utterance to it and obtained the distribution over intents after each full utterance had been processed).
We use intent and entity recognition as our task, with accuracy as the metric. To verify that the components worked as intended, we used the IncrementalInterpreter to parse the messages as individual ius. To confirm that REVOKE worked as intended, we injected random incorrect words at a rate of 40%, followed by subsequent revokes, checking that an ADD followed by a REVOKE resulted in the same output as if the incorrect word had never been added. While we implemented both an update-incremental and a restart-incremental RASA nlu component, the results of the two cannot be directly compared for accuracy because the underlying models differ greatly (i.e., sium is generative, whereas Tensorflow Embedding is a discriminative neural network; moreover, sium was designed as a reference resolution component for physical objects, not abstract intents). Nor do these results support an argument for update- vs. restart-incremental approaches, as the underlying architectures of the models vary greatly.
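One way to exercise this ADD/REVOKE check is sketched below: with some probability a wrong word is added and immediately revoked, and the final output is compared against a clean run of the same utterance. The 40% rate follows the description above; the component interface matches the earlier sketches and is otherwise an assumption.
import random

def check_revoke_consistency(component, utterance_words, noise_rate=0.4, seed=0):
    """Compare outputs with and without injected-then-revoked noise words."""
    rng = random.Random(seed)
    component.new_utterance()
    noisy_output = None
    for word in utterance_words:
        if rng.random() < noise_rate:
            component.add_word("badword")  # simulated incorrect word (ADD)
            component.revoke_word()        # immediately taken back (REVOKE)
        noisy_output = component.add_word(word)
    # Reference run: the same utterance with no injections at all.
    component.new_utterance()
    reference_output = None
    for word in utterance_words:
        reference_output = component.add_word(word)
    return noisy_output == reference_output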
Results
The results of our evaluation can be found in Table TABREF14 . They show that our incremental implementation works as intended, as the incremental and non-incremental versions of each component yielded the same results. While there is a small difference in F1 score between the non-incremental and incremental components, 1% is well within a reasonable tolerance given that there is some randomness in training the underlying model.
Conclusion
RASA nlu is a useful and well-evaluated toolkit for developing nlu components in sds and chatbot systems. We extended RASA by adding provisions for incremental processing generally, and we implemented two components for intent recognition that use update- and restart-incremental approaches, respectively. Our results show that the incrementalization worked as expected. For ongoing and future work, we plan to develop an update-incremental counterpart to the Tensorflow Embeddings component that uses a recurrent neural network to maintain its state. We will further evaluate our work with incremental asr in live dialogue tasks. We will make our code available upon acceptance of this publication.
Appendix
language: "en"
pipeline:
- name: "intent_featurizer_count_vectors"
- name: "intent_..._tensorflow_embedding"
  intent_tokenization_flag: true
  intent_split_symbol: "+" | The changes are evaluated based on accuracy of intent and entity recognition on SNIPS dataset
83f24e4bbf9de82d560cdde64b91d6d672def6bf | 83f24e4bbf9de82d560cdde64b91d6d672def6bf_0 | Q: What baseline is used for the verb classification experiments?
Text: Introduction
Playing a key role in conveying the meaning of a sentence, verbs are famously complex. They display a wide range of syntactic-semantic behaviour, expressing the semantics of an event as well as relational information among its participants BIBREF0 , BIBREF1 , BIBREF2 .
Lexical resources which capture the variability of verbs are instrumental for many Natural Language Processing (NLP) applications. One of the richest verb resources currently available for English is VerbNet BIBREF3 , BIBREF4 . Based on the work of Levin Levin:1993book, this largely hand-crafted taxonomy organises verbs into classes on the basis of their shared syntactic-semantic behaviour. Providing a useful level of generalisation for many NLP tasks, VerbNet has been used to support semantic role labelling BIBREF5 , BIBREF6 , semantic parsing BIBREF7 , word sense disambiguation BIBREF8 , discourse parsing BIBREF9 , information extraction BIBREF10 , text mining applications BIBREF11 , BIBREF12 , research into human language acquisition BIBREF13 , and other tasks.
This benefit for English NLP has motivated the development of VerbNets for languages such as Spanish and Catalan BIBREF14 , Czech BIBREF15 , and Mandarin BIBREF16 . However, end-to-end manual resource development using Levin's methodology is extremely time consuming, even when supported by translations of English VerbNet classes to other languages BIBREF17 , BIBREF18 . Approaches which aim to learn verb classes automatically offer an attractive alternative. However, existing methods rely on carefully engineered features that are extracted using sophisticated language-specific resources BIBREF19 , BIBREF17 , BIBREF20 , ranging from accurate parsers to pre-compiled subcategorisation frames BIBREF21 , BIBREF22 , BIBREF23 . Such methods are limited to a small set of resource-rich languages.
It has been argued that VerbNet-style classification has a strong cross-lingual element BIBREF24 , BIBREF2 . In support of this argument, Majewska:2017lre have shown that English VerbNet has high translatability across different, even typologically diverse languages. Based on this finding, we propose an automatic approach which exploits readily available annotations for English to facilitate efficient, large-scale development of VerbNets for a wide set of target languages.
Recently, unsupervised methods for inducing distributed word vector space representations or word embeddings BIBREF25 have been successfully applied to a plethora of NLP tasks BIBREF26 , BIBREF27 , BIBREF28 . These methods offer an elegant way to learn directly from large corpora, bypassing the feature engineering step and the dependence on mature NLP pipelines (e.g., POS taggers, parsers, extraction of subcategorisation frames). In this work, we demonstrate how these models can be used to support automatic verb class induction. Moreover, we show that these models offer the means to exploit inherent cross-lingual links in VerbNet-style classification in order to guide the development of new classifications for resource-lean languages. To the best of our knowledge, this proposition has not been investigated in previous work.
There has been little work on assessing the suitability of embeddings for capturing rich syntactic-semantic phenomena. One challenge is their reliance on the distributional hypothesis BIBREF29 , which coalesces fine-grained syntactic-semantic relations between words into a broad relation of semantic relatedness (e.g., coffee:cup) BIBREF30 , BIBREF31 . This property has an adverse effect when word embeddings are used in downstream tasks such as spoken language understanding BIBREF32 , BIBREF33 or dialogue state tracking BIBREF34 , BIBREF35 . It could have a similar effect on verb classification, which relies on the similarity in syntactic-semantic properties of verbs within a class. In summary, we explore three important questions in this paper:
(Q1) Given their fundamental dependence on the distributional hypothesis, to what extent can unsupervised methods for inducing vector spaces facilitate the automatic induction of VerbNet-style verb classes across different languages?
(Q2) Can one boost verb classification for lower-resource languages by exploiting general-purpose cross-lingual resources such as BabelNet BIBREF36 , BIBREF37 or bilingual dictionaries such as PanLex BIBREF38 to construct better word vector spaces for these languages?
(Q3) Based on the stipulated cross-linguistic validity of VerbNet-style classification, can one exploit rich sets of readily available annotations in one language (e.g., the full English VerbNet) to automatically bootstrap the creation of VerbNets for other languages? In other words, is it possible to exploit a cross-lingual vector space to transfer VerbNet knowledge from a resource-rich to a resource-lean language?
To investigate Q1, we induce standard distributional vector spaces BIBREF39 , BIBREF40 from large monolingual corpora in English and six target languages. As expected, the results obtained with this straightforward approach show positive trends, but at the same time reveal its limitations for all the languages involved. Therefore, the focus of our work shifts to Q2 and Q3. The problem of inducing VerbNet-oriented embeddings is framed as vector space specialisation using the available external resources: BabelNet or PanLex, and (English) VerbNet. Formalised as an instance of post-processing semantic specialisation approaches BIBREF41 , BIBREF34 , our procedure is steered by two sets of linguistic constraints: 1) cross-lingual (translation) links between languages extracted from BabelNet (targeting Q2); and 2) the available VerbNet annotations for a resource-rich language. The two sets of constraints jointly target Q3.
The main goal of vector space specialisation is to pull examples standing in desirable relations, as described by the constraints, closer together in the transformed vector space. The specialisation process can capitalise on the knowledge of VerbNet relations in the source language (English) by using translation pairs to transfer that knowledge to each of the target languages. By constructing shared bilingual vector spaces, our method facilitates the transfer of semantic relations derived from VerbNet to the vector spaces of resource-lean target languages. This idea is illustrated by Fig. FIGREF2 .
Our results indicate that cross-lingual connections yield improved verb classes across all six target languages (thus answering Q2). Moreover, a consistent and significant boost in verb classification performance is achieved by propagating the VerbNet-style information from the source language (English) to any other target language (e.g., Italian, Croatian, Polish, Finnish) for which no VerbNet-style information is available during the fine-tuning process (thus answering Q3). We report state-of-the-art verb classification performance for all six languages in our experiments. For instance, we improve the state-of-the-art F-1 score from prior work from 0.55 to 0.79 for French, and from 0.43 to 0.74 for Brazilian Portuguese.
Vector Space Specialisation
Our departure point is a state-of-the-art specialisation model for fine-tuning vector spaces termed Paragram BIBREF49 . The Paragram procedure injects similarity constraints between word pairs in order to make their vector space representations more similar; we term these the Attract constraints. Let $V = V_s \cup V_t$ be the vocabulary consisting of the source language and target language vocabularies $V_s$ and $V_t$, respectively. Let $\mathcal{A}$ be the set of word pairs standing in desirable lexical relations; these include: 1) verb pairs from the same VerbNet class (e.g. (en_transport, en_transfer) from verb class send-11.1); and 2) the cross-lingual synonymy pairs (e.g. (en_peace, fi_rauha)). Given the initial distributional space and collections of such Attract pairs $\mathcal{A}$, the model gradually modifies the space to bring the designated word vectors closer together, working in mini-batches of size $k$. The method's cost function can be expressed as:
$$ C(\mathcal{B}_A) = A(\mathcal{B}_A) + \mathit{Reg}(\mathcal{B}_A) $$
The first term of the method's cost function (i.e., $A(\mathcal{B}_A)$) pulls the Attract examples $(x_l, x_r)$ closer together (see Fig. FIGREF2 for an illustration). $\mathcal{B}_A$ refers to the current mini-batch of Attract constraints. This term is expressed as follows:
$$ A(\mathcal{B}_A) = \sum_{(x_l, x_r) \in \mathcal{B}_A} \left[ \tau\!\left(\delta_{att} + \mathbf{x}_l \mathbf{t}_l - \mathbf{x}_l \mathbf{x}_r\right) + \tau\!\left(\delta_{att} + \mathbf{x}_r \mathbf{t}_r - \mathbf{x}_l \mathbf{x}_r\right) \right] $$
$\tau(z) = \max(0, z)$ is the standard rectified linear unit or the hinge loss function BIBREF50 , BIBREF51 . $\delta_{att}$ is the “attract” margin: it determines how much vectors of words from Attract constraints should be closer to each other than to their negative examples. The negative example $\mathbf{t}_l$ (or $\mathbf{t}_r$) for each word $x_l$ (or $x_r$) in any Attract pair is always the vector closest to $\mathbf{x}_l$ (or $\mathbf{x}_r$) taken from the pairs in the current mini-batch, distinct from the other word paired with it, and the word itself.
The second term $\mathit{Reg}(\mathcal{B}_A)$ is the regularisation which aims to retain the semantic information encoded in the initial distributional space as long as this information does not contradict the used Attract constraints. Let $\widehat{\mathbf{x}}$ refer to the initial distributional vector of the word $x$ and let $V(\mathcal{B}_A)$ be the set of all word vectors present in the given mini-batch. If $\lambda_{reg}$ denotes the L2 regularisation constant, this term can be expressed as:
$$ \mathit{Reg}(\mathcal{B}_A) = \lambda_{reg} \sum_{\mathbf{x} \in V(\mathcal{B}_A)} \left\Vert \widehat{\mathbf{x}} - \mathbf{x} \right\Vert_2 $$
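For illustration, a minimal NumPy sketch of the Attract term for one mini-batch is given below, assuming unit-normalised vectors so that dot products serve as similarities; the margin and regularisation constants, the negative-example search, and all variable names are assumptions rather than the exact implementation.
import numpy as np

def attract_cost(embeddings, batch, delta_att=0.6, lambda_reg=1e-9, initial=None):
    """Attract cost for one mini-batch; constants are illustrative defaults."""
    relu = lambda z: max(0.0, z)
    words_in_batch = {w for pair in batch for w in pair}  # assumes several pairs
    cost = 0.0
    for w_l, w_r in batch:
        x_l, x_r = embeddings[w_l], embeddings[w_r]

        def nearest_negative(word, partner):
            # Closest other in-batch vector, excluding the word and its partner.
            candidates = [v for v in words_in_batch if v not in (word, partner)]
            return max(candidates, key=lambda v: embeddings[v] @ embeddings[word])

        t_l = embeddings[nearest_negative(w_l, w_r)]
        t_r = embeddings[nearest_negative(w_r, w_l)]
        cost += relu(delta_att + x_l @ t_l - x_l @ x_r)
        cost += relu(delta_att + x_r @ t_r - x_l @ x_r)
    if initial is not None:
        # L2 regularisation pulling vectors back towards their initial positions.
        cost += lambda_reg * sum(
            np.linalg.norm(initial[w] - embeddings[w]) for w in words_in_batch)
    return cost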
The fine-tuning procedure effectively blends the knowledge from external resources (i.e., the input Attract set of constraints) with distributional information extracted directly from large corpora. We show how to propagate annotations from a knowledge source such as VerbNet from source to target by combining two types of constraints within the specialisation framework: a) cross-lingual (translation) links between languages, and b) available VerbNet annotations in a resource-rich language transformed into pairwise constraints. Cross-lingual constraints such as (pl_wojna, it_guerra) are extracted from BabelNet BIBREF36 , a large-scale resource which groups words into cross-lingual babel synsets (and is currently available for 271 languages). The wide and steadily growing coverage of languages in BabelNet means that our proposed framework promises to support the transfer of VerbNet-style information to numerous target languages (with increasingly high accuracy).
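In practice, such cross-lingual Attract pairs can be produced by a simple pass over a translation mapping exported from BabelNet; the sketch below assumes this mapping has already been extracted offline, since the resource's own API is not shown here.
def cross_lingual_constraints(translations, src_prefix="en_", tgt_prefix="it_"):
    """translations: source word -> list of target-language translations."""
    pairs = set()
    for src_word, tgt_words in translations.items():
        for tgt_word in tgt_words:
            pairs.add((src_prefix + src_word, tgt_prefix + tgt_word))
    return pairs

# Example: (en_war, it_guerra) becomes one cross-lingual Attract constraint.
example_pairs = cross_lingual_constraints({"war": ["guerra"], "peace": ["pace"]})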
To establish that the proposed transfer approach is in fact independent of the chosen cross-lingual information source, we also experiment with another cross-lingual dictionary: PanLex BIBREF38 , which was used in prior work on cross-lingual word vector spaces BIBREF52 , BIBREF53 . This dictionary currently covers around 1,300 language varieties with over 12 million expressions, thus offering support also for low-resource transfer settings.
VerbNet constraints are extracted from the English VerbNet class structure in a straightforward manner. For each class $C$ from the 273 VerbNet classes, we simply take the set of all $n_C$ verbs $\{v_1, \ldots, v_{n_C}\}$ associated with that class, including its subclasses, and generate all unique pairs $(v_i, v_j)$ so that $v_i, v_j \in C$ and $v_i \neq v_j$. Example VerbNet pairwise constraints are shown in Tab. TABREF15 . Note that VerbNet classes in practice contain verb instances standing in a variety of lexical relations, including synonyms, antonyms, troponyms, and hypernyms; the class membership is determined on the basis of connections between the syntactic patterns and the underlying semantic relations BIBREF54 , BIBREF55 .
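Generating the VerbNet constraints then amounts to enumerating unique verb pairs per class, for instance with itertools.combinations; the class-to-verbs mapping in the sketch below is a toy example rather than actual VerbNet content.
from itertools import combinations

def verbnet_constraints(classes, prefix="en_"):
    """classes: VerbNet class id -> verbs it contains (including subclasses)."""
    pairs = set()
    for class_id, verbs in classes.items():
        for v_i, v_j in combinations(sorted(set(verbs)), 2):
            pairs.add((prefix + v_i, prefix + v_j))
    return pairs

# Toy example for class send-11.1 (not the full member list).
example_pairs = verbnet_constraints({"send-11.1": ["send", "ship", "transport", "transfer"]})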
Clustering Algorithm
Given the initial distributional or specialised collection of target language vectors $\mathbf{X}_t$, we apply an off-the-shelf clustering algorithm on top of these vectors in order to group verbs into classes. Following prior work BIBREF56 , BIBREF57 , BIBREF17 , we employ the MNCut spectral clustering algorithm BIBREF58 , which has wide applicability in similar NLP tasks that involve high-dimensional feature spaces BIBREF59 , BIBREF60 , BIBREF18 . Again, following prior work BIBREF17 , BIBREF61 , we estimate the number of clusters $K$ using the self-tuning method of Zelnik-Manor and Perona (2004). This algorithm finds the optimal number by minimising a cost function based on the eigenvector structure of the word similarity matrix. We refer the reader to the relevant literature for further details.
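As a rough stand-in for this step (not the exact MNCut algorithm or the self-tuning estimator), the sketch below clusters the specialised verb vectors with scikit-learn's spectral clustering over a cosine-similarity affinity matrix, with the number of clusters supplied by whichever estimator is used.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity

def cluster_verbs(verbs, vectors, n_clusters):
    """Cluster verbs from their (specialised) vectors; n_clusters is estimated elsewhere."""
    X = np.vstack([vectors[v] for v in verbs])
    affinity = np.clip(cosine_similarity(X), 0.0, 1.0)  # non-negative similarities
    labels = SpectralClustering(
        n_clusters=n_clusters, affinity="precomputed", random_state=0
    ).fit_predict(affinity)
    clusters = {}
    for verb, label in zip(verbs, labels):
        clusters.setdefault(label, []).append(verb)
    return clusters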
Results and Discussion
Cross-Lingual Transfer Model F-1 verb classification scores for the six target languages with different sets of constraints are summarised in Fig. FIGREF29 . We can draw several interesting conclusions. First, the strongest results on average are obtained with the model which transfers the VerbNet knowledge from English (as a resource-rich language) to the resource-lean target language (providing an answer to question Q3, Sect. SECREF1 ). These improvements are visible across all target languages, empirically demonstrating the cross-lingual nature of VerbNet-style classifications. Second, using cross-lingual constraints alone (XLing) yields strong gains over initial distributional spaces (answering Q1 and Q2). Fig. FIGREF29 also shows that cross-lingual similarity constraints are more beneficial than the monolingual ones, despite a larger total number of the monolingual constraints in each language (see Tab. TABREF18 ). This suggests that such cross-lingual similarity links are strong implicit indicators of class membership. Namely, target language words which map to the same source language word are likely to be synonyms and consequently end up in the same verb class in the target language. However, the cross-lingual links are even more useful as means for transferring the VerbNet knowledge, as evidenced by additional gains with XLing+VerbNet-EN.
The absolute classification scores are the lowest for the two Slavic languages: pl and hr. This may be partially explained by the lowest number of cross-lingual constraints for the two languages covering only a subset of their entire vocabularies (see Tab. TABREF18 and compare the total number of constraints for hr and pl to the numbers for e.g. fi or fr). Another reason for weaker performance of these two languages could be their rich morphology, which induces data sparsity both in the initial vector space estimation and in the coverage of constraints.
Further Discussion and Future Work
This work has proven the potential of transferring lexical resources from resource-rich to resource-poor languages using general-purpose cross-lingual dictionaries and bilingual vector spaces as means of transfer within a semantic specialisation framework. However, we believe that the proposed basic framework may be upgraded and extended across several research paths in future work.
First, in the current work we have operated with standard single-sense/single-prototype representations, thus effectively disregarding the problem of verb polysemy. While several polysemy-aware verb classification models for English were developed recently BIBREF79 , BIBREF80 , the current lack of polysemy-aware evaluation sets in other languages impedes this line of research. Evaluation issues aside, one idea for future work is to use the Attract-Repel specialisation framework for sense-aware cross-lingual transfer relying on recently developed multi-sense/prototype word representations BIBREF81 , BIBREF82 .
Another challenge is to apply the idea from this work to enable cross-lingual transfer of other structured lexical resources available in English such as FrameNet BIBREF44 , PropBank BIBREF45 , and VerbKB BIBREF83 . Other potential research avenues include porting the approach to other typologically diverse languages and truly low-resource settings (e.g., with only limited amounts of parallel data), as well as experiments with other distributional spaces, e.g. BIBREF84 . Further refinements of the specialisation and clustering algorithms may also result in improved verb class induction.
Conclusion
We have presented a novel cross-lingual transfer model which enables the automatic induction of VerbNet-style verb classifications across multiple languages. The transfer is based on a word vector space specialisation framework, utilised to directly model the assumption of cross-linguistic validity of VerbNet-style classifications. Our results indicate strong improvements in verb classification accuracy across all six target languages explored. All automatically induced VerbNets are available at:
github.com/cambridgeltl/verbnets.
Acknowledgments
This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). The authors are grateful to the entire LEXICAL team, especially to Roi Reichart, and also to the three anonymous reviewers for their helpful and constructive suggestions. | Unanswerable |
6b8a3100895f2192e08973006474428319dc298e | 6b8a3100895f2192e08973006474428319dc298e_0 | Q: What clustering algorithm is used on top of the VerbNet-specialized representations?
Text: Introduction
Playing a key role in conveying the meaning of a sentence, verbs are famously complex. They display a wide range of syntactic-semantic behaviour, expressing the semantics of an event as well as relational information among its participants BIBREF0 , BIBREF1 , BIBREF2 .
Lexical resources which capture the variability of verbs are instrumental for many Natural Language Processing (NLP) applications. One of the richest verb resources currently available for English is VerbNet BIBREF3 , BIBREF4 . Based on the work of Levin Levin:1993book, this largely hand-crafted taxonomy organises verbs into classes on the basis of their shared syntactic-semantic behaviour. Providing a useful level of generalisation for many NLP tasks, VerbNet has been used to support semantic role labelling BIBREF5 , BIBREF6 , semantic parsing BIBREF7 , word sense disambiguation BIBREF8 , discourse parsing BIBREF9 , information extraction BIBREF10 , text mining applications BIBREF11 , BIBREF12 , research into human language acquisition BIBREF13 , and other tasks.
This benefit for English NLP has motivated the development of VerbNets for languages such as Spanish and Catalan BIBREF14 , Czech BIBREF15 , and Mandarin BIBREF16 . However, end-to-end manual resource development using Levin's methodology is extremely time consuming, even when supported by translations of English VerbNet classes to other languages BIBREF17 , BIBREF18 . Approaches which aim to learn verb classes automatically offer an attractive alternative. However, existing methods rely on carefully engineered features that are extracted using sophisticated language-specific resources BIBREF19 , BIBREF17 , BIBREF20 , ranging from accurate parsers to pre-compiled subcategorisation frames BIBREF21 , BIBREF22 , BIBREF23 . Such methods are limited to a small set of resource-rich languages.
It has been argued that VerbNet-style classification has a strong cross-lingual element BIBREF24 , BIBREF2 . In support of this argument, Majewska:2017lre have shown that English VerbNet has high translatability across different, even typologically diverse languages. Based on this finding, we propose an automatic approach which exploits readily available annotations for English to facilitate efficient, large-scale development of VerbNets for a wide set of target languages.
Recently, unsupervised methods for inducing distributed word vector space representations or word embeddings BIBREF25 have been successfully applied to a plethora of NLP tasks BIBREF26 , BIBREF27 , BIBREF28 . These methods offer an elegant way to learn directly from large corpora, bypassing the feature engineering step and the dependence on mature NLP pipelines (e.g., POS taggers, parsers, extraction of subcategorisation frames). In this work, we demonstrate how these models can be used to support automatic verb class induction. Moreover, we show that these models offer the means to exploit inherent cross-lingual links in VerbNet-style classification in order to guide the development of new classifications for resource-lean languages. To the best of our knowledge, this proposition has not been investigated in previous work.
There has been little work on assessing the suitability of embeddings for capturing rich syntactic-semantic phenomena. One challenge is their reliance on the distributional hypothesis BIBREF29 , which coalesces fine-grained syntactic-semantic relations between words into a broad relation of semantic relatedness (e.g., coffee:cup) BIBREF30 , BIBREF31 . This property has an adverse effect when word embeddings are used in downstream tasks such as spoken language understanding BIBREF32 , BIBREF33 or dialogue state tracking BIBREF34 , BIBREF35 . It could have a similar effect on verb classification, which relies on the similarity in syntactic-semantic properties of verbs within a class. In summary, we explore three important questions in this paper:
(Q1) Given their fundamental dependence on the distributional hypothesis, to what extent can unsupervised methods for inducing vector spaces facilitate the automatic induction of VerbNet-style verb classes across different languages?
(Q2) Can one boost verb classification for lower-resource languages by exploiting general-purpose cross-lingual resources such as BabelNet BIBREF36 , BIBREF37 or bilingual dictionaries such as PanLex BIBREF38 to construct better word vector spaces for these languages?
(Q3) Based on the stipulated cross-linguistic validity of VerbNet-style classification, can one exploit rich sets of readily available annotations in one language (e.g., the full English VerbNet) to automatically bootstrap the creation of VerbNets for other languages? In other words, is it possible to exploit a cross-lingual vector space to transfer VerbNet knowledge from a resource-rich to a resource-lean language?
To investigate Q1, we induce standard distributional vector spaces BIBREF39 , BIBREF40 from large monolingual corpora in English and six target languages. As expected, the results obtained with this straightforward approach show positive trends, but at the same time reveal its limitations for all the languages involved. Therefore, the focus of our work shifts to Q2 and Q3. The problem of inducing VerbNet-oriented embeddings is framed as vector space specialisation using the available external resources: BabelNet or PanLex, and (English) VerbNet. Formalised as an instance of post-processing semantic specialisation approaches BIBREF41 , BIBREF34 , our procedure is steered by two sets of linguistic constraints: 1) cross-lingual (translation) links between languages extracted from BabelNet (targeting Q2); and 2) the available VerbNet annotations for a resource-rich language. The two sets of constraints jointly target Q3.
The main goal of vector space specialisation is to pull examples standing in desirable relations, as described by the constraints, closer together in the transformed vector space. The specialisation process can capitalise on the knowledge of VerbNet relations in the source language (English) by using translation pairs to transfer that knowledge to each of the target languages. By constructing shared bilingual vector spaces, our method facilitates the transfer of semantic relations derived from VerbNet to the vector spaces of resource-lean target languages. This idea is illustrated by Fig. FIGREF2 .
Our results indicate that cross-lingual connections yield improved verb classes across all six target languages (thus answering Q2). Moreover, a consistent and significant boost in verb classification performance is achieved by propagating the VerbNet-style information from the source language (English) to any other target language (e.g., Italian, Croatian, Polish, Finnish) for which no VerbNet-style information is available during the fine-tuning process (thus answering Q3). We report state-of-the-art verb classification performance for all six languages in our experiments. For instance, we improve the state-of-the-art F-1 score from prior work from 0.55 to 0.79 for French, and from 0.43 to 0.74 for Brazilian Portuguese.
Vector Space Specialisation
Our departure point is a state-of-the-art specialisation model for fine-tuning vector spaces termed Paragram BIBREF49 . The Paragram procedure injects similarity constraints between word pairs in order to make their vector space representations more similar; we term these the Attract constraints. Let $V = V_s \cup V_t$ be the vocabulary consisting of the source language and target language vocabularies $V_s$ and $V_t$, respectively. Let $\mathcal{A}$ be the set of word pairs standing in desirable lexical relations; these include: 1) verb pairs from the same VerbNet class (e.g. (en_transport, en_transfer) from verb class send-11.1); and 2) the cross-lingual synonymy pairs (e.g. (en_peace, fi_rauha)). Given the initial distributional space and collections of such Attract pairs $\mathcal{A}$, the model gradually modifies the space to bring the designated word vectors closer together, working in mini-batches of size $k$. The method's cost function can be expressed as:
$$ C(\mathcal{B}_A) = A(\mathcal{B}_A) + \mathit{Reg}(\mathcal{B}_A) $$
The first term of the method's cost function (i.e., $A(\mathcal{B}_A)$) pulls the Attract examples $(x_l, x_r)$ closer together (see Fig. FIGREF2 for an illustration). $\mathcal{B}_A$ refers to the current mini-batch of Attract constraints. This term is expressed as follows:
$$ A(\mathcal{B}_A) = \sum_{(x_l, x_r) \in \mathcal{B}_A} \left[ \tau\!\left(\delta_{att} + \mathbf{x}_l \mathbf{t}_l - \mathbf{x}_l \mathbf{x}_r\right) + \tau\!\left(\delta_{att} + \mathbf{x}_r \mathbf{t}_r - \mathbf{x}_l \mathbf{x}_r\right) \right] $$
$\tau(z) = \max(0, z)$ is the standard rectified linear unit or the hinge loss function BIBREF50 , BIBREF51 . $\delta_{att}$ is the “attract” margin: it determines how much vectors of words from Attract constraints should be closer to each other than to their negative examples. The negative example $\mathbf{t}_l$ (or $\mathbf{t}_r$) for each word $x_l$ (or $x_r$) in any Attract pair is always the vector closest to $\mathbf{x}_l$ (or $\mathbf{x}_r$) taken from the pairs in the current mini-batch, distinct from the other word paired with it, and the word itself.
The second term $\mathit{Reg}(\mathcal{B}_A)$ is the regularisation which aims to retain the semantic information encoded in the initial distributional space as long as this information does not contradict the used Attract constraints. Let $\widehat{\mathbf{x}}$ refer to the initial distributional vector of the word $x$ and let $V(\mathcal{B}_A)$ be the set of all word vectors present in the given mini-batch. If $\lambda_{reg}$ denotes the L2 regularisation constant, this term can be expressed as:
$$ \mathit{Reg}(\mathcal{B}_A) = \lambda_{reg} \sum_{\mathbf{x} \in V(\mathcal{B}_A)} \left\Vert \widehat{\mathbf{x}} - \mathbf{x} \right\Vert_2 $$
The fine-tuning procedure effectively blends the knowledge from external resources (i.e., the input Attract set of constraints) with distributional information extracted directly from large corpora. We show how to propagate annotations from a knowledge source such as VerbNet from source to target by combining two types of constraints within the specialisation framework: a) cross-lingual (translation) links between languages, and b) available VerbNet annotations in a resource-rich language transformed into pairwise constraints. Cross-lingual constraints such as (pl_wojna, it_guerra) are extracted from BabelNet BIBREF36 , a large-scale resource which groups words into cross-lingual babel synsets (and is currently available for 271 languages). The wide and steadily growing coverage of languages in BabelNet means that our proposed framework promises to support the transfer of VerbNet-style information to numerous target languages (with increasingly high accuracy).
To establish that the proposed transfer approach is in fact independent of the chosen cross-lingual information source, we also experiment with another cross-lingual dictionary: PanLex BIBREF38 , which was used in prior work on cross-lingual word vector spaces BIBREF52 , BIBREF53 . This dictionary currently covers around 1,300 language varieties with over 12 million expressions, thus offering support also for low-resource transfer settings.
VerbNet constraints are extracted from the English VerbNet class structure in a straightforward manner. For each class $C$ from the 273 VerbNet classes, we simply take the set of all $n_C$ verbs $\{v_1, \ldots, v_{n_C}\}$ associated with that class, including its subclasses, and generate all unique pairs $(v_i, v_j)$ so that $v_i, v_j \in C$ and $v_i \neq v_j$. Example VerbNet pairwise constraints are shown in Tab. TABREF15 . Note that VerbNet classes in practice contain verb instances standing in a variety of lexical relations, including synonyms, antonyms, troponyms, and hypernyms; the class membership is determined on the basis of connections between the syntactic patterns and the underlying semantic relations BIBREF54 , BIBREF55 .
Clustering Algorithm
Given the initial distributional or specialised collection of target language vectors $\mathbf{X}_t$, we apply an off-the-shelf clustering algorithm on top of these vectors in order to group verbs into classes. Following prior work BIBREF56 , BIBREF57 , BIBREF17 , we employ the MNCut spectral clustering algorithm BIBREF58 , which has wide applicability in similar NLP tasks that involve high-dimensional feature spaces BIBREF59 , BIBREF60 , BIBREF18 . Again, following prior work BIBREF17 , BIBREF61 , we estimate the number of clusters $K$ using the self-tuning method of Zelnik-Manor and Perona (2004). This algorithm finds the optimal number by minimising a cost function based on the eigenvector structure of the word similarity matrix. We refer the reader to the relevant literature for further details.
Results and Discussion
Cross-Lingual Transfer Model F-1 verb classification scores for the six target languages with different sets of constraints are summarised in Fig. FIGREF29 . We can draw several interesting conclusions. First, the strongest results on average are obtained with the model which transfers the VerbNet knowledge from English (as a resource-rich language) to the resource-lean target language (providing an answer to question Q3, Sect. SECREF1 ). These improvements are visible across all target languages, empirically demonstrating the cross-lingual nature of VerbNet-style classifications. Second, using cross-lingual constraints alone (XLing) yields strong gains over initial distributional spaces (answering Q1 and Q2). Fig. FIGREF29 also shows that cross-lingual similarity constraints are more beneficial than the monolingual ones, despite a larger total number of the monolingual constraints in each language (see Tab. TABREF18 ). This suggests that such cross-lingual similarity links are strong implicit indicators of class membership. Namely, target language words which map to the same source language word are likely to be synonyms and consequently end up in the same verb class in the target language. However, the cross-lingual links are even more useful as means for transferring the VerbNet knowledge, as evidenced by additional gains with XLing+VerbNet-EN.
The absolute classification scores are the lowest for the two Slavic languages: pl and hr. This may be partially explained by the lowest number of cross-lingual constraints for the two languages covering only a subset of their entire vocabularies (see Tab. TABREF18 and compare the total number of constraints for hr and pl to the numbers for e.g. fi or fr). Another reason for weaker performance of these two languages could be their rich morphology, which induces data sparsity both in the initial vector space estimation and in the coverage of constraints.
Further Discussion and Future Work
This work has proven the potential of transferring lexical resources from resource-rich to resource-poor languages using general-purpose cross-lingual dictionaries and bilingual vector spaces as means of transfer within a semantic specialisation framework. However, we believe that the proposed basic framework may be upgraded and extended across several research paths in future work.
First, in the current work we have operated with standard single-sense/single-prototype representations, thus effectively disregarding the problem of verb polysemy. While several polysemy-aware verb classification models for English were developed recently BIBREF79 , BIBREF80 , the current lack of polysemy-aware evaluation sets in other languages impedes this line of research. Evaluation issues aside, one idea for future work is to use the Attract-Repel specialisation framework for sense-aware cross-lingual transfer relying on recently developed multi-sense/prototype word representations BIBREF81 , BIBREF82 .
Another challenge is to apply the idea from this work to enable cross-lingual transfer of other structured lexical resources available in English such as FrameNet BIBREF44 , PropBank BIBREF45 , and VerbKB BIBREF83 . Other potential research avenues include porting the approach to other typologically diverse languages and truly low-resource settings (e.g., with only limited amounts of parallel data), as well as experiments with other distributional spaces, e.g. BIBREF84 . Further refinements of the specialisation and clustering algorithms may also result in improved verb class induction.
Conclusion
We have presented a novel cross-lingual transfer model which enables the automatic induction of VerbNet-style verb classifications across multiple languages. The transfer is based on a word vector space specialisation framework, utilised to directly model the assumption of cross-linguistic validity of VerbNet-style classifications. Our results indicate strong improvements in verb classification accuracy across all six target languages explored. All automatically induced VerbNets are available at:
github.com/cambridgeltl/verbnets.
Acknowledgments
This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). The authors are grateful to the entire LEXICAL team, especially to Roi Reichart, and also to the three anonymous reviewers for their helpful and constructive suggestions. | MNCut spectral clustering algorithm BIBREF58 |
daf624f7d1623ccd3facb1d93d4d9d616b3192f4 | daf624f7d1623ccd3facb1d93d4d9d616b3192f4_0 | Q: How many words are translated between the cross-lingual translation pairs?
Text: Introduction
Playing a key role in conveying the meaning of a sentence, verbs are famously complex. They display a wide range of syntactic-semantic behaviour, expressing the semantics of an event as well as relational information among its participants BIBREF0 , BIBREF1 , BIBREF2 .
Lexical resources which capture the variability of verbs are instrumental for many Natural Language Processing (NLP) applications. One of the richest verb resources currently available for English is VerbNet BIBREF3 , BIBREF4 . Based on the work of Levin Levin:1993book, this largely hand-crafted taxonomy organises verbs into classes on the basis of their shared syntactic-semantic behaviour. Providing a useful level of generalisation for many NLP tasks, VerbNet has been used to support semantic role labelling BIBREF5 , BIBREF6 , semantic parsing BIBREF7 , word sense disambiguation BIBREF8 , discourse parsing BIBREF9 , information extraction BIBREF10 , text mining applications BIBREF11 , BIBREF12 , research into human language acquisition BIBREF13 , and other tasks.
This benefit for English NLP has motivated the development of VerbNets for languages such as Spanish and Catalan BIBREF14 , Czech BIBREF15 , and Mandarin BIBREF16 . However, end-to-end manual resource development using Levin's methodology is extremely time consuming, even when supported by translations of English VerbNet classes to other languages BIBREF17 , BIBREF18 . Approaches which aim to learn verb classes automatically offer an attractive alternative. However, existing methods rely on carefully engineered features that are extracted using sophisticated language-specific resources BIBREF19 , BIBREF17 , BIBREF20 , ranging from accurate parsers to pre-compiled subcategorisation frames BIBREF21 , BIBREF22 , BIBREF23 . Such methods are limited to a small set of resource-rich languages.
It has been argued that VerbNet-style classification has a strong cross-lingual element BIBREF24 , BIBREF2 . In support of this argument, Majewska:2017lre have shown that English VerbNet has high translatability across different, even typologically diverse languages. Based on this finding, we propose an automatic approach which exploits readily available annotations for English to facilitate efficient, large-scale development of VerbNets for a wide set of target languages.
Recently, unsupervised methods for inducing distributed word vector space representations or word embeddings BIBREF25 have been successfully applied to a plethora of NLP tasks BIBREF26 , BIBREF27 , BIBREF28 . These methods offer an elegant way to learn directly from large corpora, bypassing the feature engineering step and the dependence on mature NLP pipelines (e.g., POS taggers, parsers, extraction of subcategorisation frames). In this work, we demonstrate how these models can be used to support automatic verb class induction. Moreover, we show that these models offer the means to exploit inherent cross-lingual links in VerbNet-style classification in order to guide the development of new classifications for resource-lean languages. To the best of our knowledge, this proposition has not been investigated in previous work.
There has been little work on assessing the suitability of embeddings for capturing rich syntactic-semantic phenomena. One challenge is their reliance on the distributional hypothesis BIBREF29 , which coalesces fine-grained syntactic-semantic relations between words into a broad relation of semantic relatedness (e.g., coffee:cup) BIBREF30 , BIBREF31 . This property has an adverse effect when word embeddings are used in downstream tasks such as spoken language understanding BIBREF32 , BIBREF33 or dialogue state tracking BIBREF34 , BIBREF35 . It could have a similar effect on verb classification, which relies on the similarity in syntactic-semantic properties of verbs within a class. In summary, we explore three important questions in this paper:
(Q1) Given their fundamental dependence on the distributional hypothesis, to what extent can unsupervised methods for inducing vector spaces facilitate the automatic induction of VerbNet-style verb classes across different languages?
(Q2) Can one boost verb classification for lower-resource languages by exploiting general-purpose cross-lingual resources such as BabelNet BIBREF36 , BIBREF37 or bilingual dictionaries such as PanLex BIBREF38 to construct better word vector spaces for these languages?
(Q3) Based on the stipulated cross-linguistic validity of VerbNet-style classification, can one exploit rich sets of readily available annotations in one language (e.g., the full English VerbNet) to automatically bootstrap the creation of VerbNets for other languages? In other words, is it possible to exploit a cross-lingual vector space to transfer VerbNet knowledge from a resource-rich to a resource-lean language?
To investigate Q1, we induce standard distributional vector spaces BIBREF39 , BIBREF40 from large monolingual corpora in English and six target languages. As expected, the results obtained with this straightforward approach show positive trends, but at the same time reveal its limitations for all the languages involved. Therefore, the focus of our work shifts to Q2 and Q3. The problem of inducing VerbNet-oriented embeddings is framed as vector space specialisation using the available external resources: BabelNet or PanLex, and (English) VerbNet. Formalised as an instance of post-processing semantic specialisation approaches BIBREF41 , BIBREF34 , our procedure is steered by two sets of linguistic constraints: 1) cross-lingual (translation) links between languages extracted from BabelNet (targeting Q2); and 2) the available VerbNet annotations for a resource-rich language. The two sets of constraints jointly target Q3.
The main goal of vector space specialisation is to pull examples standing in desirable relations, as described by the constraints, closer together in the transformed vector space. The specialisation process can capitalise on the knowledge of VerbNet relations in the source language (English) by using translation pairs to transfer that knowledge to each of the target languages. By constructing shared bilingual vector spaces, our method facilitates the transfer of semantic relations derived from VerbNet to the vector spaces of resource-lean target languages. This idea is illustrated by Fig. FIGREF2 .
Our results indicate that cross-lingual connections yield improved verb classes across all six target languages (thus answering Q2). Moreover, a consistent and significant boost in verb classification performance is achieved by propagating the VerbNet-style information from the source language (English) to any other target language (e.g., Italian, Croatian, Polish, Finnish) for which no VerbNet-style information is available during the fine-tuning process (thus answering Q3). We report state-of-the-art verb classification performance for all six languages in our experiments. For instance, we improve the state-of-the-art F-1 score from prior work from 0.55 to 0.79 for French, and from 0.43 to 0.74 for Brazilian Portuguese.
Vector Space Specialisation
Our departure point is a state-of-the-art specialisation model for fine-tuning vector spaces termed Paragram BIBREF49 . The Paragram procedure injects similarity constraints between word pairs in order to make their vector space representations more similar; we term these the Attract constraints. Let $V = V_s \cup V_t$ be the vocabulary consisting of the source language and target language vocabularies $V_s$ and $V_t$, respectively. Let $\mathcal{A}$ be the set of word pairs standing in desirable lexical relations; these include: 1) verb pairs from the same VerbNet class (e.g. (en_transport, en_transfer) from verb class send-11.1); and 2) the cross-lingual synonymy pairs (e.g. (en_peace, fi_rauha)). Given the initial distributional space and collections of such Attract pairs $\mathcal{A}$, the model gradually modifies the space to bring the designated word vectors closer together, working in mini-batches of size $k$. The method's cost function can be expressed as:
$$ C(\mathcal{B}_A) = A(\mathcal{B}_A) + \mathit{Reg}(\mathcal{B}_A) $$
The first term of the method's cost function (i.e., $A(\mathcal{B}_A)$) pulls the Attract examples $(x_l, x_r)$ closer together (see Fig. FIGREF2 for an illustration). $\mathcal{B}_A$ refers to the current mini-batch of Attract constraints. This term is expressed as follows:
$$ A(\mathcal{B}_A) = \sum_{(x_l, x_r) \in \mathcal{B}_A} \left[ \tau\!\left(\delta_{att} + \mathbf{x}_l \mathbf{t}_l - \mathbf{x}_l \mathbf{x}_r\right) + \tau\!\left(\delta_{att} + \mathbf{x}_r \mathbf{t}_r - \mathbf{x}_l \mathbf{x}_r\right) \right] $$
$\tau(z) = \max(0, z)$ is the standard rectified linear unit or the hinge loss function BIBREF50 , BIBREF51 . $\delta_{att}$ is the “attract” margin: it determines how much vectors of words from Attract constraints should be closer to each other than to their negative examples. The negative example $\mathbf{t}_l$ (or $\mathbf{t}_r$) for each word $x_l$ (or $x_r$) in any Attract pair is always the vector closest to $\mathbf{x}_l$ (or $\mathbf{x}_r$) taken from the pairs in the current mini-batch, distinct from the other word paired with it, and the word itself.
The second term $\mathit{Reg}(\mathcal{B}_A)$ is the regularisation which aims to retain the semantic information encoded in the initial distributional space as long as this information does not contradict the used Attract constraints. Let $\widehat{\mathbf{x}}$ refer to the initial distributional vector of the word $x$ and let $V(\mathcal{B}_A)$ be the set of all word vectors present in the given mini-batch. If $\lambda_{reg}$ denotes the L2 regularisation constant, this term can be expressed as:
$$ \mathit{Reg}(\mathcal{B}_A) = \lambda_{reg} \sum_{\mathbf{x} \in V(\mathcal{B}_A)} \left\Vert \widehat{\mathbf{x}} - \mathbf{x} \right\Vert_2 $$
The fine-tuning procedure effectively blends the knowledge from external resources (i.e., the input Attract set of constraints) with distributional information extracted directly from large corpora. We show how to propagate annotations from a knowledge source such as VerbNet from source to target by combining two types of constraints within the specialisation framework: a) cross-lingual (translation) links between languages, and b) available VerbNet annotations in a resource-rich language transformed into pairwise constraints. Cross-lingual constraints such as (pl_wojna, it_guerra) are extracted from BabelNet BIBREF36 , a large-scale resource which groups words into cross-lingual babel synsets (and is currently available for 271 languages). The wide and steadily growing coverage of languages in BabelNet means that our proposed framework promises to support the transfer of VerbNet-style information to numerous target languages (with increasingly high accuracy).
To establish that the proposed transfer approach is in fact independent of the chosen cross-lingual information source, we also experiment with another cross-lingual dictionary: PanLex BIBREF38 , which was used in prior work on cross-lingual word vector spaces BIBREF52 , BIBREF53 . This dictionary currently covers around 1,300 language varieties with over 12 million expressions, thus offering support also for low-resource transfer settings.
VerbNet constraints are extracted from the English VerbNet class structure in a straightforward manner. For each class $C$ from the 273 VerbNet classes, we simply take the set of all $n_C$ verbs $\{v_1, \ldots, v_{n_C}\}$ associated with that class, including its subclasses, and generate all unique pairs $(v_i, v_j)$ so that $v_i, v_j \in C$ and $v_i \neq v_j$. Example VerbNet pairwise constraints are shown in Tab. TABREF15 . Note that VerbNet classes in practice contain verb instances standing in a variety of lexical relations, including synonyms, antonyms, troponyms, and hypernyms; the class membership is determined on the basis of connections between the syntactic patterns and the underlying semantic relations BIBREF54 , BIBREF55 .
Clustering Algorithm
Given the initial distributional or specialised collection of target language vectors $\mathbf{X}_t$, we apply an off-the-shelf clustering algorithm on top of these vectors in order to group verbs into classes. Following prior work BIBREF56 , BIBREF57 , BIBREF17 , we employ the MNCut spectral clustering algorithm BIBREF58 , which has wide applicability in similar NLP tasks that involve high-dimensional feature spaces BIBREF59 , BIBREF60 , BIBREF18 . Again, following prior work BIBREF17 , BIBREF61 , we estimate the number of clusters $K$ using the self-tuning method of Zelnik-Manor and Perona (2004). This algorithm finds the optimal number by minimising a cost function based on the eigenvector structure of the word similarity matrix. We refer the reader to the relevant literature for further details.
Results and Discussion
Cross-Lingual Transfer Model F-1 verb classification scores for the six target languages with different sets of constraints are summarised in Fig. FIGREF29 . We can draw several interesting conclusions. First, the strongest results on average are obtained with the model which transfers the VerbNet knowledge from English (as a resource-rich language) to the resource-lean target language (providing an answer to question Q3, Sect. SECREF1 ). These improvements are visible across all target languages, empirically demonstrating the cross-lingual nature of VerbNet-style classifications. Second, using cross-lingual constraints alone (XLing) yields strong gains over initial distributional spaces (answering Q1 and Q2). Fig. FIGREF29 also shows that cross-lingual similarity constraints are more beneficial than the monolingual ones, despite a larger total number of the monolingual constraints in each language (see Tab. TABREF18 ). This suggests that such cross-lingual similarity links are strong implicit indicators of class membership. Namely, target language words which map to the same source language word are likely to be synonyms and consequently end up in the same verb class in the target language. However, the cross-lingual links are even more useful as means for transferring the VerbNet knowledge, as evidenced by additional gains with XLing+VerbNet-EN.
The absolute classification scores are the lowest for the two Slavic languages: pl and hr. This may be partially explained by the lowest number of cross-lingual constraints for the two languages covering only a subset of their entire vocabularies (see Tab. TABREF18 and compare the total number of constraints for hr and pl to the numbers for e.g. fi or fr). Another reason for weaker performance of these two languages could be their rich morphology, which induces data sparsity both in the initial vector space estimation and in the coverage of constraints.
Further Discussion and Future Work
This work has proven the potential of transferring lexical resources from resource-rich to resource-poor languages using general-purpose cross-lingual dictionaries and bilingual vector spaces as means of transfer within a semantic specialisation framework. However, we believe that the proposed basic framework may be upgraded and extended across several research paths in future work.
First, in the current work we have operated with standard single-sense/single-prototype representations, thus effectively disregarding the problem of verb polysemy. While several polysemy-aware verb classification models for English were developed recently BIBREF79 , BIBREF80 , the current lack of polysemy-aware evaluation sets in other languages impedes this line of research. Evaluation issues aside, one idea for future work is to use the Attract-Repel specialisation framework for sense-aware cross-lingual transfer relying on recently developed multi-sense/prototype word representations BIBREF81 , BIBREF82 .
Another challenge is to apply the idea from this work to enable cross-lingual transfer of other structured lexical resources available in English such as FrameNet BIBREF44 , PropBank BIBREF45 , and VerbKB BIBREF83 . Other potential research avenues include porting the approach to other typologically diverse languages and truly low-resource settings (e.g., with only limited amounts of parallel data), as well as experiments with other distributional spaces, e.g. BIBREF84 . Further refinements of the specialisation and clustering algorithms may also result in improved verb class induction.
Conclusion
We have presented a novel cross-lingual transfer model which enables the automatic induction of VerbNet-style verb classifications across multiple languages. The transfer is based on a word vector space specialisation framework, utilised to directly model the assumption of cross-linguistic validity of VerbNet-style classifications. Our results indicate strong improvements in verb classification accuracy across all six target languages explored. All automatically induced VerbNets are available at:
github.com/cambridgeltl/verbnets.
Acknowledgments
This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). The authors are grateful to the entire LEXICAL team, especially to Roi Reichart, and also to the three anonymous reviewers for their helpful and constructive suggestions. | Unanswerable |
74261f410882551491657d76db1f0f2798ac680f | 74261f410882551491657d76db1f0f2798ac680f_0 | Q: What are the six target languages?
Text: Introduction
Playing a key role in conveying the meaning of a sentence, verbs are famously complex. They display a wide range of syntactic-semantic behaviour, expressing the semantics of an event as well as relational information among its participants BIBREF0 , BIBREF1 , BIBREF2 .
Lexical resources which capture the variability of verbs are instrumental for many Natural Language Processing (NLP) applications. One of the richest verb resources currently available for English is VerbNet BIBREF3 , BIBREF4 . Based on the work of Levin Levin:1993book, this largely hand-crafted taxonomy organises verbs into classes on the basis of their shared syntactic-semantic behaviour. Providing a useful level of generalisation for many NLP tasks, VerbNet has been used to support semantic role labelling BIBREF5 , BIBREF6 , semantic parsing BIBREF7 , word sense disambiguation BIBREF8 , discourse parsing BIBREF9 , information extraction BIBREF10 , text mining applications BIBREF11 , BIBREF12 , research into human language acquisition BIBREF13 , and other tasks.
This benefit for English NLP has motivated the development of VerbNets for languages such as Spanish and Catalan BIBREF14 , Czech BIBREF15 , and Mandarin BIBREF16 . However, end-to-end manual resource development using Levin's methodology is extremely time consuming, even when supported by translations of English VerbNet classes to other languages BIBREF17 , BIBREF18 . Approaches which aim to learn verb classes automatically offer an attractive alternative. However, existing methods rely on carefully engineered features that are extracted using sophisticated language-specific resources BIBREF19 , BIBREF17 , BIBREF20 , ranging from accurate parsers to pre-compiled subcategorisation frames BIBREF21 , BIBREF22 , BIBREF23 . Such methods are limited to a small set of resource-rich languages.
It has been argued that VerbNet-style classification has a strong cross-lingual element BIBREF24 , BIBREF2 . In support of this argument, Majewska:2017lre have shown that English VerbNet has high translatability across different, even typologically diverse languages. Based on this finding, we propose an automatic approach which exploits readily available annotations for English to facilitate efficient, large-scale development of VerbNets for a wide set of target languages.
Recently, unsupervised methods for inducing distributed word vector space representations or word embeddings BIBREF25 have been successfully applied to a plethora of NLP tasks BIBREF26 , BIBREF27 , BIBREF28 . These methods offer an elegant way to learn directly from large corpora, bypassing the feature engineering step and the dependence on mature NLP pipelines (e.g., POS taggers, parsers, extraction of subcategorisation frames). In this work, we demonstrate how these models can be used to support automatic verb class induction. Moreover, we show that these models offer the means to exploit inherent cross-lingual links in VerbNet-style classification in order to guide the development of new classifications for resource-lean languages. To the best of our knowledge, this proposition has not been investigated in previous work.
There has been little work on assessing the suitability of embeddings for capturing rich syntactic-semantic phenomena. One challenge is their reliance on the distributional hypothesis BIBREF29 , which coalesces fine-grained syntactic-semantic relations between words into a broad relation of semantic relatedness (e.g., coffee:cup) BIBREF30 , BIBREF31 . This property has an adverse effect when word embeddings are used in downstream tasks such as spoken language understanding BIBREF32 , BIBREF33 or dialogue state tracking BIBREF34 , BIBREF35 . It could have a similar effect on verb classification, which relies on the similarity in syntactic-semantic properties of verbs within a class. In summary, we explore three important questions in this paper:
(Q1) Given their fundamental dependence on the distributional hypothesis, to what extent can unsupervised methods for inducing vector spaces facilitate the automatic induction of VerbNet-style verb classes across different languages?
(Q2) Can one boost verb classification for lower-resource languages by exploiting general-purpose cross-lingual resources such as BabelNet BIBREF36 , BIBREF37 or bilingual dictionaries such as PanLex BIBREF38 to construct better word vector spaces for these languages?
(Q3) Based on the stipulated cross-linguistic validity of VerbNet-style classification, can one exploit rich sets of readily available annotations in one language (e.g., the full English VerbNet) to automatically bootstrap the creation of VerbNets for other languages? In other words, is it possible to exploit a cross-lingual vector space to transfer VerbNet knowledge from a resource-rich to a resource-lean language?
To investigate Q1, we induce standard distributional vector spaces BIBREF39 , BIBREF40 from large monolingual corpora in English and six target languages. As expected, the results obtained with this straightforward approach show positive trends, but at the same time reveal its limitations for all the languages involved. Therefore, the focus of our work shifts to Q2 and Q3. The problem of inducing VerbNet-oriented embeddings is framed as vector space specialisation using the available external resources: BabelNet or PanLex, and (English) VerbNet. Formalised as an instance of post-processing semantic specialisation approaches BIBREF41 , BIBREF34 , our procedure is steered by two sets of linguistic constraints: 1) cross-lingual (translation) links between languages extracted from BabelNet (targeting Q2); and 2) the available VerbNet annotations for a resource-rich language. The two sets of constraints jointly target Q3.
The main goal of vector space specialisation is to pull examples standing in desirable relations, as described by the constraints, closer together in the transformed vector space. The specialisation process can capitalise on the knowledge of VerbNet relations in the source language (English) by using translation pairs to transfer that knowledge to each of the target languages. By constructing shared bilingual vector spaces, our method facilitates the transfer of semantic relations derived from VerbNet to the vector spaces of resource-lean target languages. This idea is illustrated by Fig. FIGREF2 .
Our results indicate that cross-lingual connections yield improved verb classes across all six target languages (thus answering Q2). Moreover, a consistent and significant boost in verb classification performance is achieved by propagating the VerbNet-style information from the source language (English) to any other target language (e.g., Italian, Croatian, Polish, Finnish) for which no VerbNet-style information is available during the fine-tuning process (thus answering Q3). We report state-of-the-art verb classification performance for all six languages in our experiments. For instance, we improve the state-of-the-art F-1 score from prior work from 0.55 to 0.79 for French, and from 0.43 to 0.74 for Brazilian Portuguese.
Vector Space Specialisation
Our departure point is a state-of-the-art specialisation model for fine-tuning vector spaces termed Paragram BIBREF49 . The Paragram procedure injects similarity constraints between word pairs in order to make their vector space representations more similar; we term these the Attract constraints. Let $V = V_s \cup V_t$ be the vocabulary consisting of the source language and target language vocabularies $V_s$ and $V_t$ , respectively. Let $\mathcal {A} \subseteq V \times V$ be the set of word pairs standing in desirable lexical relations; these include: 1) verb pairs from the same VerbNet class (e.g. (en_transport, en_transfer) from verb class send-11.1); and 2) the cross-lingual synonymy pairs (e.g. (en_peace, fi_rauha)). Given the initial distributional space and collections of such Attract pairs $\mathcal {A}$ , the model gradually modifies the space to bring the designated word vectors closer together, working in mini-batches $B_A$ of size $k$ . The method's cost function can be expressed as:

$$C(B_A) = A(B_A) + Reg(B_A)$$

The first term of the method's cost function (i.e., $A(B_A)$ ) pulls the Attract examples $(x_l, x_r) \in B_A$ closer together (see Fig. FIGREF2 for an illustration). $B_A$ refers to the current mini-batch of Attract constraints. This term is expressed as follows:

$$A(B_A) = \sum _{(x_l, x_r) \in B_A} \left[ \tau \left( \delta _{att} + \mathbf {x}_l \mathbf {t}_l - \mathbf {x}_l \mathbf {x}_r \right) + \tau \left( \delta _{att} + \mathbf {x}_r \mathbf {t}_r - \mathbf {x}_l \mathbf {x}_r \right) \right]$$

$\tau (x) = \max (0, x)$ is the standard rectified linear unit or the hinge loss function BIBREF50 , BIBREF51 . $\delta _{att}$ is the “attract” margin: it determines how much vectors of words from Attract constraints should be closer to each other than to their negative examples. The negative example $\mathbf {t}_l$ for each word $x_l$ in any Attract pair is always the vector closest to $\mathbf {x}_l$ taken from the pairs in the current mini-batch, distinct from the other word paired with $x_l$ , and $x_l$ itself.

The second term $Reg(B_A)$ is the regularisation which aims to retain the semantic information encoded in the initial distributional space as long as this information does not contradict the used Attract constraints. Let $\widehat{\mathbf {x}}_i$ refer to the initial distributional vector of the word $x_i$ and let $V(B_A)$ be the set of all word vectors present in the given mini-batch. If $\lambda _{reg}$ denotes the L2 regularisation constant, this term can be expressed as:

$$Reg(B_A) = \lambda _{reg} \sum _{x_i \in V(B_A)} \left\Vert \widehat{\mathbf {x}}_i - \mathbf {x}_i \right\Vert _2$$
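To make the mini-batch objective concrete, here is a minimal NumPy sketch of the Attract term and the regularisation described above; the margin and regularisation values, the batch layout, and all variable names are our own assumptions rather than the authors' implementation.

```python
import numpy as np

def attract_cost(batch_left, batch_right, init_left, init_right,
                 delta_att=0.6, lambda_reg=1e-9):
    """Cost of one mini-batch of Attract pairs (illustrative sketch).

    batch_left, batch_right: (k, d) current vectors of the left/right words
    of each Attract pair; init_left, init_right: their initial vectors.
    """
    k = batch_left.shape[0]
    pair_sim = np.sum(batch_left * batch_right, axis=1)        # x_l . x_r

    # Negative example for each word: closest other vector in the batch,
    # excluding the word itself and its Attract partner.
    all_vecs = np.vstack([batch_left, batch_right])             # (2k, d)
    sims_l = batch_left @ all_vecs.T                            # (k, 2k)
    sims_l[np.arange(k), np.arange(k)] = -np.inf                # exclude x_l
    sims_l[np.arange(k), k + np.arange(k)] = -np.inf            # exclude partner
    sims_r = batch_right @ all_vecs.T
    sims_r[np.arange(k), k + np.arange(k)] = -np.inf            # exclude x_r
    sims_r[np.arange(k), np.arange(k)] = -np.inf                # exclude partner

    # Hinge terms: each word should be closer to its partner than to its negative.
    attract = np.maximum(0.0, delta_att + sims_l.max(axis=1) - pair_sim).sum() \
            + np.maximum(0.0, delta_att + sims_r.max(axis=1) - pair_sim).sum()

    # L2 pull-back towards the initial distributional vectors.
    reg = lambda_reg * (np.linalg.norm(batch_left - init_left, axis=1).sum()
                        + np.linalg.norm(batch_right - init_right, axis=1).sum())
    return attract + reg
```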
The fine-tuning procedure effectively blends the knowledge from external resources (i.e., the input Attract set of constraints) with distributional information extracted directly from large corpora. We show how to propagate annotations from a knowledge source such as VerbNet from source to target by combining two types of constraints within the specialisation framework: a) cross-lingual (translation) links between languages, and b) available VerbNet annotations in a resource-rich language transformed into pairwise constraints. Cross-lingual constraints such as (pl_wojna, it_guerra) are extracted from BabelNet BIBREF36 , a large-scale resource which groups words into cross-lingual babel synsets (and is currently available for 271 languages). The wide and steadily growing coverage of languages in BabelNet means that our proposed framework promises to support the transfer of VerbNet-style information to numerous target languages (with increasingly high accuracy).
To establish that the proposed transfer approach is in fact independent of the chosen cross-lingual information source, we also experiment with another cross-lingual dictionary: PanLex BIBREF38 , which was used in prior work on cross-lingual word vector spaces BIBREF52 , BIBREF53 . This dictionary currently covers around 1,300 language varieties with over 12 million expressions, thus offering support also for low-resource transfer settings.
VerbNet constraints are extracted from the English VerbNet class structure in a straightforward manner. For each class $C$ from the 273 VerbNet classes, we simply take the set of all $n_C$ verbs $\lbrace v_1, \ldots , v_{n_C} \rbrace$ associated with that class, including its subclasses, and generate all unique pairs $(v_i, v_j)$ so that $v_i, v_j \in C$ and $v_i \ne v_j$ . Example VerbNet pairwise constraints are shown in Tab. TABREF15 . Note that VerbNet classes in practice contain verb instances standing in a variety of lexical relations, including synonyms, antonyms, troponyms, hypernyms, and the class membership is determined on the basis of connections between the syntactic patterns and the underlying semantic relations BIBREF54 , BIBREF55 .
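A small sketch of how such pairwise constraints can be generated from a class-to-verbs mapping; the helper name and the (shortened) member list for send-11.1 are illustrative assumptions.

```python
from itertools import combinations

def verbnet_constraints(class_to_verbs, prefix="en_"):
    """Generate all unique Attract pairs from a VerbNet-style class structure.

    class_to_verbs: dict mapping a class id to its verbs (subclass members
    assumed to be merged in already).
    """
    pairs = set()
    for verbs in class_to_verbs.values():
        tagged = sorted(prefix + v for v in set(verbs))
        pairs.update(combinations(tagged, 2))   # all unique (v_i, v_j), v_i != v_j
    return pairs

# Toy example (class membership shortened for illustration):
constraints = verbnet_constraints({"send-11.1": ["send", "ship", "transport", "transfer"]})
print(sorted(constraints)[:3])
```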
Clustering Algorithm
Given the initial distributional or specialised collection of target language vectors $\mathbf {X}_t$ , we apply an off-the-shelf clustering algorithm on top of these vectors in order to group verbs into classes. Following prior work BIBREF56 , BIBREF57 , BIBREF17 , we employ the MNCut spectral clustering algorithm BIBREF58 , which has wide applicability in similar NLP tasks which involve high-dimensional feature spaces BIBREF59 , BIBREF60 , BIBREF18 . Again, following prior work BIBREF17 , BIBREF61 , we estimate the number of clusters $K$ using the self-tuning method of Zelnik-Manor and Perona (2004). This algorithm finds the optimal number by minimising a cost function based on the eigenvector structure of the word similarity matrix. We refer the reader to the relevant literature for further details.
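The following sketch shows one way to set this up with scikit-learn; it substitutes a simple eigengap heuristic for the self-tuning estimator and standard spectral clustering for MNCut, so it approximates rather than reproduces the pipeline described here.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_verbs(verb_vectors, max_k=50):
    """Cluster row-wise verb vectors; pick the cluster count by eigengap."""
    X = verb_vectors / np.linalg.norm(verb_vectors, axis=1, keepdims=True)
    W = np.clip(X @ X.T, 0.0, None)                    # non-negative cosine similarities
    d = W.sum(axis=1)
    L = np.eye(len(W)) - W / np.sqrt(np.outer(d, d))   # normalised graph Laplacian
    eigvals = np.sort(np.linalg.eigvalsh(L))
    k = int(np.argmax(np.diff(eigvals[:max_k]))) + 1   # largest eigengap -> K

    labels = SpectralClustering(n_clusters=k, affinity="precomputed",
                                random_state=0).fit_predict(W)
    return k, labels
```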
Results and Discussion
Cross-Lingual Transfer Model. F-1 verb classification scores for the six target languages with different sets of constraints are summarised in Fig. FIGREF29 . We can draw several interesting conclusions. First, the strongest results on average are obtained with the model which transfers the VerbNet knowledge from English (as a resource-rich language) to the resource-lean target language (providing an answer to question Q3, Sect. SECREF1 ). These improvements are visible across all target languages, empirically demonstrating the cross-lingual nature of VerbNet-style classifications. Second, using cross-lingual constraints alone (XLing) yields strong gains over initial distributional spaces (answering Q1 and Q2). Fig. FIGREF29 also shows that cross-lingual similarity constraints are more beneficial than the monolingual ones, despite a larger total number of the monolingual constraints in each language (see Tab. TABREF18 ). This suggests that such cross-lingual similarity links are strong implicit indicators of class membership. Namely, target language words which map to the same source language word are likely to be synonyms and consequently end up in the same verb class in the target language. However, the cross-lingual links are even more useful as means for transferring the VerbNet knowledge, as evidenced by additional gains with XLing+VerbNet-EN.
The absolute classification scores are the lowest for the two Slavic languages: pl and hr. This may be partially explained by the lowest number of cross-lingual constraints for the two languages covering only a subset of their entire vocabularies (see Tab. TABREF18 and compare the total number of constraints for hr and pl to the numbers for e.g. fi or fr). Another reason for weaker performance of these two languages could be their rich morphology, which induces data sparsity both in the initial vector space estimation and in the coverage of constraints.
Further Discussion and Future Work
This work has proven the potential of transferring lexical resources from resource-rich to resource-poor languages using general-purpose cross-lingual dictionaries and bilingual vector spaces as means of transfer within a semantic specialisation framework. However, we believe that the proposed basic framework may be upgraded and extended across several research paths in future work.
First, in the current work we have operated with standard single-sense/single-prototype representations, thus effectively disregarding the problem of verb polysemy. While several polysemy-aware verb classification models for English were developed recently BIBREF79 , BIBREF80 , the current lack of polysemy-aware evaluation sets in other languages impedes this line of research. Evaluation issues aside, one idea for future work is to use the Attract-Repel specialisation framework for sense-aware cross-lingual transfer relying on recently developed multi-sense/prototype word representations BIBREF81 , BIBREF82 .
Another challenge is to apply the idea from this work to enable cross-lingual transfer of other structured lexical resources available in English such as FrameNet BIBREF44 , PropBank BIBREF45 , and VerbKB BIBREF83 . Other potential research avenues include porting the approach to other typologically diverse languages and truly low-resource settings (e.g., with only limited amounts of parallel data), as well as experiments with other distributional spaces, e.g. BIBREF84 . Further refinements of the specialisation and clustering algorithms may also result in improved verb class induction.
Conclusion
We have presented a novel cross-lingual transfer model which enables the automatic induction of VerbNet-style verb classifications across multiple languages. The transfer is based on a word vector space specialisation framework, utilised to directly model the assumption of cross-linguistic validity of VerbNet-style classifications. Our results indicate strong improvements in verb classification accuracy across all six target languages explored. All automatically induced VerbNets are available at:
github.com/cambridgeltl/verbnets.
Acknowledgments
This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). The authors are grateful to the entire LEXICAL team, especially to Roi Reichart, and also to the three anonymous reviewers for their helpful and constructive suggestions. | Answer with content missing: (3 Experimental Setup) We experiment with six target languages: French (FR), Brazilian Portuguese (PT), Italian (IT), Polish (PL), Croatian (HR), and Finnish (FI). |
3d34a02ceebcc93ee79dc073c408651d25e538bc | 3d34a02ceebcc93ee79dc073c408651d25e538bc_0 | Q: what classifiers were used in this paper?
Text: Introduction
Fake news are written and published with the intent to mislead in order to gain financially or politically, often targeting specific user groups. Another type of harmful content on the Internet are the so-called click-baits, which are distinguished by their sensational, exaggerated, or deliberately false headlines that grab attention and deceive the user into clicking an article with questionable content.
While the motives behind these two types of fake news are different, they constitute a growing problem as they constitute a sizable fraction of the online news that users encounter on a daily basis. With the recent boom of Internet, mobile, and social networks, the spread of fake news increases exponentially. Using on-line methods for spreading harmful content makes the task of keeping the Internet clean significantly harder as it is very easy to publish an article and there is no easy way to verify its veracity. Currently, domains that consistently spread misinformation are being banned from various platforms, but this is a rather inefficient way to deal with fake news as websites that specialize in spreading misinformation are reappearing with different domain names. That is why our method is based purely on text analysis, without taking into account the domain name or website's reliability as a source of information. Our work is focused on exploring various stylistic and lexical features in order to detect misleading content, and on experiments with neural network architectures in order to evaluate how deep learning can be used for detecting fake news. Moreover, we created various language-specific resources that could be used in future work on fake news and clickbait detection for Bulgarian, including task-specific word embeddings and various lexicons and dictionaries extracted from the training data.
Related Work
Trustworthiness and veracity analytics of on-line statements is an emerging research direction BIBREF0 . This includes predicting credibility of information shared in social media BIBREF1 , stance classification BIBREF2 and contradiction detection in rumours BIBREF3 . For example, Castillo et al. (2011) studied the problem of finding false information about a newsworthy event. They compiled their own dataset, focusing on tweets using a variety of features including user reputation, author writing style, and various time-based features. Canini et al. (2011) analysed the interaction of content and social network structure, and Morris et al. (2012) studied how Twitter users judge truthfulness. They found that this is hard to do based on content alone, and instead users are influenced by heuristics such as user name.
Rumour detection in social media represents yet another angle of information credibility. Zubiaga et al. (2015) studied how people handle rumours in social media. They found that users with higher reputation are more trusted, and thus can spread rumours among other users without raising suspicions about the credibility of the news or of its source. Lukasik et al. (2015) and Ma et al. (2015) used temporal patterns to detect rumours and to predict their frequency, a 2016 PLOS ONE study focused on conversational threads, and a RANLP 2017 system used deep learning to verify claims using the Web as a corpus.
Veracity of information has been also studied in the context of online personal blogs BIBREF4 , community question answering forums BIBREF5 , and political debates BIBREF6 .
Astroturfing and misinformation detection represent another relevant research direction. Their importance is motivated by the strong interest from political science, and research methods are driven by the presence of massive streams of micro-blogging data, e.g., on Twitter BIBREF7 . While astroturfing has been primarily studied in microblogs such as Twitter, here we focus on on-line news and click-baits instead.
Identification of malicious accounts in social networks is another related research direction. This includes detecting spam accounts BIBREF8 , BIBREF9 , fake accounts BIBREF10 , BIBREF11 , compromised accounts and phishing accounts BIBREF12 . Fake profile detection has also been studied in the context of cyber-bullying BIBREF13 . A related problem is that of Web spam detection, which was addressed as a text classification problem BIBREF14 , e.g., using spam keyword spotting BIBREF15 , lexical affinity of arbitrary words to spam content BIBREF16 , frequency of punctuation and word co-occurrence BIBREF17 .
Fake news detection is most closely related to the present work. While social media have been seen for years as the main vehicle for spreading information of questionable veracity, recently there has been a proliferation of fake news, often spread on social media, but also published in specialized websites. This has attracted research attention recently. For example, there has been work on studying credibility, trust, and expertise in news communities BIBREF18 . The credibility of the information published in on-line news portals has been questioned by a number of researchers BIBREF19 , BIBREF20 , BIBREF21 . As timing is crucial when it comes to publishing breaking news, it is simply not possible to double-check the facts and the sources, as is usually standard in respectable printed newspapers and magazines. This is one of the biggest concerns about on-line news media that journalists have BIBREF22 . Finally, Conroy et al. (2015) review various methods for detecting fake news, e.g., using linguistic analysis, discourse, linked data, and social network features.
All the above work was for English. The only work on fact checking for Bulgarian is that of BIBREF23 , but they focused on distinguishing serious news from humorous ones. In contrast, here we are interested in finding news that are not designed to sound funny, but to make the reader believe they are real. Unlike them, we use a deep learning approach.
Fake News & Click-bait Dataset
We use a corpus of Bulgarian news over a fixed period of time, whose factuality had been questioned. The news come from 377 different sources from various domains, including politics, interesting facts and tips&tricks. The dataset was prepared for the Hack the Fake News hackathon. It was provided by the Bulgarian Association of PR Agencies and is available in Gitlab. The corpus was automatically collected, and then annotated by students of journalism. Each entry in the dataset consists of the following elements: URL of the original article, date of publication, article heading, article content, a label indicating whether the article is fake or not, and another label indicating whether it is a click-bait.
The training dataset contains 2,815 examples, where 1,940 (i.e., 69%) are fake news and 1,968 (i.e., 70%) are click-baits; we further have 761 testing examples. However, there is 98% correlation between fake news and click-baits, i.e., a model trained on fake news would do well on click-baits and vice versa. Thus, below we focus on fake news detection only.
One important aspect about the training dataset is that it contains many repetitions. This should not be surprising as it attempts to represent a natural distribution of factual vs. fake news on-line over a period of time. As publishers of fake news often have a group of websites that feature the same deceiving content, we should expect some repetition.
In particular, the training dataset contains 434 unique articles with duplicates. These articles have three reposts each on average, with the most reposted article appearing 45 times. If we take into account the labels of the reposted articles, we can see that if an article is reposted, it is more likely to be fake news. The number of fake news articles that have a duplicate in the training dataset is 1,018, whereas the number of articles with genuine content that have a duplicate article in the training set is 322. We detect the duplicates based on their titles, as these are distinctive enough, while the content is sometimes slightly modified when reposted.
This supports the hypothesis that fake news websites are likely to repost their content. This is also in line with previous research BIBREF24 , which has found it beneficial to find a pattern of how a rumour is reposted over time.
Method
We propose a general framework for finding fake news focusing on the text only. We first create some resources, e.g., dictionaries of words strongly correlated with fake news, which are needed for feature extraction. Then, we design features that model a number of interesting aspects about an article, e.g., style, intent, etc. Moreover, we use a deep neural network to learn task-specific representations of the articles, which includes an attention mechanism that can focus on the most discriminative sentences and words.
Language Resources
As our work is the first attempt at predicting click-baits in Bulgarian, it is organized around building new language-specific resources and analyzing the task.
Word embeddings: We train 300-dimensional domain-specific word embeddings using word2vec BIBREF25 on 100,000 Bulgarian news articles from the same sources as the main dataset. The labelled dataset we use in our system is a subset of these articles. Finally, we end up with 207,270 unique words that occur in five or more documents. We use these embeddings for text representation, and as an input to our attention-based neural network.
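A minimal sketch of such training with gensim (the toolkit, the skip-gram setting, and the frequency cut-off are our assumptions; the paper only specifies word2vec, 300 dimensions, and a five-document threshold):

```python
from gensim.models import Word2Vec

def train_embeddings(corpus, dim=300):
    """corpus: an iterable of tokenised Bulgarian news articles (lists of tokens)."""
    model = Word2Vec(sentences=corpus, vector_size=dim, window=5,
                     min_count=5,      # token-frequency cut-off approximating the
                                       # five-document threshold used in the paper
                     workers=4, sg=1, epochs=5)
    return model.wv                    # KeyedVectors: word -> 300-dim embedding

# Example: vectors = train_embeddings(tokenised_articles); vectors["новини"]
```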
Latent Dirichlet allocation (LDA): We use LDA BIBREF26 in order to build domain-specific topic models, which could be useful for inducing classes of words that signal fake/factual news. The LDA model is trained on the same 100,000 Bulgarian news articles as for training the word embeddings. In our experiments, these LDA classes proved helpful by themselves, but they did not have much to offer on top of the word embeddings. Thus, we ended up not using them in our final system, but we chose to still release them as other researchers might find them useful in the future.
Fact-checking lexicon: Using lexicons of sentiment words has been shown to be very successful for the task of sentiment analysis BIBREF27 , and we applied the same idea to extract a fact-checking lexicon. In particular, we use point-wise mutual information (PMI) to find terms (words, word bi-grams, and named entities) that are highly correlated with the fake/factual news class. We calculated the PMI scores for uni-grams, bi-grams and on extracted named entities. Table TABREF9 shows some of the most significant words for the fake news class. We can see in the table some words that grab people attention, but are not very informative by themselves, such as mysterious or phenomenon. These words are largely context-independent and are likely to remain stable in their usage across different domains and even over an extended period of time. Thus, they should be useful beyond this task and this dataset.
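A rough sketch of the PMI computation for unigrams; the document-level counting and the frequency cut-off are assumptions, and the paper additionally scores bigrams and named entities:

```python
import math
from collections import Counter

def pmi_lexicon(docs, labels, target_label, top_n=20):
    """Rank unigrams by PMI with a target class (e.g. the fake-news class).

    docs: list of token lists; labels: parallel list of class labels.
    """
    term_count, term_in_class = Counter(), Counter()
    n_docs = len(docs)
    p_class = sum(1 for l in labels if l == target_label) / n_docs
    for tokens, label in zip(docs, labels):
        for t in set(tokens):                        # document-level counts
            term_count[t] += 1
            if label == target_label:
                term_in_class[t] += 1
    scores = {}
    for t, n_t in term_count.items():
        if n_t < 5 or term_in_class[t] == 0:         # ignore rare / unseen-in-class terms
            continue
        p_joint = term_in_class[t] / n_docs
        scores[t] = math.log(p_joint / ((n_t / n_docs) * p_class))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
```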
Other lexicons: Finally, we create four lexicons that can help to model the difference in language use between fake and factual news articles. In particular, we explored and merged/cleansed a number of on-line resources in order to put together the following lexicons: (i) common typos in Bulgarian written text, (ii) Bulgarian slang words, (iii) commonly used foreign words, and (iv) English words with Bulgarian equivalents. We separate the latter two, because of the frequent usage of English words in common language. We make these lexicons freely available for future research.
Features
Fake news are written with the intent to deceive, and their authors often use a different style of writing compared to authors that create genuine content. This could be either deliberately, e.g., if the author wants to adapt the text to a specific target group or wants to provoke some particular emotional reaction in the reader, or unintentionally, e.g., because the authors of fake news have different writing style and personality compared to journalists in mainstream media. Disregarding the actual reason, we use features from author profiling and style detection BIBREF28 .
Use of specific words that have strong correlation with one of the classes (48 features). We used the above-described PMI-based fact-checking lexicons to extract features based on the presence of lexicon words in the target article. We end up with the following features: 16 for uni-grams + 16 for bi-grams + 16 for named entities, where we have a feature for the sum and also for the average of the word scores for each of the target classes (click-bait, non-click-bait, fake, non-fake), and we had these features separately for the title and for the body of the article.
Readability index (4 features): We calculate standard readability metrics including the type-token ratio, average word length, Flesch–Kincaid readability test BIBREF29 and Gunning-Fog index BIBREF30 . The last two metrics give scores to the text corresponding to the school grade the reader of the target article should have in order to be able to read and understand it easily. These metrics use statistics about the number of syllables, the number of words, and their length.
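A sketch of these four features; the vowel-counting syllable estimate for Bulgarian and the English-calibrated formula constants are approximations:

```python
BG_VOWELS = set("аеиоуъюя")

def syllables(word):
    # Rough approximation: one syllable per vowel letter.
    return max(1, sum(ch in BG_VOWELS for ch in word.lower()))

def readability_features(sentences):
    """sentences: list of token lists for one article."""
    words = [w for s in sentences for w in s if w.isalpha()]
    n_words, n_sents = max(1, len(words)), max(1, len(sentences))
    n_syll = sum(syllables(w) for w in words)
    n_complex = sum(1 for w in words if syllables(w) >= 3)
    type_token_ratio = len(set(w.lower() for w in words)) / n_words
    avg_word_len = sum(len(w) for w in words) / n_words
    flesch_kincaid = 0.39 * n_words / n_sents + 11.8 * n_syll / n_words - 15.59
    gunning_fog = 0.4 * (n_words / n_sents + 100.0 * n_complex / n_words)
    return [type_token_ratio, avg_word_len, flesch_kincaid, gunning_fog]
```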
Orthographic features (12 features): The orthographic features used in our system include: the number of words in the title and in the content; the number of characters in the title and in the content; the number of specific symbols in the title and in the content, counting the following as symbols $.!;#?:-+%(), ; the number of capital letters in the title and in the content; the fraction of capital letters to all letters in the title and in the content; the number of URLs in the content; the overlap between the words from the title and the words of the content, relying on the fact that click-baits tend to have content that does not quite match their title. These features can be very effective for modelling the author's style.
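A possible implementation of the twelve orthographic features; the exact normalisation of the title-content overlap is our assumption:

```python
import re

SYMBOLS = set("$.!;#?:-+%(),")

def orthographic_features(title, content):
    feats = []
    for text in (title, content):
        letters = sum(c.isalpha() for c in text)
        capitals = sum(c.isupper() for c in text)
        feats += [
            len(text.split()),                        # number of words
            len(text),                                # number of characters
            sum(c in SYMBOLS for c in text),          # special symbols
            capitals,                                 # capital letters
            capitals / max(1, letters),               # fraction of capital letters
        ]
    feats.append(len(re.findall(r"https?://\S+", content)))   # URLs in the content
    title_words = {w.lower() for w in title.split()}
    content_words = {w.lower() for w in content.split()}
    feats.append(len(title_words & content_words) / max(1, len(title_words)))
    return feats                                      # 5 + 5 + 2 = 12 features
```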
Use of irregular vocabulary (4 features): During the initial analysis of our training dataset, we noticed the presence of a high number of foreign words. As it is not common in Bulgarian news articles to use words in another language, we thought that their presence could be a valuable feature to use. One of the reasons for their occurrence might be that they were translated from a foreign resource, or that they were borrowed. We further found that many articles that were labelled as fake news contained a high number of slang words, and we added this as a feature as well. Finally, we have a feature that counts the typos in the text.
General lexical features are often used in natural language processing as they are somewhat task-independent and reasonably effective in terms of classification accuracy. In our experiments, we used TF.IDF-based features over the title and over the content of the article we wanted to classify. We had these features twice – once for the title and once for the content of the article, as we wanted to have two different representations of the same article. Thus, we used a total of 1,100 TF.IDF-weighted features (800 content + 300 title), limiting the vocabulary to the top 800 and 300 words, respectively (which occurred in more than five articles). We should note that TF.IDF features should be used with caution as they may not remain relevant over time or in different contexts without retraining.
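A compact sketch with scikit-learn; tokenisation defaults and the min_df setting are assumptions:

```python
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_features(titles, contents):
    """1,100 TF.IDF features: top 300 title terms + top 800 content terms."""
    title_vec = TfidfVectorizer(max_features=300, min_df=6)     # > 5 articles
    content_vec = TfidfVectorizer(max_features=800, min_df=6)
    X = hstack([title_vec.fit_transform(titles),
                content_vec.fit_transform(contents)])
    return X, (title_vec, content_vec)
```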
The last type of hand-crafted features that we used are the grammatical features. First, we evaluate how often stop words are used in the content of the article. Extensive usage of stop words may indicate irregularities in the text, which would be missed by the above features. Additionally, we extract ten coarse-grained part-of-speech tags from the content of the article and we use part-of-speech occurrence ratios as features. This makes a total of twenty features, as we have separate features for the title and for the contents.
All the above features are hand-crafted, evaluating a specific text metric or checking whether specific words highly correlate with one of the classes. However, we lack features that target the semantic representation of the text itself. Thus, we further use two types of word representations.
Word embeddings (601 features). As we said above, we trained domain-specific word embeddings. In order to incorporate them as features, we calculate the average vector for the title and separately for the content of the news article. We end up with two 300-dimensional embedding representations of the semantics of the articles, which we use as 300+300=600 features. We also compute the cosine similarity between the average vector of the title and the average vector of the content, because we believe that this is a highly indicative measure for at least click-bait articles, whose content differs from what their title says.
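A sketch of the 601 embedding features, assuming a gensim-style keyed vector store:

```python
import numpy as np

def embedding_features(title_tokens, content_tokens, wv, dim=300):
    """601 features: mean title vector, mean content vector, and their cosine."""
    def mean_vec(tokens):
        vecs = [wv[t] for t in tokens if t in wv]
        return np.mean(vecs, axis=0) if vecs else np.zeros(dim)
    t, c = mean_vec(title_tokens), mean_vec(content_tokens)
    denom = np.linalg.norm(t) * np.linalg.norm(c)
    cos = float(t @ c / denom) if denom > 0 else 0.0
    return np.concatenate([t, c, [cos]])
```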
Task-specific embeddings. As a more advanced representation, we feed the text into an attention-based deep neural network, which we train to produce a task-specific embedding of the news articles. The network is designed to recognize words and sentences that contribute to the click-bait class attribution. The architecture is described in details in Section UID15
Some Features that we Ignored
As we mentioned above, our method is purely text-based. Thus, we ignored the publishing date of the article. In future work, it could be explored as a useful piece of information about the credibility of the article, as there is interesting research in this direction BIBREF24 . We also disregarded the article source (the URL) because websites that specialize in producing and distributing fake content are often banned and then later reappear under another name. We recognize that the credibility of a specific website could be a very informative feature, but, for the sake of creating a robust method for fake news detection, our system relies only on the text when predicting whether the target article is likely to be fake. We describe our features in more detail below.
Model
Our framework for fake news detection is comprised of two components, which are used one after the other. First, we have an attention-based deep neural network model, which focuses on the segments of the text that are most indicative of the target class identification, and as a side effect learns task-specific representations of the news articles. We extract these representations from the last hidden layer in the network, and we feed it to the SVM classifier together with the hand-crafted features.
The attention network BIBREF31 , BIBREF32 is a powerful mechanism, inspired by the human ability to spot important sections in images or text. We adopt the approach used in BIBREF33 and employ an attention neural network to build attention over the text of a piece of news with respect to its title. As it is in the nature of click-baits to have titles that differ from the text of the news, the attentional layers of the neural network should spot when the two texts talk about the same thing and when they do not correspond or are not accurate. We implemented the attention mechanism using Keras BIBREF34 with the TensorFlow back-end BIBREF35 .
The architecture of the network with attention layers is shown in Figure FIGREF16 . Our neural model is based on Gated Recurrent Units (GRUs). GRUs are a gating mechanism in RNNs which provides the ability to learn long-term dependencies; they were first introduced in BIBREF36 . Given the document embedding, the GRUs build representations using input and forget gates, which help store the valuable information through time. They build embeddings of the title and the text of the news, where at each step the unit has information only about the output from the previous step. This can be considered a drawback, as we would benefit considerably if each step could base its decision not only on the previous step's output, but on all of the words processed so far. To improve this, the attention layer, for each step in the text sequence, uses the output of the steps in the title sequence. Thus, the layer learns weights designating the strength of the relatedness between each word in the title and each word in the content.
For the neural network, we take the first 50 symbols of the title and of the content of the news, a length we chose after experiments. We train the neural network for 20 epochs, and the final classification is derived with a sigmoid activation. The optimizer used for training is the Adam optimizer. We feed the neural network with the word embeddings we built earlier with word2vec.
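A hedged Keras sketch of such an architecture; the layer sizes, the use of dot-product attention, and the pooling step are our assumptions, with only the overall shape (two GRU encoders, attention of the content over the title, a 128-dimensional hidden representation, sigmoid output) following the description:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_attention_model(vocab_size, emb_matrix, seq_len=50, emb_dim=300):
    title_in = layers.Input(shape=(seq_len,), name="title")
    content_in = layers.Input(shape=(seq_len,), name="content")
    embed = layers.Embedding(
        vocab_size, emb_dim,
        embeddings_initializer=tf.keras.initializers.Constant(emb_matrix),
        trainable=False)
    title_seq = layers.GRU(64, return_sequences=True)(embed(title_in))
    content_seq = layers.GRU(64, return_sequences=True)(embed(content_in))

    # Each content step attends over the title steps (dot-product attention).
    attended = layers.Attention()([content_seq, title_seq])
    merged = layers.Concatenate()([content_seq, attended])
    summary = layers.GlobalAveragePooling1D()(merged)

    hidden = layers.Dense(128, activation="relu", name="article_embedding")(summary)
    out = layers.Dense(1, activation="sigmoid")(hidden)

    model = Model([title_in, content_in], out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```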
As we will see below, the neural network is inferior in terms of performance to a feature-rich SVM (even though it performs well above the baseline). This is because it only has access to word embeddings, and does not use the manually-crafted features. Yet, its hidden layer represents a 128-dimensional task-specific embedding of the input article, and it turns out that using it as a list of 128 features in the SVM classifier yields even further great improvement, as we will see below. In this way, we combine a deep neural network with an attention mechanism with kernel-based SVM.
We feed the above-described hand-crafted features together with the task-specific embeddings learned by the deep neural network (a total of 1,892 attributes combined) into a Support Vector Machines (SVM) classifier BIBREF37 . SVMs have proven to perform well in different classification settings, including in the case of small and noisy datasets.
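A sketch of this combination step, reusing the hypothetical layer name from the network sketch above:

```python
import numpy as np
from tensorflow.keras import Model
from sklearn.svm import SVC

def train_combined_svm(att_model, nn_inputs, handcrafted, labels):
    # Task-specific 128-dim article embedding from the network's hidden layer.
    encoder = Model(att_model.inputs,
                    att_model.get_layer("article_embedding").output)
    task_emb = encoder.predict(nn_inputs)        # shape: (n_articles, 128)
    # Concatenate hand-crafted features with the task-specific embedding.
    X = np.hstack([handcrafted, task_emb])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf
```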
Experiments and Evaluation
We trained on the 2,815 training examples, and we tested on the 761 testing ones. The test dataset was provided separately from the training one, thus we did not have to partition the original dataset to obtain a testing one. The validation of the models was performed on a randomly chosen subset of sentences - one fifth of the original set. We scaled each feature individually by its maximum absolute value to end up with each feature having values in the [0;1] interval. We used an RBF kernel for the SVM, and we tuned the values of $C$ and $\gamma$ using cross-validation. We trained the neural network using RMSProp BIBREF38 with a learning rate of 0.001 and mini-batches of size 32, chosen by performing experiments with cross-validation. We evaluated the model after each epoch and we kept the one that performed best on the development dataset.
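A sketch of the scaling and tuning step with scikit-learn; the parameter grid is illustrative:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MaxAbsScaler
from sklearn.svm import SVC

# Scale each feature by its maximum absolute value, then tune the RBF SVM.
pipe = make_pipeline(MaxAbsScaler(), SVC(kernel="rbf"))
grid = GridSearchCV(pipe, {"svc__C": [0.1, 1, 10, 100],
                           "svc__gamma": ["scale", 0.01, 0.001]}, cv=5)
# grid.fit(X_train, y_train); grid.score(X_test, y_test)
```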
Table TABREF17 shows the performance of the features in groups as described in Section SECREF7 . We can see that, among the hand-crafted features, the lexical features yield the best results, i.e., words are the most indicative features. The good results of the stylometric features indicate that the intricacies of language use are highly discriminative. The next group is the one with the grammatical features, which shows good performance in terms of Precision. The last group comprises the embedding features, which, although having low individual performance, contribute to the overall performance of the system, as shown in the next paragraph.
Evaluating the final model, we set as a baseline the prediction of the majority class, i.e., the fake news class. This baseline has an F1 of 41.59% and accuracy of 71.22%. The performance of the built models can be seen in Table TABREF19 . Another stable baseline, apart from just taking the majority class, is the TF.IDF bag-of-words approach, which sets a high bar for the general model score. We then observe how much the attention mechanism embeddings improve the score (AttNN). Finally, we add the hand-crafted features (Feats), which further improve the performance. From the results, we can conclude that both the attention-based task-specific embeddings and the manual features are important for the task of finding fake news.
Conclusion and Future Work
We have presented the first attempt to solve the fake news problem for Bulgarian. Our method is purely text-based, and ignores the publication date and the source of the article. It combines task-specific embeddings, produced by a two-level attention-based deep neural network model, with manually crafted features (stylometric, lexical, grammatical, and semantic), into a kernel-based SVM classifier. We further produced and shared a number of relevant language resources for Bulgarian, which we created for solving the task.
The evaluation results are encouraging and suggest the potential applicability of our approach in a real-world scenario. They further show the potential of combining attention-based task-specific embeddings with manually crafted features. An important advantage of the attention-based neural networks is that the produced representations can be easily visualized and potentially interpreted as shown in BIBREF31 . We consider the implementation of such visualization as an important future work on the task.
Acknowledgements
We would like to thank Lachezar Bozhkov, who was part of our team in the Hack the Fake News hackathon, for his insight. This work is supported by the NSF of Bulgaria under Grant No. DN-02/11/2016 - ITDGate. | Support Vector Machines (SVM) classifier |
96992460cfc5f0b8d065ee427067147293746b7a | 96992460cfc5f0b8d065ee427067147293746b7a_0 | Q: what are their evaluation metrics?
Text: Introduction
Fake news are written and published with the intent to mislead in order to gain financially or politically, often targeting specific user groups. Another type of harmful content on the Internet are the so-called click-baits, which are distinguished by their sensational, exaggerated, or deliberately false headlines that grab attention and deceive the user into clicking an article with questionable content.
While the motives behind these two types of fake news are different, they constitute a growing problem as they constitute a sizable fraction of the online news that users encounter on a daily basis. With the recent boom of Internet, mobile, and social networks, the spread of fake news increases exponentially. Using on-line methods for spreading harmful content makes the task of keeping the Internet clean significantly harder as it is very easy to publish an article and there is no easy way to verify its veracity. Currently, domains that consistently spread misinformation are being banned from various platforms, but this is a rather inefficient way to deal with fake news as websites that specialize in spreading misinformation are reappearing with different domain names. That is why our method is based purely on text analysis, without taking into account the domain name or website's reliability as a source of information. Our work is focused on exploring various stylistic and lexical features in order to detect misleading content, and on experiments with neural network architectures in order to evaluate how deep learning can be used for detecting fake news. Moreover, we created various language-specific resources that could be used in future work on fake news and clickbait detection for Bulgarian, including task-specific word embeddings and various lexicons and dictionaries extracted from the training data.
Related Work
Trustworthiness and veracity analytics of on-line statements is an emerging research direction BIBREF0 . This includes predicting credibility of information shared in social media BIBREF1 , stance classification BIBREF2 and contradiction detection in rumours BIBREF3 . For example, Castillo et al. (2011) studied the problem of finding false information about a newsworthy event. They compiled their own dataset, focusing on tweets using a variety of features including user reputation, author writing style, and various time-based features. Canini et al. (2011) analysed the interaction of content and social network structure, and Morris et al. (2012) studied how Twitter users judge truthfulness. They found that this is hard to do based on content alone, and instead users are influenced by heuristics such as user name.
Rumour detection in social media represents yet another angle of information credibility. Zubiaga et al. (2015) studied how people handle rumours in social media. They found that users with higher reputation are more trusted, and thus can spread rumours among other users without raising suspicions about the credibility of the news or of its source. Lukasik et al. (2015) and Ma et al. (2015) used temporal patterns to detect rumours and to predict their frequency, a 2016 PLOS ONE study focused on conversational threads, and a RANLP 2017 system used deep learning to verify claims using the Web as a corpus.
Veracity of information has been also studied in the context of online personal blogs BIBREF4 , community question answering forums BIBREF5 , and political debates BIBREF6 .
Astroturfing and misinformation detection represent another relevant research direction. Their importance is motivated by the strong interest from political science, and research methods are driven by the presence of massive streams of micro-blogging data, e.g., on Twitter BIBREF7 . While astroturfing has been primarily studied in microblogs such as Twitter, here we focus on on-line news and click-baits instead.
Identification of malicious accounts in social networks is another related research direction. This includes detecting spam accounts BIBREF8 , BIBREF9 , fake accounts BIBREF10 , BIBREF11 , compromised accounts and phishing accounts BIBREF12 . Fake profile detection has also been studied in the context of cyber-bullying BIBREF13 . A related problem is that of Web spam detection, which was addressed as a text classification problem BIBREF14 , e.g., using spam keyword spotting BIBREF15 , lexical affinity of arbitrary words to spam content BIBREF16 , frequency of punctuation and word co-occurrence BIBREF17 .
Fake news detection is most closely related to the present work. While social media have been seen for years as the main vehicle for spreading information of questionable veracity, recently there has been a proliferation of fake news, often spread on social media, but also published in specialized websites. This has attracted research attention recently. For example, there has been work on studying credibility, trust, and expertise in news communities BIBREF18 . The credibility of the information published in on-line news portals has been questioned by a number of researchers BIBREF19 , BIBREF20 , BIBREF21 . As timing is crucial when it comes to publishing breaking news, it is simply not possible to double-check the facts and the sources, as is usually standard in respectable printed newspapers and magazines. This is one of the biggest concerns about on-line news media that journalists have BIBREF22 . Finally, Conroy et al. (2015) review various methods for detecting fake news, e.g., using linguistic analysis, discourse, linked data, and social network features.
All the above work was for English. The only work on fact checking for Bulgarian is that of BIBREF23 , but they focused on distinguishing serious news from humorous ones. In contrast, here we are interested in finding news that are not designed to sound funny, but to make the reader believe they are real. Unlike them, we use a deep learning approach.
Fake News & Click-bait Dataset
We use a corpus of Bulgarian news over a fixed period of time, whose factuality had been questioned. The news come from 377 different sources from various domains, including politics, interesting facts and tips&tricks. The dataset was prepared for the Hack the Fake News hackathon. It was provided by the Bulgarian Association of PR Agencies and is available in Gitlab. The corpus was automatically collected, and then annotated by students of journalism. Each entry in the dataset consists of the following elements: URL of the original article, date of publication, article heading, article content, a label indicating whether the article is fake or not, and another label indicating whether it is a click-bait.
The training dataset contains 2,815 examples, where 1,940 (i.e., 69%) are fake news and 1,968 (i.e., 70%) are click-baits; we further have 761 testing examples. However, there is 98% correlation between fake news and click-baits, i.e., a model trained on fake news would do well on click-baits and vice versa. Thus, below we focus on fake news detection only.
One important aspect about the training dataset is that it contains many repetitions. This should not be surprising as it attempts to represent a natural distribution of factual vs. fake news on-line over a period of time. As publishers of fake news often have a group of websites that feature the same deceiving content, we should expect some repetition.
In particular, the training dataset contains 434 unique articles with duplicates. These articles have three reposts each on average, with the most reposted article appearing 45 times. If we take into account the labels of the reposted articles, we can see that if an article is reposted, it is more likely to be fake news. The number of fake news articles that have a duplicate in the training dataset is 1,018, whereas the number of articles with genuine content that have a duplicate article in the training set is 322. We detect the duplicates based on their titles, as these are distinctive enough, while the content is sometimes slightly modified when reposted.
This supports the hypothesis that fake news websites are likely to repost their content. This is also in line with previous research BIBREF24 , which has found it beneficial to find a pattern of how a rumour is reposted over time.
Method
We propose a general framework for finding fake news focusing on the text only. We first create some resources, e.g., dictionaries of words strongly correlated with fake news, which are needed for feature extraction. Then, we design features that model a number of interesting aspects about an article, e.g., style, intent, etc. Moreover, we use a deep neural network to learn task-specific representations of the articles, which includes an attention mechanism that can focus on the most discriminative sentences and words.
Language Resources
As our work is the first attempt at predicting click-baits in Bulgarian, it is organized around building new language-specific resources and analyzing the task.
Word embeddings: We train 300-dimensional domain-specific word embeddings using word2vec BIBREF25 on 100,000 Bulgarian news articles from the same sources as the main dataset. The labelled dataset we use in our system is a subset of these articles. Finally, we end up with 207,270 unique words that occur in five or more documents. We use these embeddings for text representation, and as an input to our attention-based neural network.
Latent Dirichlet allocation (LDA): We use LDA BIBREF26 in order to build domain-specific topic models, which could be useful for inducing classes of words that signal fake/factual news. The LDA model is trained on the same 100,000 Bulgarian news articles as for training the word embeddings. In our experiments, these LDA classes proved helpful by themselves, but they did not have much to offer on top of the word embeddings. Thus, we ended up not using them in our final system, but we chose to still release them as other researchers might find them useful in the future.
Fact-checking lexicon: Using lexicons of sentiment words has been shown to be very successful for the task of sentiment analysis BIBREF27 , and we applied the same idea to extract a fact-checking lexicon. In particular, we use point-wise mutual information (PMI) to find terms (words, word bi-grams, and named entities) that are highly correlated with the fake/factual news class. We calculated the PMI scores for uni-grams, bi-grams and on extracted named entities. Table TABREF9 shows some of the most significant words for the fake news class. We can see in the table some words that grab people attention, but are not very informative by themselves, such as mysterious or phenomenon. These words are largely context-independent and are likely to remain stable in their usage across different domains and even over an extended period of time. Thus, they should be useful beyond this task and this dataset.
Other lexicons: Finally, we create four lexicons that can help to model the difference in language use between fake and factual news articles. In particular, we explored and merged/cleansed a number of on-line resources in order to put together the following lexicons: (i) common typos in Bulgarian written text, (ii) Bulgarian slang words, (iii) commonly used foreign words, and (iv) English words with Bulgarian equivalents. We separate the latter two, because of the frequent usage of English words in common language. We make these lexicons freely available for future research.
Features
Fake news are written with the intent to deceive, and their authors often use a different style of writing compared to authors that create genuine content. This could be either deliberately, e.g., if the author wants to adapt the text to a specific target group or wants to provoke some particular emotional reaction in the reader, or unintentionally, e.g., because the authors of fake news have different writing style and personality compared to journalists in mainstream media. Disregarding the actual reason, we use features from author profiling and style detection BIBREF28 .
Use of specific words that have strong correlation with one of the classes (48 features). We used the above-described PMI-based fact-checking lexicons to extract features based on the presence of lexicon words in the target article. We end up with the following features: 16 for uni-grams + 16 for bi-grams + 16 for named entities, where we have a feature for the sum and also for the average of the word scores for each of the target classes (click-bait, non-click-bait, fake, non-fake), and we had these features separately for the title and for the body of the article.
Readability index (4 features): We calculate standard readability metrics including the type-token ratio, average word length, Flesch–Kincaid readability test BIBREF29 and Gunning-Fog index BIBREF30 . The last two metrics give scores to the text corresponding to the school grade the reader of the target article should have in order to be able to read and understand it easily. These metrics use statistics about the number of syllables, the number of words, and their length.
Orthographic features (12 features): The orthographic features used in our system include: the number of words in the title and in the content; the number of characters in the title and in the content; the number of specific symbols in the title and in the content, counting the following as symbols $.!;#?:-+%(), ; the number of capital letters in the title and in the content; the fraction of capital letters to all letters in the title and in the content; the number of URLs in the content; the overlap between the words from the title and the words of the content, relying on the fact that click-baits tend to have content that does not quite match their title. These features can be very effective for modelling the author's style.
Use of irregular vocabulary (4 features): During the initial analysis of our training dataset, we noticed the presence of a high number of foreign words. As it is not common in Bulgarian news articles to use words in another language, we thought that their presence could be a valuable feature to use. One of the reasons for their occurrence might be that they were translated from a foreign resource, or that they were borrowed. We further found that many articles that were labelled as fake news contained a high number of slang words, and we added this as a feature as well. Finally, we have a feature that counts the typos in the text.
General lexical features are often used in natural language processing as they are somewhat task-independent and reasonably effective in terms of classification accuracy. In our experiments, we used TF.IDF-based features over the title and over the content of the article we wanted to classify. We had these features twice – once for the title and once for the content of the article, as we wanted to have two different representations of the same article. Thus, we used a total of 1,100 TF.IDF-weighted features (800 content + 300 title), limiting the vocabulary to the top 800 and 300 words, respectively (which occurred in more than five articles). We should note that TF.IDF features should be used with caution as they may not remain relevant over time or in different contexts without retraining.
The last type of hand-crafted features that we used are the grammatical features. First, we evaluate how often stop words are used in the content of the article. Extensive usage of stop words may indicate irregularities in the text, which would be missed by the above features. Additionally, we extract ten coarse-grained part-of-speech tags from the content of the article and we use part-of-speech occurrence ratios as features. This makes a total of twenty features, as we have separate features for the title and for the contents.
All the above features are hand-crafted, evaluating a specific text metric or checking whether specific words highly correlate with one of the classes. However, we lack features that target the semantic representation of the text itself. Thus, we further use two types of word representations.
Word embeddings (601 features). As we said above, we trained domain-specific word embeddings. In order to incorporate them as features, we calculate the average vector for the title and separately for the content of the news article. We end up with two 300-dimensional embedding representations of the semantics of the articles, which we use as 300+300=600 features. We also compute the cosine similarity between the average vector of the title and the average vector of the content, because we believe that this is a highly indicative measure for at least click-bait articles, whose content differs from what their title says.
Task-specific embeddings. As a more advanced representation, we feed the text into an attention-based deep neural network, which we train to produce a task-specific embedding of the news articles. The network is designed to recognize words and sentences that contribute to the click-bait class attribution. The architecture is described in details in Section UID15
Some Features that we Ignored
As we mentioned above, our method is purely text-based. Thus, we ignored the publishing date of the article. In future work, it could be explored as a useful piece of information about the credibility of the article, as there is interesting research in this direction BIBREF24 . We also disregarded the article source (the URL) because websites that specialize in producing and distributing fake content are often banned and then later reappear under another name. We recognize that the credibility of a specific website could be a very informative feature, but, for the sake of creating a robust method for fake news detection, our system relies only on the text when predicting whether the target article is likely to be fake. We describe our features in more detail below.
Model
Our framework for fake news detection is comprised of two components, which are used one after the other. First, we have an attention-based deep neural network model, which focuses on the segments of the text that are most indicative of the target class identification, and as a side effect learns task-specific representations of the news articles. We extract these representations from the last hidden layer in the network, and we feed it to the SVM classifier together with the hand-crafted features.
The attention network BIBREF31, BIBREF32 is a powerful mechanism, inspired by the human ability to spot important sections in images or text. We adopt the approach used in BIBREF33 and employ an attention neural network to build attention over the text of a news piece with respect to its title. Since it is in the nature of click-baits to have titles that differ from the text of the news, the attention layers of the neural network should spot when the two texts talk about the same thing and when they do not correspond. We implemented the attention mechanism using Keras BIBREF34 with the TensorFlow back-end BIBREF35.
The architecture of the network with attention layers is shown in Figure FIGREF16. Our neural model is based on Gated Recurrent Units (GRUs). GRUs are a gating mechanism for RNNs that provides the ability to learn long-term dependencies; they were first introduced in BIBREF36. Given the document embedding, the GRUs build representations using input and forget gates, which help store valuable information over time. They build embeddings of the title and of the text of the news, where at each step the unit has information only about the output from the previous step. This can be considered a drawback, since we would benefit considerably if each step could base its decision not only on the previous step's output, but on all of the words processed so far. To address this, the attention layer, for each step in the text sequence, uses the output of the steps in the title sequence. Thus, the layer learns weights designating the strength of the relatedness between each word in the title and each word in the content.
For the neural network, we take the first 50 symbols of the title and of the content of the news, a length we chose after experimentation. We train the neural network for 20 epochs, and the final classification is derived with a sigmoid activation. The optimizer used for training is the Adam optimizer. We feed the neural network with the word embeddings we built earlier with word2vec.
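The Keras sketch below is in the spirit of this description: GRU encoders for the title and the content, dot-product attention of the content over the title, a 128-dimensional hidden layer, and a sigmoid output. It is a simplified stand-in rather than the authors' exact architecture; only the 300-dimensional embeddings, the length-50 inputs, and the 128-dimensional hidden layer come from the text, while the GRU size, pooling, and layer names are assumptions.

from tensorflow.keras import layers, initializers, Model

def build_attention_model(vocab_size, embedding_matrix, maxlen=50, emb_dim=300):
    title_in = layers.Input(shape=(maxlen,), name="title")
    content_in = layers.Input(shape=(maxlen,), name="content")
    embed = layers.Embedding(vocab_size, emb_dim, trainable=False,
                             embeddings_initializer=initializers.Constant(embedding_matrix))
    title_seq = layers.GRU(64, return_sequences=True)(embed(title_in))
    content_seq = layers.GRU(64, return_sequences=True)(embed(content_in))
    # Dot-product attention: the content states attend over the title states.
    attended = layers.Attention()([content_seq, title_seq])
    merged = layers.Concatenate()([layers.GlobalAveragePooling1D()(attended),
                                   layers.GlobalAveragePooling1D()(content_seq)])
    task_embedding = layers.Dense(128, activation="relu", name="task_embedding")(merged)
    output = layers.Dense(1, activation="sigmoid")(task_embedding)
    model = Model([title_in, content_in], output)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model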
As we will see below, the neural network is inferior in terms of performance to a feature-rich SVM (even though it performs well above the baseline). This is because it only has access to the word embeddings and does not use the manually crafted features. Yet, its hidden layer represents a 128-dimensional task-specific embedding of the input article, and it turns out that using it as a list of 128 features in the SVM classifier yields a further sizable improvement, as we will see below. In this way, we combine a deep neural network that has an attention mechanism with a kernel-based SVM.
We feed the above-described hand-crafted features, together with the task-specific embeddings learned by the deep neural network (a total of 1,892 attributes combined), into a Support Vector Machine (SVM) classifier BIBREF37. SVMs have proven to perform well in different classification settings, including in the case of small and noisy datasets.
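One possible way to wire the two components together is sketched below, assuming the network from the previous sketch with a layer named task_embedding (an illustrative name, not taken from the paper); the SVM hyperparameters would still need tuning.

import numpy as np
from tensorflow.keras import Model
from sklearn.svm import SVC

def combine_and_train(trained_net, X_title_ids, X_content_ids, X_handcrafted, y):
    # Re-use the trained network up to its last hidden layer as a feature extractor.
    extractor = Model(trained_net.inputs, trained_net.get_layer("task_embedding").output)
    task_embeddings = extractor.predict([X_title_ids, X_content_ids])
    X_combined = np.hstack([X_handcrafted, task_embeddings])
    svm = SVC(kernel="rbf")  # C and gamma to be tuned by cross-validation
    svm.fit(X_combined, y)
    return svm, extractor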
Experiments and Evaluation
We trained on the 2,815 training examples, and we tested on the 761 test ones. The test dataset was provided separately from the training one, and thus we did not have to hold out part of the original dataset for testing. The validation of the models was performed on a randomly chosen subset of the training examples - one fifth of the original set. We scaled each feature individually by its maximum absolute value, so that each feature ends up with values in the [0;1] interval. We used an RBF kernel for the SVM, and we tuned the values of INLINEFORM0 and INLINEFORM1 using cross-validation. We trained the neural network using RMSProp BIBREF38 with a learning rate of 0.001 and mini-batches of size 32, chosen by performing experiments with cross-validation. We evaluated the model after each epoch and kept the one that performed best on the development dataset.
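A short scikit-learn sketch of the scaling and tuning step; the grid values below are illustrative, since the text only states that the RBF-SVM hyperparameters were tuned by cross-validation.

from sklearn.preprocessing import MaxAbsScaler
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def tune_svm(X_train, y_train):
    scaler = MaxAbsScaler()            # divides each feature by its maximum absolute value
    X_scaled = scaler.fit_transform(X_train)
    grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.001, 0.01, 0.1]}
    search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5, scoring="f1")
    search.fit(X_scaled, y_train)
    return search.best_estimator_, scaler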
Table TABREF17 shows the performance of the features in groups, as described in Section SECREF7. We can see that, among the hand-crafted features, the lexical features yield the best results, i.e., words are the most indicative features. The good results of the stylometric features indicate that the intricacies of language use are highly discriminative. The next group is the one with the grammatical features, which shows good performance in terms of precision. The last group contains the embedding features, which, although weak individually, contribute to the overall performance of the system, as shown in the next paragraph.
When evaluating the final model, we set as a baseline the prediction of the majority class, i.e., the fake news class. This baseline has an F1 of 41.59% and an accuracy of 71.22%. The performance of the resulting models can be seen in Table TABREF19. Another solid baseline, apart from just taking the majority class, is the TF.IDF bag-of-words approach, which sets a high bar for the overall model score. We then observe how much the attention-mechanism embeddings improve the score (AttNN). Finally, we add the hand-crafted features (Feats), which further improve the performance. From the results, we can conclude that both the attention-based task-specific embeddings and the manual features are important for the task of finding fake news.
Conclusion and Future Work
We have presented the first attempt to solve the fake news problem for Bulgarian. Our method is purely text-based, and it ignores the publication date and the source of the article. It combines task-specific embeddings, produced by a two-level attention-based deep neural network model, with manually crafted features (stylometric, lexical, grammatical, and semantic) in a kernel-based SVM classifier. We further produced and shared a number of relevant language resources for Bulgarian, which we created while solving the task.
The evaluation results are encouraging and suggest the potential applicability of our approach in a real-world scenario. They further show the value of combining attention-based task-specific embeddings with manually crafted features. An important advantage of attention-based neural networks is that the produced representations can be easily visualized and potentially interpreted, as shown in BIBREF31. We consider implementing such visualization an important direction for future work on the task.
Acknowledgements
We would like to thank Lachezar Bozhkov, who was part of our team in the Hack the Fake News hackathon, for his insight. This work is supported by the NSF of Bulgaria under Grant No. DN-02/11/2016 - ITDGate. | F1, accuracy |
363ddc06db5720786ed440927d7fbb7d0a8078ae | 363ddc06db5720786ed440927d7fbb7d0a8078ae_0 | Q: what types of features were used?
Text: Introduction
Fake news are written and published with the intent to mislead in order to gain financially or politically, often targeting specific user groups. Another type of harmful content on the Internet is the so-called click-bait: articles distinguished by sensational, exaggerated, or deliberately false headlines that grab attention and deceive the user into clicking an article with questionable content.
While the motives behind these two types of fake news are different, they pose a growing problem, as they constitute a sizable fraction of the online news that users encounter on a daily basis. With the recent boom of the Internet, mobile devices, and social networks, the spread of fake news increases exponentially. The use of on-line channels for spreading harmful content makes the task of keeping the Internet clean significantly harder, as it is very easy to publish an article and there is no easy way to verify its veracity. Currently, domains that consistently spread misinformation are being banned from various platforms, but this is a rather inefficient way to deal with fake news, as websites that specialize in spreading misinformation reappear under different domain names. That is why our method is based purely on text analysis, without taking into account the domain name or the website's reliability as a source of information. Our work is focused on exploring various stylistic and lexical features in order to detect misleading content, and on experiments with neural network architectures in order to evaluate how deep learning can be used for detecting fake news. Moreover, we created various language-specific resources that could be used in future work on fake news and click-bait detection for Bulgarian, including task-specific word embeddings and various lexicons and dictionaries extracted from the training data.
Related Work
Trustworthiness and veracity analytics of on-line statements is an emerging research direction BIBREF0 . This includes predicting credibility of information shared in social media BIBREF1 , stance classification BIBREF2 and contradiction detection in rumours BIBREF3 . For example, Castillo:2011:ICT:1963405.1963500 studied the problem of finding false information about a newsworthy event. They compiled their own dataset, focusing on tweets using a variety of features including user reputation, author writing style, and various time-based features. Canini:2011 analysed the interaction of content and social network structure, and Morris:2012:TBU:2145204.2145274 studied how Twitter users judge truthfulness. They found that this is hard to do based on content alone, and instead users are influenced by heuristics such as user name.
Rumour detection in social media represents yet another angle of information credibility. zubiaga2015analysing studied how people handle rumours in social media. They found that users with higher reputation are more trusted, and thus can spread rumours among other users without raising suspicions about the credibility of the news or of its source. lukasik-cohn-bontcheva:2015:ACL-IJCNLP and Ma:2015:DRU used temporal patterns to detect rumours and to predict their frequency, PlosONE:2016 focused on conversational threads, and RANLP2017:factchecking:external used deep learning to verify claims using the Web as a corpus.
Veracity of information has been also studied in the context of online personal blogs BIBREF4 , community question answering forums BIBREF5 , and political debates BIBREF6 .
Astroturfing and misinformation detection represent another relevant research direction. Their importance is motivated by the strong interest from political science, and research methods are driven by the presence of massive streams of micro-blogging data, e.g., on Twitter BIBREF7 . While astroturfing has been primarily studied in microblogs such as Twitter, here we focus on on-line news and click-baits instead.
Identification of malicious accounts in social networks is another related research direction. This includes detecting spam accounts BIBREF8 , BIBREF9 , fake accounts BIBREF10 , BIBREF11 , compromised accounts and phishing accounts BIBREF12 . Fake profile detection has also been studied in the context of cyber-bullying BIBREF13 . A related problem is that of Web spam detection, which was addressed as a text classification problem BIBREF14 , e.g., using spam keyword spotting BIBREF15 , lexical affinity of arbitrary words to spam content BIBREF16 , frequency of punctuation and word co-occurrence BIBREF17 .
Fake news detection is most closely related to the present work. While social media have been seen for years as the main vehicle for spreading information of questionable veracity, recently there has been a proliferation of fake news, often spread on social media, but also published in specialized websites. This has attracted research attention recently. For example, there has been work on studying credibility, trust, and expertise in news communities BIBREF18 . The credibility of the information published in on-line news portals has been questioned by a number of researchers BIBREF19 , BIBREF20 , BIBREF21 . As timing is crucial when it comes to publishing breaking news, it is simply not possible to double-check the facts and the sources, as is usually standard in respectable printed newspapers and magazines. This is one of the biggest concerns about on-line news media that journalists have BIBREF22 . Finally, conroy2015automatic review various methods for detecting fake news, e.g., using linguistic analysis, discourse, linked data, and social network features.
All the above work was for English. The only work on fact checking for Bulgarian is that of BIBREF23 , but they focused on distinguishing serious news from humorous ones. In contrast, here we are interested in finding news that are not designed to sound funny, but to make the reader believe they are real. Unlike them, we use a deep learning approach.
Fake News & Click-bait Dataset
We use a corpus of Bulgarian news articles, collected over a fixed period of time, whose factuality had been questioned. The news come from 377 different sources and various domains, including politics, interesting facts, and tips & tricks. The dataset was prepared for the Hack the Fake News hackathon. It was provided by the Bulgarian Association of PR Agencies and is available on GitLab. The corpus was automatically collected and then annotated by students of journalism. Each entry in the dataset consists of the following elements: URL of the original article, date of publication, article heading, article content, a label indicating whether the article is fake or not, and another label indicating whether it is a click-bait.
The training dataset contains 2,815 examples, where 1,940 (i.e., 69%) are fake news and 1,968 (i.e., 70%) are click-baits; we further have 761 testing examples. However, there is 98% correlation between fake news and click-baits, i.e., a model trained on fake news would do well on click-baits and vice versa. Thus, below we focus on fake news detection only.
One important aspect about the training dataset is that it contains many repetitions. This should not be surprising as it attempts to represent a natural distribution of factual vs. fake news on-line over a period of time. As publishers of fake news often have a group of websites that feature the same deceiving content, we should expect some repetition.
In particular, the training dataset contains 434 unique articles that have duplicates. These articles have three reposts each on average, with the most reposted article appearing 45 times. If we take into account the labels of the reposted articles, we can see that if an article is reposted, it is more likely to be fake news: the number of fake news articles that have a duplicate in the training dataset is 1,018, whereas the number of articles with genuine content that have a duplicate in the training set is 322. We detect the duplicates based on their titles, since they are distinctive enough, and the content is sometimes slightly modified when reposted.
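A toy sketch of how such repost statistics can be computed by grouping articles on a normalized title; the dictionary keys and label values are assumptions about the data format, not the hackathon schema.

from collections import Counter

def repost_statistics(articles):
    # articles: list of dicts with at least "title" and "label" keys (assumed format).
    title_counts = Counter(a["title"].strip().lower() for a in articles)
    duplicated_titles = {t for t, c in title_counts.items() if c > 1}
    reposted = [a for a in articles if a["title"].strip().lower() in duplicated_titles]
    fake_reposted = sum(a["label"] == "fake" for a in reposted)
    return len(duplicated_titles), len(reposted), fake_reposted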
This supports the hypothesis that fake news websites are likely to repost their content. This is also in line with previous research BIBREF24 , which has found it beneficial to find a pattern of how a rumour is reposted over time.
Method
We propose a general framework for finding fake news focusing on the text only. We first create some resources, e.g., dictionaries of words strongly correlated with fake news, which are needed for feature extraction. Then, we design features that model a number of interesting aspects about an article, e.g., style, intent, etc. Moreover, we use a deep neural network to learn task-specific representations of the articles, which includes an attention mechanism that can focus on the most discriminative sentences and words.
Language Resources
As our work is the first attempt at predicting click-baits in Bulgarian, it is organized around building new language-specific resources and analyzing the task.
Word embeddings: We train 300-dimensional domain-specific word embeddings using word2vec BIBREF25 on 100,000 Bulgarian news articles from the same sources as the main dataset. The labelled dataset we use in our system is a subset of these articles. Finally, we end up with 207,270 unique words that occur in five or more documents. We use these embeddings for text representation and as an input to our attention-based neural network.
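A minimal gensim sketch of this step; the 300 dimensions and the frequency cutoff follow the paragraph above (gensim's min_count counts token occurrences rather than documents, so it is only an approximation), while the remaining parameters are illustrative defaults.

from gensim.models import Word2Vec

def train_domain_embeddings(tokenized_articles):
    # tokenized_articles: list of token lists, one per news article.
    model = Word2Vec(sentences=tokenized_articles,
                     vector_size=300,   # 300-dimensional vectors, as described above
                     min_count=5,       # frequency cutoff (token-level, so only approximate)
                     window=5, workers=4)
    return model.wv                     # maps word -> 300-dimensional vector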
Latent Dirichlet allocation (LDA): We use LDA BIBREF26 in order to build domain-specific topic models, which could be useful for inducing classes of words that signal fake/factual news. The LDA model is trained on the same 100,000 Bulgarian news articles as for training the word embeddings. In our experiments, these LDA classes proved helpful by themselves, but they did not have much to offer on top of the word embeddings. Thus, we ended up not using them in our final system, but we chose to still release them as other researchers might find them useful in the future.
Fact-checking lexicon: Using lexicons of sentiment words has been shown to be very successful for the task of sentiment analysis BIBREF27, and we applied the same idea to extract a fact-checking lexicon. In particular, we use point-wise mutual information (PMI) to find terms (words, word bi-grams, and named entities) that are highly correlated with the fake/factual news class. We calculated the PMI scores for uni-grams, bi-grams, and extracted named entities. Table TABREF9 shows some of the most significant words for the fake news class. We can see in the table some words that grab people's attention but are not very informative by themselves, such as mysterious or phenomenon. These words are largely context-independent and are likely to remain stable in their usage across different domains and even over an extended period of time. Thus, they should be useful beyond this task and this dataset.
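One way to compute such PMI scores from document-level counts is sketched below; the smoothing constant and the top-k cutoff are illustrative choices, not values from the paper.

import math
from collections import Counter

def pmi_lexicon(docs, labels, target_label="fake", top_k=50):
    # docs: list of token (or n-gram) lists; labels: parallel list of class labels.
    n_docs = len(docs)
    p_class = sum(lab == target_label for lab in labels) / n_docs
    term_df, term_class_df = Counter(), Counter()
    for tokens, lab in zip(docs, labels):
        for term in set(tokens):
            term_df[term] += 1
            if lab == target_label:
                term_class_df[term] += 1
    scores = {}
    for term, df in term_df.items():
        p_term = df / n_docs
        p_joint = (term_class_df[term] + 1e-9) / n_docs   # smoothed joint probability
        scores[term] = math.log(p_joint / (p_term * p_class))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]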
Other lexicons: Finally, we create four lexicons that can help to model the difference in language use between fake and factual news articles. In particular, we explored and merged/cleansed a number of on-line resources in order to put together the following lexicons: (i) common typos in Bulgarian written text, (ii) Bulgarian slang words, (iii) commonly used foreign words, and (iv) English words with Bulgarian equivalents. We separate the latter two, because of the frequent usage of English words in common language. We make these lexicons freely available for future research.
Features
Fake news are written with the intent to deceive, and their authors often use a different style of writing compared to authors who create genuine content. This could happen either deliberately, e.g., if the author wants to adapt the text to a specific target group or wants to provoke some particular emotional reaction in the reader, or unintentionally, e.g., because the authors of fake news have a different writing style and personality compared to journalists in mainstream media. Regardless of the actual reason, we use features from author profiling and style detection BIBREF28.
Use of specific words that have strong correlation with one of the classes (48 features). We used the above-described PMI-based fact-checking lexicons to extract features based on the presence of lexicon words in the target article. We end up with the following features: 16 for uni-grams + 16 for bi-grams + 16 for named entities, where we have a feature for the sum and also for the average of the word scores for each of the target classes (click-bait, non-click-bait, fake, non-fake), and we had these features separately for the title and for the body of the article.
Readability index (4 features): We calculate standard readability metrics including the type-token ratio, average word length, Flesch–Kincaid readability test BIBREF29 and Gunning-Fog index BIBREF30 . The last two metrics give scores to the text corresponding to the school grade the reader of the target article should have in order to be able to read and understand it easily. These metrics use statistics about the number of syllables, the number of words, and their length.
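A self-contained sketch of these four readability features with the standard Flesch–Kincaid and Gunning-Fog coefficients; the vowel-counting syllable estimate is a rough stand-in, and the listed Cyrillic vowels are our assumption for Bulgarian text.

import re

VOWELS = set("аеиоуъюя" + "aeiouy")   # rough syllable proxy

def count_syllables(word):
    return max(1, sum(ch in VOWELS for ch in word.lower()))

def readability_features(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"\w+", text)
    n_words = max(1, len(words))
    type_token_ratio = len(set(w.lower() for w in words)) / n_words
    avg_word_length = sum(len(w) for w in words) / n_words
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(count_syllables(w) >= 3 for w in words)
    flesch_kincaid = 0.39 * n_words / sentences + 11.8 * syllables / n_words - 15.59
    gunning_fog = 0.4 * (n_words / sentences + 100.0 * complex_words / n_words)
    return [type_token_ratio, avg_word_length, flesch_kincaid, gunning_fog]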
Orthographic features (12 features): The orthographic features used in our system include: the number of words in the title and in the content; the number of characters in the title and in the content; the number of specific symbols in the title and in the content, counting the following as symbols $.!;#?:-+%(), ; the number of capital letters in the title and in the content; the fraction of capital letters to all letters in the title and in the content; the number of URLs in the content; the overlap between the words from the title and the words of the content, relying on the fact that click-baits tend to have content that does not quite match their title. These features can be very effective for modelling the author's style.
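These counts can be computed directly from the raw strings; the sketch below reproduces the twelve listed statistics, including the title-content word overlap, with the symbol set copied from the description above (whitespace tokenization is a simplification).

import re

SYMBOLS = set("$.!;#?:-+%(),")

def orthographic_features(title, content):
    def stats(text):
        words = text.split()
        capitals = sum(ch.isupper() for ch in text)
        letters = sum(ch.isalpha() for ch in text)
        return [len(words), len(text),
                sum(ch in SYMBOLS for ch in text),
                capitals,
                capitals / letters if letters else 0.0]
    title_words = set(w.lower() for w in title.split())
    content_words = set(w.lower() for w in content.split())
    overlap = len(title_words & content_words) / len(title_words) if title_words else 0.0
    urls = len(re.findall(r"https?://\S+", content))
    # 5 counts for the title + 5 for the content + URL count + title-content overlap = 12 features.
    return stats(title) + stats(content) + [urls, overlap]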
Use of irregular vocabulary (4 features): During the initial analysis of our training dataset, we noticed the presence of a high number of foreign words. As it is not common in Bulgarian news articles to use words in another language, we thought that their presence could be a valuable feature to use. One of the reasons for their occurrence might be that they were translated from a foreign resource, or that they were borrowed. We further found that many articles that were labelled as fake news contained a high number of slang words, and we added this as a feature as well. Finally, we have a feature that counts the typos in the text.
General lexical features are often used in natural language processing as they are somewhat task-independent and reasonably effective in terms of classification accuracy. In our experiments, we used TF.IDF-based features over the title and over the content of the article we wanted to classify. We had these features twice – once for the title and once for the content of the article, as we wanted two different representations of the same article. Thus, we used a total of 1,100 TF.IDF-weighted features (800 content + 300 title), limiting the vocabulary to the top 800 and 300 words, respectively (which occurred in more than five articles). We should note that TF.IDF features should be used with caution, as they may not remain relevant over time or in different contexts without retraining.
The last type of hand-crafted features that we used are the grammatical features. First, we evaluate how often stop words are used in the content of the article. Extensive usage of stop words may indicate irregularities in the text, which would be missed by the above features. Additionally, we extract ten coarse-grained part-of-speech tags from the content of the article and we use part-of-speech occurrence ratios as features. This makes a total of twenty features, as we have separate features for the title and for the contents.
All the above features are hand-crafted, evaluating a specific text metric or checking whether specific words highly correlate with one of the classes. However, we lack features that target the semantic representation of the text itself. Thus, we further use two types of word representations.
Word embeddings (601 features). As we said above, we trained domain-specific word embeddings. In order to incorporate them as features, we calculate the average vector for the title and separately for the content of the news article. We end up with two 300-dimensional embedding representations of the semantics of the articles, which we use as 300+300=600 features. We also compute the cosine similarity between the average vector of the title and the average vector of the content, because we believe that this is a highly indicative measure for at least click-bait articles, whose content differs from what their title says.
Task-specific embeddings. As a more advanced representation, we feed the text into an attention-based deep neural network, which we train to produce a task-specific embedding of the news articles. The network is designed to recognize words and sentences that contribute to the assignment of the click-bait class. The architecture is described in detail in Section UID15.
Some Features that we Ignored
As we mentioned above, our method is purely text-based. Thus, we ignored the publishing date of the article; in future work, it could be explored as a useful piece of information about the credibility of the article, as there is interesting research in this direction BIBREF24. We also disregarded the article source (the URL), because websites that specialize in producing and distributing fake content are often banned and later reappear under another name. We recognize that the credibility of a specific website could be a very informative feature, but, for the sake of creating a robust method for fake news detection, our system relies only on the text when predicting whether the target article is likely to be fake. We described our features in detail above.
Model
Our framework for fake news detection comprises two components, which are used one after the other. First, we have an attention-based deep neural network model, which focuses on the segments of the text that are most indicative of the target class, and as a side effect learns task-specific representations of the news articles. We extract these representations from the last hidden layer in the network, and we feed them to the SVM classifier together with the hand-crafted features.
The attention network BIBREF31, BIBREF32 is a powerful mechanism, inspired by the human ability to spot important sections in images or text. We adopt the approach used in BIBREF33 and employ an attention neural network to build attention over the text of a news piece with respect to its title. Since it is in the nature of click-baits to have titles that differ from the text of the news, the attention layers of the neural network should spot when the two texts talk about the same thing and when they do not correspond. We implemented the attention mechanism using Keras BIBREF34 with the TensorFlow back-end BIBREF35.
The architecture of the network with attention layers is shown in Figure FIGREF16. Our neural model is based on Gated Recurrent Units (GRUs). GRUs are a gating mechanism for RNNs that provides the ability to learn long-term dependencies; they were first introduced in BIBREF36. Given the document embedding, the GRUs build representations using input and forget gates, which help store valuable information over time. They build embeddings of the title and of the text of the news, where at each step the unit has information only about the output from the previous step. This can be considered a drawback, since we would benefit considerably if each step could base its decision not only on the previous step's output, but on all of the words processed so far. To address this, the attention layer, for each step in the text sequence, uses the output of the steps in the title sequence. Thus, the layer learns weights designating the strength of the relatedness between each word in the title and each word in the content.
For the neural network, we take the first 50 symbols of the title and of the content of the news, a length we chose after experimentation. We train the neural network for 20 epochs, and the final classification is derived with a sigmoid activation. The optimizer used for training is the Adam optimizer. We feed the neural network with the word embeddings we built earlier with word2vec.
As we will see below, the neural network is inferior in terms of performance to a feature-rich SVM (even though it performs well above the baseline). This is because it only has access to the word embeddings and does not use the manually crafted features. Yet, its hidden layer represents a 128-dimensional task-specific embedding of the input article, and it turns out that using it as a list of 128 features in the SVM classifier yields a further sizable improvement, as we will see below. In this way, we combine a deep neural network that has an attention mechanism with a kernel-based SVM.
We feed the above-described hand-crafted features, together with the task-specific embeddings learned by the deep neural network (a total of 1,892 attributes combined), into a Support Vector Machine (SVM) classifier BIBREF37. SVMs have proven to perform well in different classification settings, including in the case of small and noisy datasets.
Experiments and Evaluation
We trained on the 2,815 training examples, and we tested on the 761 test ones. The test dataset was provided separately from the training one, and thus we did not have to hold out part of the original dataset for testing. The validation of the models was performed on a randomly chosen subset of the training examples - one fifth of the original set. We scaled each feature individually by its maximum absolute value, so that each feature ends up with values in the [0;1] interval. We used an RBF kernel for the SVM, and we tuned the values of INLINEFORM0 and INLINEFORM1 using cross-validation. We trained the neural network using RMSProp BIBREF38 with a learning rate of 0.001 and mini-batches of size 32, chosen by performing experiments with cross-validation. We evaluated the model after each epoch and kept the one that performed best on the development dataset.
Table TABREF17 shows the performance of the features in groups, as described in Section SECREF7. We can see that, among the hand-crafted features, the lexical features yield the best results, i.e., words are the most indicative features. The good results of the stylometric features indicate that the intricacies of language use are highly discriminative. The next group is the one with the grammatical features, which shows good performance in terms of precision. The last group contains the embedding features, which, although weak individually, contribute to the overall performance of the system, as shown in the next paragraph.
When evaluating the final model, we set as a baseline the prediction of the majority class, i.e., the fake news class. This baseline has an F1 of 41.59% and an accuracy of 71.22%. The performance of the resulting models can be seen in Table TABREF19. Another solid baseline, apart from just taking the majority class, is the TF.IDF bag-of-words approach, which sets a high bar for the overall model score. We then observe how much the attention-mechanism embeddings improve the score (AttNN). Finally, we add the hand-crafted features (Feats), which further improve the performance. From the results, we can conclude that both the attention-based task-specific embeddings and the manual features are important for the task of finding fake news.
Conclusion and Future Work
We have presented the first attempt to solve the fake news problem for Bulgarian. Our method is purely text-based, and it ignores the publication date and the source of the article. It combines task-specific embeddings, produced by a two-level attention-based deep neural network model, with manually crafted features (stylometric, lexical, grammatical, and semantic) in a kernel-based SVM classifier. We further produced and shared a number of relevant language resources for Bulgarian, which we created while solving the task.
The evaluation results are encouraging and suggest the potential applicability of our approach in a real-world scenario. They further show the value of combining attention-based task-specific embeddings with manually crafted features. An important advantage of attention-based neural networks is that the produced representations can be easily visualized and potentially interpreted, as shown in BIBREF31. We consider implementing such visualization an important direction for future work on the task.
Acknowledgements
We would like to thank Lachezar Bozhkov, who was part of our team in the Hack the Fake News hackathon, for his insight. This work is supported by the NSF of Bulgaria under Grant No. DN-02/11/2016 - ITDGate. | stylometric, lexical, grammatical, and semantic |
f12a282571f842b818d4bee86442751422b52337 | f12a282571f842b818d4bee86442751422b52337_0 | Q: what lexical features did they experiment with?
We would like to thank Lachezar Bozhkov, who was part of our team in the Hack the Fake News hackathon, for his insight. This work is supported by the NSF of Bulgaria under Grant No. DN-02/11/2016 - ITDGate. | TF.IDF-based features |
5b1cd21936aeec85233c978ba8d7282931522a3a | 5b1cd21936aeec85233c978ba8d7282931522a3a_0 | Q: what is the size of the dataset?
Text: Introduction
Fake news are written and published with the intent to mislead in order to gain financially or politically, often targeting specific user groups. Another type of harmful content on the Internet is the so-called click-bait, which is distinguished by its sensational, exaggerated, or deliberately false headline that grabs attention and deceives the user into clicking an article with questionable content.
While the motives behind these two types of fake news are different, they constitute a growing problem, as they make up a sizable fraction of the online news that users encounter on a daily basis. With the recent boom of the Internet, mobile devices, and social networks, the spread of fake news increases exponentially. The use of on-line channels for spreading harmful content makes the task of keeping the Internet clean significantly harder, as it is very easy to publish an article and there is no easy way to verify its veracity. Currently, domains that consistently spread misinformation are being banned from various platforms, but this is a rather inefficient way to deal with fake news, as websites that specialize in spreading misinformation reappear under different domain names. That is why our method is based purely on text analysis, without taking into account the domain name or the website's reliability as a source of information. Our work is focused on exploring various stylistic and lexical features in order to detect misleading content, and on experiments with neural network architectures in order to evaluate how deep learning can be used for detecting fake news. Moreover, we created various language-specific resources that could be used in future work on fake news and click-bait detection for Bulgarian, including task-specific word embeddings and various lexicons and dictionaries extracted from the training data.
Related Work
Trustworthiness and veracity analytics of on-line statements is an emerging research direction BIBREF0 . This includes predicting credibility of information shared in social media BIBREF1 , stance classification BIBREF2 and contradiction detection in rumours BIBREF3 . For example, Castillo:2011:ICT:1963405.1963500 studied the problem of finding false information about a newsworthy event. They compiled their own dataset, focusing on tweets using a variety of features including user reputation, author writing style, and various time-based features. Canini:2011 analysed the interaction of content and social network structure, and Morris:2012:TBU:2145204.2145274 studied how Twitter users judge truthfulness. They found that this is hard to do based on content alone, and instead users are influenced by heuristics such as user name.
Rumour detection in social media represents yet another angle of information credibility. zubiaga2015analysing studied how people handle rumours in social media. They found that users with higher reputation are more trusted, and thus can spread rumours among other users without raising suspicions about the credibility of the news or of its source. lukasik-cohn-bontcheva:2015:ACL-IJCNLP and Ma:2015:DRU used temporal patterns to detect rumours and to predict their frequency, PlosONE:2016 focused on conversational threads, and RANLP2017:factchecking:external used deep learning to verify claims using the Web as a corpus.
Veracity of information has been also studied in the context of online personal blogs BIBREF4 , community question answering forums BIBREF5 , and political debates BIBREF6 .
Astroturfing and misinformation detection represent another relevant research direction. Their importance is motivated by the strong interest from political science, and research methods are driven by the presence of massive streams of micro-blogging data, e.g., on Twitter BIBREF7 . While astroturfing has been primarily studied in microblogs such as Twitter, here we focus on on-line news and click-baits instead.
Identification of malicious accounts in social networks is another related research direction. This includes detecting spam accounts BIBREF8 , BIBREF9 , fake accounts BIBREF10 , BIBREF11 , compromised accounts and phishing accounts BIBREF12 . Fake profile detection has also been studied in the context of cyber-bullying BIBREF13 . A related problem is that of Web spam detection, which was addressed as a text classification problem BIBREF14 , e.g., using spam keyword spotting BIBREF15 , lexical affinity of arbitrary words to spam content BIBREF16 , frequency of punctuation and word co-occurrence BIBREF17 .
Fake news detection is most closely related to the present work. While social media have been seen for years as the main vehicle for spreading information of questionable veracity, recently there has been a proliferation of fake news, often spread on social media, but also published in specialized websites. This has attracted research attention recently. For example, there has been work on studying credibility, trust, and expertise in news communities BIBREF18 . The credibility of the information published in on-line news portals has been questioned by a number of researchers BIBREF19 , BIBREF20 , BIBREF21 . As timing is crucial when it comes to publishing breaking news, it is simply not possible to double-check the facts and the sources, as is usually standard in respectable printed newspapers and magazines. This is one of the biggest concerns about on-line news media that journalists have BIBREF22 . Finally, conroy2015automatic review various methods for detecting fake news, e.g., using linguistic analysis, discourse, linked data, and social network features.
All the above work was for English. The only work on fact checking for Bulgarian is that of BIBREF23 , but they focused on distinguishing serious news from humorous ones. In contrast, here we are interested in finding news that are not designed to sound funny, but to make the reader believe they are real. Unlike them, we use a deep learning approach.
Fake News & Click-bait Dataset
We use a corpus of Bulgarian news over a fixed period of time, whose factuality had been questioned. The news come from 377 different sources from various domains, including politics, interesting facts and tips&tricks. The dataset was prepared for the Hack the Fake News hackathon. It was provided by the Bulgarian Association of PR Agencies and is available in Gitlab. The corpus was automatically collected, and then annotated by students of journalism. Each entry in the dataset consists of the following elements: URL of the original article, date of publication, article heading, article content, a label indicating whether the article is fake or not, and another label indicating whether it is a click-bait.
The training dataset contains 2,815 examples, where 1,940 (i.e., 69%) are fake news and 1,968 (i.e., 70%) are click-baits; we further have 761 testing examples. However, there is 98% correlation between fake news and click-baits, i.e., a model trained on fake news would do well on click-baits and vice versa. Thus, below we focus on fake news detection only.
One important aspect about the training dataset is that it contains many repetitions. This should not be surprising as it attempts to represent a natural distribution of factual vs. fake news on-line over a period of time. As publishers of fake news often have a group of websites that feature the same deceiving content, we should expect some repetition.
In particular, the training dataset contains 434 unique articles that have duplicates. These articles have three reposts each on average, with the most reposted article appearing 45 times. If we take into account the labels of the reposted articles, we can see that a reposted article is more likely to be fake news: the number of fake news articles that have a duplicate in the training dataset is 1,018, whereas the number of articles with genuine content that have a duplicate in the training set is 322. We detect the duplicates based on their titles, since the titles are distinctive enough and the content is sometimes slightly modified when reposted.
This supports the hypothesis that fake news websites are likely to repost their content. This is also in line with previous research BIBREF24 , which has found it beneficial to find a pattern of how a rumour is reposted over time.
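As a rough illustration of this duplicate analysis, reposts can be counted with a simple group-by over titles; the file and column names below are assumptions about how the data is stored, not the actual field names in the released dataset.

```python
import pandas as pd

df = pd.read_csv("train.csv")  # assumed columns: "title", "content", "fake_news"

# Titles that appear more than once are treated as reposted articles.
title_counts = df["title"].value_counts()
reposted = df[df["title"].isin(title_counts[title_counts > 1].index)]

print("articles with a duplicate:", len(reposted))
print("of which labelled fake:", int(reposted["fake_news"].sum()))
```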
Method
We propose a general framework for finding fake news focusing on the text only. We first create some resources, e.g., dictionaries of words strongly correlated with fake news, which are needed for feature extraction. Then, we design features that model a number of interesting aspects about an article, e.g., style, intent, etc. Moreover, we use a deep neural network to learn task-specific representations of the articles, which includes an attention mechanism that can focus on the most discriminative sentences and words.
Language Resources
As our work is the first attempt at predicting click-baits in Bulgarian, it is organized around building new language-specific resources and analyzing the task.
Word embeddings: We train 300-dimensional domain-specific word embeddings using word2vec BIBREF25 on 100,000 Bulgarian news articles from the same sources as the main dataset. The labelled dataset we use in our system is a subset of these articles. We end up with 207,270 unique words that occur in five or more documents. We use these embeddings for text representation and as input to our attention-based neural network.
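A sketch of how such embeddings can be trained with gensim is shown below; the dimensionality and the minimum-frequency cut-off follow the description above, while the window size and the other hyperparameters are assumptions.

```python
from gensim.models import Word2Vec

# `corpus` is assumed to be an iterable of tokenized articles, e.g. lists of lower-cased tokens.
model = Word2Vec(
    sentences=corpus,
    vector_size=300,  # 300-dimensional vectors, as described above (gensim 4 API)
    min_count=5,      # roughly mirrors the five-or-more-documents cut-off
    window=5,         # assumption: not specified in the text
    workers=4,
)
model.save("bg_news_word2vec.model")
print(len(model.wv))  # vocabulary size
```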
Latent Dirichlet allocation (LDA): We use LDA BIBREF26 in order to build domain-specific topic models, which could be useful for inducing classes of words that signal fake/factual news. The LDA model is trained on the same 100,000 Bulgarian news articles as for training the word embeddings. In our experiments, these LDA classes proved helpful by themselves, but they did not have much to offer on top of the word embeddings. Thus, we ended up not using them in our final system, but we chose to still release them as other researchers might find them useful in the future.
Fact-checking lexicon: Using lexicons of sentiment words has been shown to be very successful for the task of sentiment analysis BIBREF27 , and we applied the same idea to extract a fact-checking lexicon. In particular, we use point-wise mutual information (PMI) to find terms (words, word bi-grams, and named entities) that are highly correlated with the fake/factual news class. We calculated PMI scores for uni-grams, bi-grams, and extracted named entities. Table TABREF9 shows some of the most significant words for the fake news class. We can see in the table some words that grab people's attention but are not very informative by themselves, such as mysterious or phenomenon. These words are largely context-independent and are likely to remain stable in their usage across different domains and even over an extended period of time. Thus, they should be useful beyond this task and this dataset.
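The lexicon construction can be sketched as follows; this is a simplified document-level PMI computation over uni-grams, not our exact implementation.

```python
import math
from collections import Counter

def pmi_lexicon(docs, labels, target_label="fake", min_count=5):
    """docs: list of token lists; labels: parallel list of class labels."""
    term_counts = Counter()
    term_class_counts = Counter()
    n_docs = len(docs)
    n_target = sum(1 for y in labels if y == target_label)

    for tokens, y in zip(docs, labels):
        for term in set(tokens):          # document-level co-occurrence counts
            term_counts[term] += 1
            if y == target_label:
                term_class_counts[term] += 1

    p_class = n_target / n_docs
    scores = {}
    for term, df in term_counts.items():
        if df < min_count or term_class_counts[term] == 0:
            continue
        p_term = df / n_docs
        p_joint = term_class_counts[term] / n_docs
        scores[term] = math.log(p_joint / (p_term * p_class))   # PMI(term, class)
    return sorted(scores.items(), key=lambda x: x[1], reverse=True)
```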
Other lexicons: Finally, we create four lexicons that can help to model the difference in language use between fake and factual news articles. In particular, we explored and merged/cleansed a number of on-line resources in order to put together the following lexicons: (i) common typos in Bulgarian written text, (ii) Bulgarian slang words, (iii) commonly used foreign words, and (iv) English words with Bulgarian equivalents. We separate the latter two, because of the frequent usage of English words in common language. We make these lexicons freely available for future research.
Features
Fake news are written with the intent to deceive, and their authors often use a different style of writing compared to authors who create genuine content. This could be deliberate, e.g., if the author wants to adapt the text to a specific target group or to provoke a particular emotional reaction in the reader, or unintentional, e.g., because the authors of fake news have a different writing style and personality compared to journalists in mainstream media. Regardless of the actual reason, we use features from author profiling and style detection BIBREF28 .
Use of specific words that have strong correlation with one of the classes (48 features). We used the above-described PMI-based fact-checking lexicons to extract features based on the presence of lexicon words in the target article. We end up with the following features: 16 for uni-grams + 16 for bi-grams + 16 for named entities, where we have a feature for the sum and also for the average of the word scores for each of the target classes (click-bait, non-click-bait, fake, non-fake), and we had these features separately for the title and for the body of the article.
Readability index (4 features): We calculate standard readability metrics including the type-token ratio, average word length, Flesch–Kincaid readability test BIBREF29 and Gunning-Fog index BIBREF30 . The last two metrics give scores to the text corresponding to the school grade the reader of the target article should have in order to be able to read and understand it easily. These metrics use statistics about the number of syllables, the number of words, and their length.
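A possible implementation of these four metrics is sketched below; note that the Flesch–Kincaid and Gunning-Fog constants were designed for English, and the vowel-based syllable count for Bulgarian is a crude assumption.

```python
BG_VOWELS = set("аъоуеиюя")

def count_syllables(word):
    # crude approximation: one syllable per vowel letter
    return max(1, sum(ch in BG_VOWELS for ch in word.lower()))

def readability_features(sentences):
    """sentences: list of token lists for one article."""
    words = [w for sent in sentences for w in sent if w.isalpha()]
    n_words, n_sents = len(words), max(1, len(sentences))
    n_syll = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)

    type_token_ratio = len(set(words)) / max(1, n_words)
    avg_word_len = sum(len(w) for w in words) / max(1, n_words)
    flesch_kincaid = 0.39 * n_words / n_sents + 11.8 * n_syll / max(1, n_words) - 15.59
    gunning_fog = 0.4 * (n_words / n_sents + 100 * complex_words / max(1, n_words))
    return [type_token_ratio, avg_word_len, flesch_kincaid, gunning_fog]
```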
Orthographic features (12 features): The orthographic features used in our system include: the number of words in the title and in the content; the number of characters in the title and in the content; the number of specific symbols in the title and in the content, counting the following as symbols $.!;#?:-+%(), ; the number of capital letters in the title and in the content; the fraction of capital letters to all letters in the title and in the content; the number of URLs in the content; the overlap between the words from the title and the words of the content, relying on the fact that click-baits tend to have content that does not quite match their title. These features can be very effective for modelling the author's style.
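The following sketch illustrates one way to compute these twelve counts; the exact normalization of the title/content overlap is an assumption.

```python
import re

SPECIAL = set("$.!;#?:-+%(),")

def orthographic_features(title, content):
    feats = []
    for text in (title, content):
        letters = [c for c in text if c.isalpha()]
        capitals = sum(c.isupper() for c in letters)
        feats += [
            len(text.split()),                 # number of words
            len(text),                         # number of characters
            sum(c in SPECIAL for c in text),   # number of special symbols
            capitals,                          # number of capital letters
            capitals / max(1, len(letters)),   # fraction of capital letters
        ]
    feats.append(len(re.findall(r"https?://\S+", content)))   # URLs in the content
    title_words = set(title.lower().split())
    content_words = set(content.lower().split())
    feats.append(len(title_words & content_words) / max(1, len(title_words)))  # title/content overlap
    return feats   # 5 per field + 2 = 12 features
```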
Use of irregular vocabulary (4 features): During the initial analysis of our training dataset, we noticed the presence of a high number of foreign words. As it is not common in Bulgarian news articles to use words in another language, we thought that their presence could be a valuable feature to use. One of the reasons for their occurrence might be that they were translated from a foreign resource, or that they were borrowed. We further found that many articles that were labelled as fake news contained a high number of slang words, and we added this as a feature as well. Finally, we have a feature that counts the typos in the text.
General lexical features are often used in natural language processing as they are somewhat task-independent and reasonably effective in terms of classification accuracy. In our experiments, we used TF.IDF-based features over the title and over the content of the article we wanted to classify. We had these features twice – once for the title and once for the content of the article, as we wanted two different representations of the same article. Thus, we used a total of 1,100 TF.IDF-weighted features (800 content + 300 title), limiting the vocabulary to the top 800 and 300 words, respectively (which occurred in more than five articles). We should note that TF.IDF features should be used with caution as they may not remain relevant over time or in different contexts without retraining.
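A minimal sketch of this setup with scikit-learn is given below, assuming train_titles and train_contents are lists of raw strings; tokenization details are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.sparse import hstack

# Separate vocabularies for the article content and the title.
content_vec = TfidfVectorizer(max_features=800, min_df=6)  # words occurring in more than five articles
title_vec = TfidfVectorizer(max_features=300, min_df=6)

X_tfidf = hstack([
    content_vec.fit_transform(train_contents),
    title_vec.fit_transform(train_titles),
])  # 800 + 300 = 1,100 TF.IDF-weighted features
```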
The last type of hand-crafted features that we used are the grammatical features. First, we evaluate how often stop words are used in the content of the article. Extensive usage of stop words may indicate irregularities in the text, which would be missed by the above features. Additionally, we extract ten coarse-grained part-of-speech tags from the content of the article and we use part-of-speech occurrence ratios as features. This makes a total of twenty features, as we have separate features for the title and for the contents.
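One possible implementation is sketched below; the coarse tag set and the exact treatment of the stop-word ratio are assumptions.

```python
from collections import Counter

COARSE_TAGS = ["NOUN", "VERB", "ADJ", "ADV", "PRON", "ADP", "CONJ", "NUM", "PART", "INTJ"]

def grammatical_features(tokens, pos_tags, stop_words):
    """tokens / pos_tags: parallel lists for the title or for the content."""
    n = max(1, len(tokens))
    stop_ratio = sum(t.lower() in stop_words for t in tokens) / n
    tag_counts = Counter(pos_tags)
    return [stop_ratio] + [tag_counts[tag] / n for tag in COARSE_TAGS]
```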
All the above features are hand-crafted, evaluating a specific text metric or checking whether specific words highly correlate with one of the classes. However, we lack features that target the semantic representation of the text itself. Thus, we further use two types of word representations.
Word embeddings (601 features). As we said above, we trained domain-specific word embeddings. In order to incorporate them as features, we calculate the average vector for the title and separately for the content of the news article. We end up with two 300-dimensional embedding representations of the semantics of the articles, which we use as 300+300=600 features. We also compute the cosine similarity between the average vector of the title and the average vector of the content, because we believe that this is a highly indicative measure for at least click-bait articles, whose content differs from what their title says.
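These 601 features can be computed as sketched below, assuming a trained word2vec model w2v that maps tokens to 300-dimensional vectors.

```python
import numpy as np

def avg_embedding(tokens, w2v, dim=300):
    vecs = [w2v[t] for t in tokens if t in w2v]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def embedding_features(title_tokens, content_tokens, w2v):
    t_vec = avg_embedding(title_tokens, w2v)
    c_vec = avg_embedding(content_tokens, w2v)
    denom = np.linalg.norm(t_vec) * np.linalg.norm(c_vec)
    cosine = float(t_vec @ c_vec / denom) if denom else 0.0
    return np.concatenate([t_vec, c_vec, [cosine]])   # 300 + 300 + 1 = 601 features
```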
Task-specific embeddings. As a more advanced representation, we feed the text into an attention-based deep neural network, which we train to produce a task-specific embedding of the news articles. The network is designed to recognize words and sentences that contribute to the click-bait class attribution. The architecture is described in detail in Section UID15 .
Some Features that we Ignored
As we mentioned above, our method is purely text-based. Thus, we ignored the publishing date of the article. In future work, it could be explored as a useful piece of information about the credibility of the article, as there is interesting research in this direction BIBREF24 . We also disregarded the article source (the URL) because websites that specialize in producing and distributing fake content are often banned and then later reappear under another name. We recognize that the credibility of a specific website could be a very informative feature, but, for the sake of creating a robust method for fake news detection, our system relies only on the text when predicting whether the target article is likely to be fake. We describe our features in more detail below.
Model
Our framework for fake news detection is comprised of two components, which are used one after the other. First, we have an attention-based deep neural network model, which focuses on the segments of the text that are most indicative of the target class identification, and as a side effect learns task-specific representations of the news articles. We extract these representations from the last hidden layer in the network, and we feed it to the SVM classifier together with the hand-crafted features.
The attention network BIBREF31 , BIBREF32 is a powerful mechanism, inspired by the human ability to spot important sections in images or text. We adopt the approach used in BIBREF33 and employ an attention neural network to build attention over the text of a news piece with respect to its title. Since it is in the nature of click-baits to have titles that differ from the text of the news, the attention layers of the neural network should spot when the two texts talk about the same thing and when they do not correspond. We implemented the attention mechanism using Keras BIBREF34 with the TensorFlow back-end BIBREF35 .
The architecture of the network with attention layers is shown in Figure FIGREF16 . Our neural model is based on Gated Recurrent Units (GRUs). GRUs are a gating mechanism in RNNs that provides the ability to learn long-term dependencies; they were first introduced in BIBREF36 . Given the document embedding, the GRUs build representations using input and forget gates, which help store the valuable information through time. They build embeddings of the title and the text of the news, where at each step the unit has information only about the output of the previous step. This can be considered a drawback, since we would benefit considerably if each step could base its decision not only on the previous step's output, but on all of the words processed so far. To address this, the attention layer, for each step in the text sequence, uses the output of the steps in the title sequence. Thus, the layer learns weights designating the strength of the relatedness between each word in the title and each word in the content.
For the neural network, we take the first 50 symbols of the title and of the content of the news, a length chosen after experimentation. We train the neural network for 20 epochs, and the final classification is derived with a sigmoid activation. The optimizer used for training is the Adam optimizer. We feed the neural network with the word embeddings we built earlier with word2vec.
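A simplified tf.keras sketch of this kind of architecture is given below; the layer sizes, the pooling, the use of the built-in attention layer, the frozen embeddings, and the optimizer choice are assumptions rather than our exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_model(vocab_size, emb_matrix, seq_len=50, emb_dim=300):
    title_in = layers.Input(shape=(seq_len,), name="title")
    body_in = layers.Input(shape=(seq_len,), name="content")

    embed = layers.Embedding(vocab_size, emb_dim,
                             weights=[emb_matrix], trainable=False)
    title_seq = layers.GRU(64, return_sequences=True)(embed(title_in))  # GRU size is an assumption
    body_seq = layers.GRU(64, return_sequences=True)(embed(body_in))

    # For each step of the content sequence, attend over the title sequence.
    attended = layers.Attention()([body_seq, title_seq])
    pooled = layers.Concatenate()([
        layers.GlobalAveragePooling1D()(attended),
        layers.GlobalAveragePooling1D()(body_seq),
    ])
    task_embedding = layers.Dense(128, activation="relu", name="task_embedding")(pooled)
    output = layers.Dense(1, activation="sigmoid")(task_embedding)

    model = Model([title_in, body_in], output)
    model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# model.fit([title_ids, body_ids], labels, epochs=20, batch_size=32, validation_data=...)
```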
As we will see below, the neural network is inferior in terms of performance to a feature-rich SVM (even though it performs well above the baseline). This is because it only has access to word embeddings and does not use the manually crafted features. Yet, its hidden layer represents a 128-dimensional task-specific embedding of the input article, and it turns out that using it as a list of 128 features in the SVM classifier yields a further sizable improvement, as we will see below. In this way, we combine an attention-based deep neural network with a kernel-based SVM.
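Continuing the sketch above, the 128-dimensional hidden layer can be exposed as a feature extractor and concatenated with the hand-crafted features before training the SVM; all variable names here are illustrative.

```python
import numpy as np
from tensorflow.keras import Model

# Reuse the trained network as a feature extractor: take its 128-dimensional hidden layer.
extractor = Model(model.inputs, model.get_layer("task_embedding").output)
task_emb_train = extractor.predict([title_ids_train, body_ids_train])

# Concatenate with the hand-crafted features and train the SVM pipeline on the combined matrix.
X_combined = np.hstack([handcrafted_train, task_emb_train])
search.fit(X_combined, y_train)  # `search` is the GridSearchCV pipeline sketched earlier
```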
We feed the above-described hand-crafted features, together with the task-specific embeddings learned by the deep neural network (a total of 1,892 attributes combined), into a Support Vector Machine (SVM) classifier BIBREF37 . SVMs have proven to perform well in different classification settings, including on small and noisy datasets.
Experiments and Evaluation
We trained on the 2,815 training examples and tested on the 761 test examples. The test set was provided separately from the training set, so we did not need to partition the original dataset to obtain one. Model validation was performed on a randomly chosen subset comprising one fifth of the training data. We scaled each feature individually by its maximum absolute value, so that each feature takes values in the [0, 1] interval. We used an RBF kernel for the SVM and tuned its hyperparameters using cross-validation. We trained the neural network using RMSProp BIBREF38 with a learning rate of 0.001 and mini-batches of size 32, chosen by performing experiments with cross-validation. We evaluated the model after each epoch and kept the one that performed best on the development set.
Table TABREF17 shows the performance of the feature groups described in Section SECREF7 . We can see that, among the hand-crafted features, the lexical features yield the best results, i.e., words are the most indicative features. The good results of the stylometric features indicate that the intricacies of language use are highly discriminative. The next group comprises the grammatical features, which perform well in terms of precision. The last group contains the embedding features, which, although weak individually, contribute to the overall performance of the system, as shown in the next paragraph.
To evaluate the final model, we use as a baseline the prediction of the majority class, i.e., the fake news class. This baseline achieves an F1 of 41.59% and an accuracy of 71.22%. The performance of the trained models is shown in Table TABREF19 . A second, stronger baseline is the TF.IDF bag-of-words approach, which sets a high bar for the overall model score. We then observe how much the attention-based embeddings improve the score (AttNN). Finally, we add the hand-crafted features (Feats), which further improve the performance. From these results, we conclude that both the attention-based task-specific embeddings and the manually crafted features are important for detecting fake news.
Conclusion and Future Work
We have presented the first attempt to solve the fake news problem for Bulgarian. Our method is purely text-based, and ignores the publication date and the source of the article. It combines task-specific embeddings, produced by a two-level attention-based deep neural network model, with manually crafted features (stylometric, lexical, grammatical, and semantic), into a kernel-based SVM classifier. We further produced and shared a number of relevant language resources for Bulgarian, which we created for solving the task.
The evaluation results are encouraging and suggest that our approach could be applicable in a real-world scenario. They further show the potential of combining attention-based task-specific embeddings with manually crafted features. An important advantage of attention-based neural networks is that the produced representations can be easily visualized and potentially interpreted, as shown in BIBREF31 . We consider implementing such visualization an important direction for future work on this task.
Acknowledgements
We would like to thank Lachezar Bozhkov, who was part of our team in the Hack the Fake News hackathon, for his insight. This work is supported by the NSF of Bulgaria under Grant No. DN-02/11/2016 - ITDGate. | The number of fake news that have a duplicate in the training dataset are 1018 whereas, the number of articles with genuine content that have a duplicate article in the training set is 322. |
964705a100e53a9181d1a5ac8150696de12ecaf0 | 964705a100e53a9181d1a5ac8150696de12ecaf0_0 | Q: what datasets were used?
Text: Introduction
Fake news are written and published with the intent to mislead in order to gain financially or politically, often targeting specific user groups. Another type of harmful content on the Internet is the so-called click-bait, which is distinguished by its sensational, exaggerated, or deliberately false headline that grabs attention and deceives the user into clicking an article with questionable content.
While the motives behind these two types of fake news are different, they constitute a growing problem, as they make up a sizable fraction of the online news that users encounter on a daily basis. With the recent boom of the Internet, mobile devices, and social networks, the spread of fake news increases exponentially. The use of on-line channels for spreading harmful content makes the task of keeping the Internet clean significantly harder, as it is very easy to publish an article and there is no easy way to verify its veracity. Currently, domains that consistently spread misinformation are being banned from various platforms, but this is a rather inefficient way to deal with fake news, as websites that specialize in spreading misinformation reappear under different domain names. That is why our method is based purely on text analysis, without taking into account the domain name or the website's reliability as a source of information. Our work is focused on exploring various stylistic and lexical features in order to detect misleading content, and on experiments with neural network architectures in order to evaluate how deep learning can be used for detecting fake news. Moreover, we created various language-specific resources that could be used in future work on fake news and click-bait detection for Bulgarian, including task-specific word embeddings and various lexicons and dictionaries extracted from the training data.
Related Work
Trustworthiness and veracity analytics of on-line statements is an emerging research direction BIBREF0 . This includes predicting credibility of information shared in social media BIBREF1 , stance classification BIBREF2 and contradiction detection in rumours BIBREF3 . For example, Castillo:2011:ICT:1963405.1963500 studied the problem of finding false information about a newsworthy event. They compiled their own dataset, focusing on tweets using a variety of features including user reputation, author writing style, and various time-based features. Canini:2011 analysed the interaction of content and social network structure, and Morris:2012:TBU:2145204.2145274 studied how Twitter users judge truthfulness. They found that this is hard to do based on content alone, and instead users are influenced by heuristics such as user name.
Rumour detection in social media represents yet another angle of information credibility. zubiaga2015analysing studied how people handle rumours in social media. They found that users with higher reputation are more trusted, and thus can spread rumours among other users without raising suspicions about the credibility of the news or of its source. lukasik-cohn-bontcheva:2015:ACL-IJCNLP and Ma:2015:DRU used temporal patterns to detect rumours and to predict their frequency, PlosONE:2016 focused on conversational threads, and RANLP2017:factchecking:external used deep learning to verify claims using the Web as a corpus.
Veracity of information has been also studied in the context of online personal blogs BIBREF4 , community question answering forums BIBREF5 , and political debates BIBREF6 .
Astroturfing and misinformation detection represent another relevant research direction. Their importance is motivated by the strong interest from political science, and research methods are driven by the presence of massive streams of micro-blogging data, e.g., on Twitter BIBREF7 . While astroturfing has been primarily studied in microblogs such as Twitter, here we focus on on-line news and click-baits instead.
Identification of malicious accounts in social networks is another related research direction. This includes detecting spam accounts BIBREF8 , BIBREF9 , fake accounts BIBREF10 , BIBREF11 , compromised accounts and phishing accounts BIBREF12 . Fake profile detection has also been studied in the context of cyber-bullying BIBREF13 . A related problem is that of Web spam detection, which was addressed as a text classification problem BIBREF14 , e.g., using spam keyword spotting BIBREF15 , lexical affinity of arbitrary words to spam content BIBREF16 , frequency of punctuation and word co-occurrence BIBREF17 .
Fake news detection is most closely related to the present work. While social media have been seen for years as the main vehicle for spreading information of questionable veracity, recently there has been a proliferation of fake news, often spread on social media, but also published in specialized websites. This has attracted research attention recently. For example, there has been work on studying credibility, trust, and expertise in news communities BIBREF18 . The credibility of the information published in on-line news portals has been questioned by a number of researchers BIBREF19 , BIBREF20 , BIBREF21 . As timing is crucial when it comes to publishing breaking news, it is simply not possible to double-check the facts and the sources, as is usually standard in respectable printed newspapers and magazines. This is one of the biggest concerns about on-line news media that journalists have BIBREF22 . Finally, conroy2015automatic review various methods for detecting fake news, e.g., using linguistic analysis, discourse, linked data, and social network features.
All the above work was for English. The only work on fact checking for Bulgarian is that of BIBREF23 , but they focused on distinguishing serious news from humorous ones. In contrast, here we are interested in finding news that are not designed to sound funny, but to make the reader believe they are real. Unlike them, we use a deep learning approach.
Fake News & Click-bait Dataset
We use a corpus of Bulgarian news over a fixed period of time, whose factuality had been questioned. The news come from 377 different sources from various domains, including politics, interesting facts and tips&tricks. The dataset was prepared for the Hack the Fake News hackathon. It was provided by the Bulgarian Association of PR Agencies and is available in Gitlab. The corpus was automatically collected, and then annotated by students of journalism. Each entry in the dataset consists of the following elements: URL of the original article, date of publication, article heading, article content, a label indicating whether the article is fake or not, and another label indicating whether it is a click-bait.
The training dataset contains 2,815 examples, where 1,940 (i.e., 69%) are fake news and 1,968 (i.e., 70%) are click-baits; we further have 761 testing examples. However, there is 98% correlation between fake news and click-baits, i.e., a model trained on fake news would do well on click-baits and vice versa. Thus, below we focus on fake news detection only.
One important aspect about the training dataset is that it contains many repetitions. This should not be surprising as it attempts to represent a natural distribution of factual vs. fake news on-line over a period of time. As publishers of fake news often have a group of websites that feature the same deceiving content, we should expect some repetition.
In particular, the training dataset contains 434 unique articles that have duplicates. These articles have three reposts each on average, with the most reposted article appearing 45 times. If we take into account the labels of the reposted articles, we can see that a reposted article is more likely to be fake news: the number of fake news articles that have a duplicate in the training dataset is 1,018, whereas the number of articles with genuine content that have a duplicate in the training set is 322. We detect the duplicates based on their titles, since the titles are distinctive enough and the content is sometimes slightly modified when reposted.
This supports the hypothesis that fake news websites are likely to repost their content. This is also in line with previous research BIBREF24 , which has found it beneficial to find a pattern of how a rumour is reposted over time.
Method
We propose a general framework for finding fake news focusing on the text only. We first create some resources, e.g., dictionaries of words strongly correlated with fake news, which are needed for feature extraction. Then, we design features that model a number of interesting aspects about an article, e.g., style, intent, etc. Moreover, we use a deep neural network to learn task-specific representations of the articles, which includes an attention mechanism that can focus on the most discriminative sentences and words.
Language Resources
As our work is the first attempt at predicting click-baits in Bulgarian, it is organized around building new language-specific resources and analyzing the task.
Word embeddings: We train 300-dimensional domain-specific word embeddings using word2vec BIBREF25 on 100,000 Bulgarian news articles from the same sources as the main dataset. The labelled dataset we use in our system is a subset of these articles. We end up with 207,270 unique words that occur in five or more documents. We use these embeddings for text representation and as input to our attention-based neural network.
Latent Dirichlet allocation (LDA): We use LDA BIBREF26 in order to build domain-specific topic models, which could be useful for inducing classes of words that signal fake/factual news. The LDA model is trained on the same 100,000 Bulgarian news articles as for training the word embeddings. In our experiments, these LDA classes proved helpful by themselves, but they did not have much to offer on top of the word embeddings. Thus, we ended up not using them in our final system, but we chose to still release them as other researchers might find them useful in the future.
Fact-checking lexicon: Using lexicons of sentiment words has been shown to be very successful for the task of sentiment analysis BIBREF27 , and we applied the same idea to extract a fact-checking lexicon. In particular, we use point-wise mutual information (PMI) to find terms (words, word bi-grams, and named entities) that are highly correlated with the fake/factual news class. We calculated PMI scores for uni-grams, bi-grams, and extracted named entities. Table TABREF9 shows some of the most significant words for the fake news class. We can see in the table some words that grab people's attention but are not very informative by themselves, such as mysterious or phenomenon. These words are largely context-independent and are likely to remain stable in their usage across different domains and even over an extended period of time. Thus, they should be useful beyond this task and this dataset.
Other lexicons: Finally, we create four lexicons that can help to model the difference in language use between fake and factual news articles. In particular, we explored and merged/cleansed a number of on-line resources in order to put together the following lexicons: (i) common typos in Bulgarian written text, (ii) Bulgarian slang words, (iii) commonly used foreign words, and (iv) English words with Bulgarian equivalents. We separate the latter two, because of the frequent usage of English words in common language. We make these lexicons freely available for future research.
Features
Fake news are written with the intent to deceive, and their authors often use a different style of writing compared to authors who create genuine content. This could be deliberate, e.g., if the author wants to adapt the text to a specific target group or to provoke a particular emotional reaction in the reader, or unintentional, e.g., because the authors of fake news have a different writing style and personality compared to journalists in mainstream media. Regardless of the actual reason, we use features from author profiling and style detection BIBREF28 .
Use of specific words that have strong correlation with one of the classes (48 features). We used the above-described PMI-based fact-checking lexicons to extract features based on the presence of lexicon words in the target article. We end up with the following features: 16 for uni-grams + 16 for bi-grams + 16 for named entities, where we have a feature for the sum and also for the average of the word scores for each of the target classes (click-bait, non-click-bait, fake, non-fake), and we had these features separately for the title and for the body of the article.
Readability index (4 features): We calculate standard readability metrics including the type-token ratio, average word length, Flesch–Kincaid readability test BIBREF29 and Gunning-Fog index BIBREF30 . The last two metrics give scores to the text corresponding to the school grade the reader of the target article should have in order to be able to read and understand it easily. These metrics use statistics about the number of syllables, the number of words, and their length.
Orthographic features (12 features): The orthographic features used in our system include: the number of words in the title and in the content; the number of characters in the title and in the content; the number of specific symbols in the title and in the content, counting the following as symbols $.!;#?:-+%(), ; the number of capital letters in the title and in the content; the fraction of capital letters to all letters in the title and in the content; the number of URLs in the content; the overlap between the words from the title and the words of the content, relying on the fact that click-baits tend to have content that does not quite match their title. These features can be very effective for modelling the author's style.
Use of irregular vocabulary (4 features): During the initial analysis of our training dataset, we noticed the presence of a high number of foreign words. As it is not common in Bulgarian news articles to use words in another language, we thought that their presence could be a valuable feature to use. One of the reasons for their occurrence might be that they were translated from a foreign resource, or that they were borrowed. We further found that many articles that were labelled as fake news contained a high number of slang words, and we added this as a feature as well. Finally, we have a feature that counts the typos in the text.
General lexical features are often used in natural language processing as they are somewhat task-independent and reasonably effective in terms of classification accuracy. In our experiments, we used TF.IDF-based features over the title and over the content of the article we wanted to classify. We had these features twice – once for the title and once for the content of the article, as we wanted two different representations of the same article. Thus, we used a total of 1,100 TF.IDF-weighted features (800 content + 300 title), limiting the vocabulary to the top 800 and 300 words, respectively (which occurred in more than five articles). We should note that TF.IDF features should be used with caution as they may not remain relevant over time or in different contexts without retraining.
The last type of hand-crafted features that we used are the grammatical features. First, we evaluate how often stop words are used in the content of the article. Extensive usage of stop words may indicate irregularities in the text, which would be missed by the above features. Additionally, we extract ten coarse-grained part-of-speech tags from the content of the article and we use part-of-speech occurrence ratios as features. This makes a total of twenty features, as we have separate features for the title and for the contents.
All the above features are hand-crafted, evaluating a specific text metric or checking whether specific words highly correlate with one of the classes. However, we lack features that target the semantic representation of the text itself. Thus, we further use two types of word representations.
Word embeddings (601 features). As we said above, we trained domain-specific word embeddings. In order to incorporate them as features, we calculate the average vector for the title and separately for the content of the news article. We end up with two 300-dimensional embedding representations of the semantics of the articles, which we use as 300+300=600 features. We also compute the cosine similarity between the average vector of the title and the average vector of the content, because we believe that this is a highly indicative measure for at least click-bait articles, whose content differs from what their title says.
Task-specific embeddings. As a more advanced representation, we feed the text into an attention-based deep neural network, which we train to produce a task-specific embedding of the news articles. The network is designed to recognize words and sentences that contribute to the click-bait class attribution. The architecture is described in detail in Section UID15 .
Some Features that we Ignored
As we mentioned above, our method is purely text-based. Thus, we ignored the publishing date of the article. In future work, it could be explored as a useful piece of information about the credibility of the article, as there is interesting research in this direction BIBREF24 . We also disregarded the article source (the URL) because websites that specialize in producing and distributing fake content are often banned and then later reappear under another name. We recognize that the credibility of a specific website could be a very informative feature, but, for the sake of creating a robust method for fake news detection, our system relies only on the text when predicting whether the target article is likely to be fake. We describe our features in more detail below.
Model
Our framework for fake news detection is comprised of two components, which are used one after the other. First, we have an attention-based deep neural network model, which focuses on the segments of the text that are most indicative of the target class identification, and as a side effect learns task-specific representations of the news articles. We extract these representations from the last hidden layer in the network, and we feed it to the SVM classifier together with the hand-crafted features.
The attention network BIBREF31 , BIBREF32 is a powerful mechanism, inspired by the human ability to spot important sections in images or text. We adopt the approach used in BIBREF33 and employ an attention neural network to build attention over the text of a news piece with respect to its title. Since it is in the nature of click-baits to have titles that differ from the text of the news, the attention layers of the neural network should spot when the two texts talk about the same thing and when they do not correspond. We implemented the attention mechanism using Keras BIBREF34 with the TensorFlow back-end BIBREF35 .
The architecture of the network with attention layers is shown in Figure FIGREF16 . Our neural model is based on Gated Recurrent Units (GRUs). GRUs are a gating mechanism in RNNs that provides the ability to learn long-term dependencies; they were first introduced in BIBREF36 . Given the document embedding, the GRUs build representations using input and forget gates, which help store the valuable information through time. They build embeddings of the title and the text of the news, where at each step the unit has information only about the output of the previous step. This can be considered a drawback, since we would benefit considerably if each step could base its decision not only on the previous step's output, but on all of the words processed so far. To address this, the attention layer, for each step in the text sequence, uses the output of the steps in the title sequence. Thus, the layer learns weights designating the strength of the relatedness between each word in the title and each word in the content.
For the neural network, we take the first 50 symbols of the title and of the content of the news, a length chosen after experimentation. We train the neural network for 20 epochs, and the final classification is derived with a sigmoid activation. The optimizer used for training is the Adam optimizer. We feed the neural network with the word embeddings we built earlier with word2vec.
As we will see below, the neural network is inferior in terms of performance to a feature-rich SVM (even though it performs well above the baseline). This is because it only has access to word embeddings and does not use the manually crafted features. Yet, its hidden layer represents a 128-dimensional task-specific embedding of the input article, and it turns out that using it as a list of 128 features in the SVM classifier yields a further sizable improvement, as we will see below. In this way, we combine an attention-based deep neural network with a kernel-based SVM.
We feed the above-described hand-crafted features, together with the task-specific embeddings learned by the deep neural network (a total of 1,892 attributes combined), into a Support Vector Machine (SVM) classifier BIBREF37 . SVMs have proven to perform well in different classification settings, including on small and noisy datasets.
Experiments and Evaluation
We trained on the 2,815 training examples and tested on the 761 test examples. The test set was provided separately from the training set, so we did not need to partition the original dataset to obtain one. Model validation was performed on a randomly chosen subset comprising one fifth of the training data. We scaled each feature individually by its maximum absolute value, so that each feature takes values in the [0, 1] interval. We used an RBF kernel for the SVM and tuned its hyperparameters using cross-validation. We trained the neural network using RMSProp BIBREF38 with a learning rate of 0.001 and mini-batches of size 32, chosen by performing experiments with cross-validation. We evaluated the model after each epoch and kept the one that performed best on the development set.
Table TABREF17 shows the performance of the feature groups described in Section SECREF7 . We can see that, among the hand-crafted features, the lexical features yield the best results, i.e., words are the most indicative features. The good results of the stylometric features indicate that the intricacies of language use are highly discriminative. The next group comprises the grammatical features, which perform well in terms of precision. The last group contains the embedding features, which, although weak individually, contribute to the overall performance of the system, as shown in the next paragraph.
To evaluate the final model, we use as a baseline the prediction of the majority class, i.e., the fake news class. This baseline achieves an F1 of 41.59% and an accuracy of 71.22%. The performance of the trained models is shown in Table TABREF19 . A second, stronger baseline is the TF.IDF bag-of-words approach, which sets a high bar for the overall model score. We then observe how much the attention-based embeddings improve the score (AttNN). Finally, we add the hand-crafted features (Feats), which further improve the performance. From these results, we conclude that both the attention-based task-specific embeddings and the manually crafted features are important for detecting fake news.
Conclusion and Future Work
We have presented the first attempt to solve the fake news problem for Bulgarian. Our method is purely text-based, and ignores the publication date and the source of the article. It combines task-specific embeddings, produced by a two-level attention-based deep neural network model, with manually crafted features (stylometric, lexical, grammatical, and semantic), into a kernel-based SVM classifier. We further produced and shared a number of relevant language resources for Bulgarian, which we created for solving the task.
The evaluation results are encouraging and suggest that our approach could be applicable in a real-world scenario. They further show the potential of combining attention-based task-specific embeddings with manually crafted features. An important advantage of attention-based neural networks is that the produced representations can be easily visualized and potentially interpreted, as shown in BIBREF31 . We consider implementing such visualization an important direction for future work on this task.
Acknowledgements
We would like to thank Lachezar Bozhkov, who was part of our team in the Hack the Fake News hackathon, for his insight. This work is supported by the NSF of Bulgaria under Grant No. DN-02/11/2016 - ITDGate. | training dataset contains 2,815 examples, 761 testing examples |
f08a66665f01c91cb9dfe082e9d1015ecf3df71d | f08a66665f01c91cb9dfe082e9d1015ecf3df71d_0 | Q: what are the three reasons everybody hates them?
Text: Introduction
Fake news are written and published with the intent to mislead in order to gain financially or politically, often targeting specific user groups. Another type of harmful content on the Internet is the so-called click-bait, which is distinguished by its sensational, exaggerated, or deliberately false headline that grabs attention and deceives the user into clicking an article with questionable content.
While the motives behind these two types of fake news are different, they constitute a growing problem, as they make up a sizable fraction of the online news that users encounter on a daily basis. With the recent boom of the Internet, mobile devices, and social networks, the spread of fake news increases exponentially. The use of on-line channels for spreading harmful content makes the task of keeping the Internet clean significantly harder, as it is very easy to publish an article and there is no easy way to verify its veracity. Currently, domains that consistently spread misinformation are being banned from various platforms, but this is a rather inefficient way to deal with fake news, as websites that specialize in spreading misinformation reappear under different domain names. That is why our method is based purely on text analysis, without taking into account the domain name or the website's reliability as a source of information. Our work is focused on exploring various stylistic and lexical features in order to detect misleading content, and on experiments with neural network architectures in order to evaluate how deep learning can be used for detecting fake news. Moreover, we created various language-specific resources that could be used in future work on fake news and click-bait detection for Bulgarian, including task-specific word embeddings and various lexicons and dictionaries extracted from the training data.
Related Work
Trustworthiness and veracity analytics of on-line statements is an emerging research direction BIBREF0 . This includes predicting credibility of information shared in social media BIBREF1 , stance classification BIBREF2 and contradiction detection in rumours BIBREF3 . For example, Castillo:2011:ICT:1963405.1963500 studied the problem of finding false information about a newsworthy event. They compiled their own dataset, focusing on tweets using a variety of features including user reputation, author writing style, and various time-based features. Canini:2011 analysed the interaction of content and social network structure, and Morris:2012:TBU:2145204.2145274 studied how Twitter users judge truthfulness. They found that this is hard to do based on content alone, and instead users are influenced by heuristics such as user name.
Rumour detection in social media represents yet another angle of information credibility. zubiaga2015analysing studied how people handle rumours in social media. They found that users with higher reputation are more trusted, and thus can spread rumours among other users without raising suspicions about the credibility of the news or of its source. lukasik-cohn-bontcheva:2015:ACL-IJCNLP and Ma:2015:DRU used temporal patterns to detect rumours and to predict their frequency, PlosONE:2016 focused on conversational threads, and RANLP2017:factchecking:external used deep learning to verify claims using the Web as a corpus.
Veracity of information has been also studied in the context of online personal blogs BIBREF4 , community question answering forums BIBREF5 , and political debates BIBREF6 .
Astroturfing and misinformation detection represent another relevant research direction. Their importance is motivated by the strong interest from political science, and research methods are driven by the presence of massive streams of micro-blogging data, e.g., on Twitter BIBREF7 . While astroturfing has been primarily studied in microblogs such as Twitter, here we focus on on-line news and click-baits instead.
Identification of malicious accounts in social networks is another related research direction. This includes detecting spam accounts BIBREF8 , BIBREF9 , fake accounts BIBREF10 , BIBREF11 , compromised accounts and phishing accounts BIBREF12 . Fake profile detection has also been studied in the context of cyber-bullying BIBREF13 . A related problem is that of Web spam detection, which was addressed as a text classification problem BIBREF14 , e.g., using spam keyword spotting BIBREF15 , lexical affinity of arbitrary words to spam content BIBREF16 , frequency of punctuation and word co-occurrence BIBREF17 .
Fake news detection is most closely related to the present work. While social media have been seen for years as the main vehicle for spreading information of questionable veracity, recently there has been a proliferation of fake news, often spread on social media, but also published in specialized websites. This has attracted research attention recently. For example, there has been work on studying credibility, trust, and expertise in news communities BIBREF18 . The credibility of the information published in on-line news portals has been questioned by a number of researchers BIBREF19 , BIBREF20 , BIBREF21 . As timing is crucial when it comes to publishing breaking news, it is simply not possible to double-check the facts and the sources, as is usually standard in respectable printed newspapers and magazines. This is one of the biggest concerns about on-line news media that journalists have BIBREF22 . Finally, conroy2015automatic review various methods for detecting fake news, e.g., using linguistic analysis, discourse, linked data, and social network features.
All the above work was for English. The only work on fact checking for Bulgarian is that of BIBREF23 , but they focused on distinguishing serious news from humorous ones. In contrast, here we are interested in finding news that are not designed to sound funny, but to make the reader believe they are real. Unlike them, we use a deep learning approach.
Fake News & Click-bait Dataset
We use a corpus of Bulgarian news over a fixed period of time, whose factuality had been questioned. The news come from 377 different sources from various domains, including politics, interesting facts and tips&tricks. The dataset was prepared for the Hack the Fake News hackathon. It was provided by the Bulgarian Association of PR Agencies and is available in Gitlab. The corpus was automatically collected, and then annotated by students of journalism. Each entry in the dataset consists of the following elements: URL of the original article, date of publication, article heading, article content, a label indicating whether the article is fake or not, and another label indicating whether it is a click-bait.
The training dataset contains 2,815 examples, where 1,940 (i.e., 69%) are fake news and 1,968 (i.e., 70%) are click-baits; we further have 761 testing examples. However, there is 98% correlation between fake news and click-baits, i.e., a model trained on fake news would do well on click-baits and vice versa. Thus, below we focus on fake news detection only.
One important aspect about the training dataset is that it contains many repetitions. This should not be surprising as it attempts to represent a natural distribution of factual vs. fake news on-line over a period of time. As publishers of fake news often have a group of websites that feature the same deceiving content, we should expect some repetition.
In particular, the training dataset contains 434 unique articles with duplicates. These articles have three reposts each on average, with the most reposted article appearing 45 times. If we take into account the labels of the reposted articles, we can see that if an article is reposted, it is more likely to be fake news. The number of fake news articles that have a duplicate in the training dataset is 1,018, whereas the number of articles with genuine content that have a duplicate in the training set is 322. We detect the duplicates based on their titles, as the titles are distinctive enough and the content is sometimes slightly modified when reposted.
This supports the hypothesis that fake news websites are likely to repost their content. This is also in line with previous research BIBREF24 , which has found it beneficial to find a pattern of how a rumour is reposted over time.
Method
We propose a general framework for finding fake news focusing on the text only. We first create some resources, e.g., dictionaries of words strongly correlated with fake news, which are needed for feature extraction. Then, we design features that model a number of interesting aspects about an article, e.g., style, intent, etc. Moreover, we use a deep neural network to learn task-specific representations of the articles, which includes an attention mechanism that can focus on the most discriminative sentences and words.
Language Resources
As our work is the first attempt at predicting click-baits in Bulgarian, it is organized around building new language-specific resources and analyzing the task.
Word embeddings: We train 300-dimensional domain-specific word embeddings using word2vec BIBREF25 on 100,000 Bulgarian news articles from the same sources as the main dataset. The labelled dataset we use in our system is a subset of these articles. Finally, we end up with 207,270 unique words that occur in five or more documents. We use these embeddings for text representation, and as an input to our attention-based neural network.
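As an illustration, here is a minimal sketch of how such domain-specific embeddings could be trained. It assumes the gensim (4.x) implementation of word2vec and a pre-tokenized corpus, neither of which is specified above, and it approximates the document-frequency cut-off with word2vec's token-frequency threshold.

```python
# Hypothetical sketch: train 300-dimensional word2vec embeddings on tokenized
# Bulgarian news articles (gensim is an assumed tool; only word2vec is stated).
from gensim.models import Word2Vec

def train_news_embeddings(tokenized_articles, dim=300):
    # tokenized_articles: iterable of token lists, one list per article
    model = Word2Vec(
        sentences=tokenized_articles,
        vector_size=dim,   # 300 dimensions, as in the paper
        min_count=5,       # rough stand-in for "occur in five or more documents"
        workers=4,
    )
    return model.wv        # mapping: word -> 300-dimensional vector
```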
Latent Dirichlet allocation (LDA): We use LDA BIBREF26 in order to build domain-specific topic models, which could be useful for inducing classes of words that signal fake/factual news. The LDA model is trained on the same 100,000 Bulgarian news articles as for training the word embeddings. In our experiments, these LDA classes proved helpful by themselves, but they did not have much to offer on top of the word embeddings. Thus, we ended up not using them in our final system, but we chose to still release them as other researchers might find them useful in the future.
Fact-checking lexicon: Using lexicons of sentiment words has been shown to be very successful for the task of sentiment analysis BIBREF27 , and we applied the same idea to extract a fact-checking lexicon. In particular, we use point-wise mutual information (PMI) to find terms (words, word bi-grams, and named entities) that are highly correlated with the fake/factual news class. We calculated the PMI scores for uni-grams, bi-grams, and extracted named entities. Table TABREF9 shows some of the most significant words for the fake news class. We can see in the table some words that grab people's attention, but are not very informative by themselves, such as mysterious or phenomenon. These words are largely context-independent and are likely to remain stable in their usage across different domains and even over an extended period of time. Thus, they should be useful beyond this task and this dataset.
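A minimal sketch of how such a PMI-based lexicon could be extracted is given below; the function and variable names are ours, and the document-level counting and thresholds are assumptions rather than the exact procedure used here.

```python
import math
from collections import Counter

def pmi_lexicon(documents, labels, target="fake", min_df=5, top_k=50):
    """Rank terms by PMI with the target class.
    documents: list of token lists; labels: parallel list of class labels."""
    n_docs = len(documents)
    df = Counter()          # document frequency of each term
    df_target = Counter()   # document frequency of each term within the target class
    n_target = sum(1 for y in labels if y == target)
    for tokens, y in zip(documents, labels):
        for t in set(tokens):
            df[t] += 1
            if y == target:
                df_target[t] += 1
    p_class = n_target / n_docs
    scores = {}
    for t, n_t in df.items():
        if n_t < min_df or df_target[t] == 0:
            continue
        p_term = n_t / n_docs
        p_joint = df_target[t] / n_docs
        scores[t] = math.log(p_joint / (p_term * p_class))   # PMI(term, class)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
```

The same function can be run over bi-grams or over extracted named entities by passing the corresponding token lists.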
Other lexicons: Finally, we create four lexicons that can help to model the difference in language use between fake and factual news articles. In particular, we explored and merged/cleansed a number of on-line resources in order to put together the following lexicons: (i) common typos in Bulgarian written text, (ii) Bulgarian slang words, (iii) commonly used foreign words, and (iv) English words with Bulgarian equivalents. We separate the latter two, because of the frequent usage of English words in common language. We make these lexicons freely available for future research.
Features
Fake news are written with the intent to deceive, and their authors often use a different style of writing compared to authors that create genuine content. This can happen either deliberately, e.g., if the author wants to adapt the text to a specific target group or wants to provoke some particular emotional reaction in the reader, or unintentionally, e.g., because the authors of fake news have a different writing style and personality compared to journalists in mainstream media. Regardless of the actual reason, we use features from author profiling and style detection BIBREF28 .
Use of specific words that have strong correlation with one of the classes (48 features). We used the above-described PMI-based fact-checking lexicons to extract features based on the presence of lexicon words in the target article. We end up with the following features: 16 for uni-grams + 16 for bi-grams + 16 for named entities, where we have a feature for the sum and also for the average of the word scores for each of the target classes (click-bait, non-click-bait, fake, non-fake), and we had these features separately for the title and for the body of the article.
Readability index (4 features): We calculate standard readability metrics including the type-token ratio, average word length, Flesch–Kincaid readability test BIBREF29 and Gunning-Fog index BIBREF30 . The last two metrics give scores to the text corresponding to the school grade the reader of the target article should have in order to be able to read and understand it easily. These metrics use statistics about the number of syllables, the number of words, and their length.
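The sketch below shows how these four metrics could be computed; the Flesch–Kincaid and Gunning-Fog coefficients are the standard ones defined for English, and the vowel-count syllable heuristic for Bulgarian is our assumption.

```python
BG_VOWELS = set("аъоуеиюя")   # assumed syllable nuclei for Bulgarian

def count_syllables(word):
    # crude heuristic: one syllable per vowel letter
    return max(1, sum(ch in BG_VOWELS for ch in word.lower()))

def readability_features(sentences):
    # sentences: list of token lists for one article
    words = [w for s in sentences for w in s]
    n_words, n_sents = len(words), len(sentences)
    n_syll = sum(count_syllables(w) for w in words)
    n_complex = sum(count_syllables(w) >= 3 for w in words)
    return {
        "type_token_ratio": len(set(words)) / n_words,
        "avg_word_length": sum(len(w) for w in words) / n_words,
        # Flesch-Kincaid grade level (English coefficients)
        "flesch_kincaid": 0.39 * n_words / n_sents + 11.8 * n_syll / n_words - 15.59,
        # Gunning-Fog index: complex words have three or more syllables
        "gunning_fog": 0.4 * (n_words / n_sents + 100.0 * n_complex / n_words),
    }
```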
Orthographic features (12 features): The orthographic features used in our system include: the number of words in the title and in the content; the number of characters in the title and in the content; the number of specific symbols in the title and in the content, counting the following as symbols $.!;#?:-+%(), ; the number of capital letters in the title and in the content; the fraction of capital letters to all letters in the title and in the content; the number of URLs in the content; the overlap between the words from the title and the words of the content, relying on the fact that click-baits tend to have content that does not quite match their title. These features can be very effective for modelling the author's style.
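A compact sketch of these counts is shown below; it mirrors the description above, with the URL regular expression and the helper names being our choices.

```python
import re

SYMBOLS = set("$.!;#?:-+%(),")

def orthographic_features(title, content):
    feats = {}
    for part, text in (("title", title), ("content", content)):
        letters = [c for c in text if c.isalpha()]
        feats[f"{part}_num_words"] = len(text.split())
        feats[f"{part}_num_chars"] = len(text)
        feats[f"{part}_num_symbols"] = sum(c in SYMBOLS for c in text)
        feats[f"{part}_num_capitals"] = sum(c.isupper() for c in text)
        feats[f"{part}_capital_ratio"] = (
            sum(c.isupper() for c in letters) / len(letters) if letters else 0.0
        )
    feats["content_num_urls"] = len(re.findall(r"https?://\S+", content))
    # click-baits tend to have content that does not quite match the title
    t, c = set(title.lower().split()), set(content.lower().split())
    feats["title_content_overlap"] = len(t & c) / len(t) if t else 0.0
    return feats
```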
Use of irregular vocabulary (4 features): During the initial analysis of our training dataset, we noticed the presence of a high number of foreign words. As it is not common in Bulgarian news articles to use words in another language, we thought that their presence could be a valuable feature to use. One of the reasons for their occurrence might be that they were translated from a foreign resource, or that they were borrowed. We further found that many articles that were labelled as fake news contained a high number of slang words, and we added this as a feature as well. Finally, we have a feature that counts the typos in the text.
General lexical features are often used in natural language processing as they are somewhat task-independent and reasonably effective in terms of classification accuracy. In our experiments, we used TF.IDF-based features over the title and over the content of the article we wanted to classify. We had these features twice – once for the title and once for the content of the article, as we wanted to have two different representations of the same article. Thus, we used a total of 1,100 TF.IDF-weighted features (800 content + 300 title), limiting the vocabulary to the top 800 and 300 words, respectively (which occurred in more than five articles). We should note that TF.IDF features should be used with caution as they may not remain relevant over time or in different contexts without retraining.
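A minimal sketch with scikit-learn (an assumed tool, not named above) is shown below; min_df=6 approximates the "more than five articles" restriction.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.sparse import hstack

def fit_tfidf_features(train_titles, train_contents):
    title_vec = TfidfVectorizer(max_features=300, min_df=6)
    content_vec = TfidfVectorizer(max_features=800, min_df=6)
    X = hstack([
        title_vec.fit_transform(train_titles),
        content_vec.fit_transform(train_contents),
    ])                                  # 300 + 800 = 1,100 TF.IDF-weighted features
    return X, title_vec, content_vec    # the vectorizers are reused to transform the test set
```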
The last type of hand-crafted features that we used are the grammatical features. First, we evaluate how often stop words are used in the content of the article. Extensive usage of stop words may indicate irregularities in the text, which would be missed by the above features. Additionally, we extract ten coarse-grained part-of-speech tags from the content of the article and we use part-of-speech occurrence ratios as features. This makes a total of twenty features, as we have separate features for the title and for the contents.
All the above features are hand-crafted, evaluating a specific text metric or checking whether specific words highly correlate with one of the classes. However, we lack features that target the semantic representation of the text itself. Thus, we further use two types of word representations.
Word embeddings (601 features). As we said above, we trained domain-specific word embeddings. In order to incorporate them as features, we calculate the average vector for the title and separately for the content of the news article. We end up with two 300-dimensional embedding representations of the semantics of the articles, which we use as 300+300=600 features. We also compute the cosine similarity between the average vector of the title and the average vector of the content, because we believe that this is a highly indicative measure for at least click-bait articles, whose content differs from what their title says.
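These 601 features could be computed as in the following sketch, where wv is assumed to be a word-to-vector mapping such as the gensim KeyedVectors trained earlier.

```python
import numpy as np

def embedding_features(title_tokens, content_tokens, wv, dim=300):
    def avg_vector(tokens):
        vecs = [wv[t] for t in tokens if t in wv]
        return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

    v_title = avg_vector(title_tokens)
    v_content = avg_vector(content_tokens)
    denom = np.linalg.norm(v_title) * np.linalg.norm(v_content)
    cos = float(v_title @ v_content / denom) if denom > 0 else 0.0
    return np.concatenate([v_title, v_content, [cos]])   # 300 + 300 + 1 = 601 features
```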
Task-specific embeddings. As a more advanced representation, we feed the text into an attention-based deep neural network, which we train to produce a task-specific embedding of the news articles. The network is designed to recognize words and sentences that contribute to the click-bait class attribution. The architecture is described in detail in Section UID15.
Some Features that we Ignored
As we mentioned above, our method is purely text-based. Thus, we ignored the publishing date of the article. In future work, it could be explored as a useful piece of information about the credibility of the article, as there is interesting research in this direction BIBREF24 . We also disregarded the article source (the URL) because websites that specialize in producing and distributing fake content are often banned and then later reappear under another name. We recognize that the credibility of a specific website could be a very informative feature, but, for the sake of creating a robust method for fake news detection, our system relies only on the text when predicting whether the target article is likely to be fake. We described the features that we do use in detail above.
Model
Our framework for fake news detection is comprised of two components, which are used one after the other. First, we have an attention-based deep neural network model, which focuses on the segments of the text that are most indicative of the target class identification, and as a side effect learns task-specific representations of the news articles. We extract these representations from the last hidden layer in the network, and we feed it to the SVM classifier together with the hand-crafted features.
The attention network BIBREF31 , BIBREF32 is a powerful mechanism, inspired by the human ability to spot important sections in images or text. We adopt the approach used in BIBREF33 and employ an attention neural network to build attention over the text of a piece of news with respect to its title. Since it is in the nature of click-baits to have titles that differ from the text of the news, the attention layers of the neural network should spot when the two texts talk about the same thing and when they do not correspond. We implemented the attention mechanism using Keras BIBREF34 with the TensorFlow back-end BIBREF35 .
The architecture of the network with attention layers is shown in Figure FIGREF16 . Our neural model is based on Gated Recurrent Units (GRUs). GRUs are a gating mechanism in RNNs that provides the ability to learn long-term dependencies; they were first introduced in BIBREF36 . Given the document embedding, the GRUs build representations using input and forget gates, which help store valuable information through time. They build embeddings of the title and of the text of the news, where at each step the unit has information only about the output from the previous step. This can be considered a drawback, since each step would benefit considerably from basing its decision not only on the previous step's output, but on all of the words processed so far. To address this, the attention layer, for each step in the text sequence, uses the outputs of the steps in the title sequence. Thus, the layer learns weights designating the strength of the relatedness between each word in the title and each word in the content.
For the neural network, we take the first 50 symbols of the title and of the content of the news, a length chosen after experiments. We train the neural network for 20 epochs, and the final classification is derived with a sigmoid activation. The optimizer used for training is Adam. We feed the neural network with the word embeddings we built earlier with word2vec.
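The following Keras sketch illustrates one way such an attention network over title and content could look. It is not the exact architecture used here: the GRU sizes (other than the 128-dimensional document representation mentioned below), the use of dot-product attention, and the single-level design are our assumptions, and the function and variable names are ours.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_LEN = 50       # first 50 symbols of title and of content, as in the paper
EMB_DIM = 300      # pretrained word2vec dimensionality

def build_attention_model(embedding_matrix):
    title_in = layers.Input(shape=(MAX_LEN,), name="title")
    body_in = layers.Input(shape=(MAX_LEN,), name="content")

    emb = layers.Embedding(
        input_dim=embedding_matrix.shape[0],
        output_dim=EMB_DIM,
        embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
        trainable=False,
    )
    title_seq = layers.GRU(64, return_sequences=True)(emb(title_in))
    body_seq = layers.GRU(64, return_sequences=True)(emb(body_in))

    # for each step in the content sequence, attend over the title sequence
    attended = layers.Attention()([body_seq, title_seq])
    merged = layers.Concatenate()([body_seq, attended])

    doc = layers.GRU(128)(merged)      # 128-dimensional task-specific embedding
    out = layers.Dense(1, activation="sigmoid")(doc)

    model = Model([title_in, body_in], out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```

The 128-dimensional representation can be read off the layer preceding the sigmoid output (e.g., Model(model.inputs, model.layers[-2].output)) and passed on to the SVM described below.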
As we will see below, the neural network is inferior in terms of performance to a feature-rich SVM (even though it performs well above the baseline). This is because it only has access to word embeddings, and does not use the manually-crafted features. Yet, its hidden layer represents a 128-dimensional task-specific embedding of the input article, and it turns out that using it as a list of 128 features in the SVM classifier yields a further sizable improvement, as we will see below. In this way, we combine an attention-based deep neural network with a kernel-based SVM.
We feed the above-described hand-crafted features together with the task-specific embeddings learned by the deep neural network (a total of 1,892 attributes combined) into a Support Vector Machine (SVM) classifier BIBREF37 . SVMs have proven to perform well in different classification settings, including in the case of small and noisy datasets.
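A sketch of this combination step, assuming scikit-learn and pre-computed feature matrices, is given below; the hyperparameter values are placeholders that would be tuned as described in the next section.

```python
import numpy as np
from sklearn.preprocessing import MaxAbsScaler
from sklearn.svm import SVC

def train_combined_classifier(hand_crafted, task_embeddings, labels, C=1.0, gamma="scale"):
    # hand_crafted: (n_articles, n_manual_features) matrix of the features above
    # task_embeddings: (n_articles, 128) hidden-layer representations from the network
    X = np.hstack([hand_crafted, task_embeddings])
    X = MaxAbsScaler().fit_transform(X)          # scale each feature by its maximum absolute value
    clf = SVC(kernel="rbf", C=C, gamma=gamma)
    return clf.fit(X, labels)
```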
Experiments and Evaluation
We trained on the 2,815 training examples, and we tested on the 761 test examples. The test dataset was provided separately from the training one, so we did not have to partition the original dataset to obtain a test set. The validation of the models was performed on a randomly chosen subset of the training data – one fifth of the original set. We scaled each feature individually by its maximum absolute value, so that each feature has values in the [0;1] interval. We used an RBF kernel for the SVM, and we tuned the values of $C$ and $\gamma $ using cross-validation. We trained the neural network using RMSProp BIBREF38 with a learning rate of 0.001 and mini-batches of size 32, chosen by performing experiments with cross-validation. We evaluated the model after each epoch and kept the one that performed best on the development dataset.
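The SVM tuning could be done with a standard cross-validated grid search, as in the following short sketch (the grid values are illustrative, not the ones used here).

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}  # illustrative grid
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="f1")
# search.fit(X_train_scaled, y_train); the best setting is in search.best_params_
```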
Table TABREF17 shows the performance of the features in groups as described in Section SECREF7 . We can see that, among the hand-crafted features, the lexical features yield the best results, i.e., words are the most indicative features. The good results of the stylometric features indicate that the intricacies of language use are highly discriminative. The next group is the one with the grammatical features, which shows good performance in terms of Precision. The last group contains the embedding features, which, although weak on their own, contribute to the overall performance of the system, as shown in the next paragraph.
Evaluating the final model, we set as a baseline the prediction of the majority class, i.e., the fake news class. This baseline has an F1 of 41.59% and accuracy of 71.22%. The performance of the built models can be seen in Table TABREF19 . Another stable baseline, apart from just taking the majority class, is the TF.IDF bag-of-words approach, which sets a high bar for the general model score. We then observe how much the attention mechanism embeddings improve the score (AttNN). Finally, we add the hand-crafted features (Feats), which further improve the performance. From the results, we can conclude that both the attention-based task-specific embeddings and the manual features are important for the task of finding fake news.
Conclusion and Future Work
We have presented the first attempt to solve the fake news problem for Bulgarian. Our method is purely text-based, and ignores the publication date and the source of the article. It combines task-specific embeddings, produced by a two-level attention-based deep neural network model, with manually crafted features (stylometric, lexical, grammatical, and semantic), into a kernel-based SVM classifier. We further produced and shared a number of relevant language resources for Bulgarian, which we created for solving the task.
The evaluation results are encouraging and suggest the potential applicability of our approach in a real-world scenario. They further show the potential of combining attention-based task-specific embeddings with manually crafted features. An important advantage of the attention-based neural networks is that the produced representations can be easily visualized and potentially interpreted as shown in BIBREF31 . We consider the implementation of such visualization as an important future work on the task.
Acknowledgements
We would like to thank Lachezar Bozhkov, who was part of our team in the Hack the Fake News hackathon, for his insight. This work is supported by the NSF of Bulgaria under Grant No. DN-02/11/2016 - ITDGate. | Unanswerable |
c65b6470b7ed0a035548cc08e0bc541c2c4a95a7 | c65b6470b7ed0a035548cc08e0bc541c2c4a95a7_0 | Q: How are seed dictionaries obtained by fully unsupervised methods?
Text: Introduction and Motivation
The wide use and success of monolingual word embeddings in NLP tasks BIBREF0 , BIBREF1 has inspired further research focus on the induction of cross-lingual word embeddings (CLWEs). CLWE methods learn a shared cross-lingual word vector space where words with similar meanings obtain similar vectors regardless of their actual language. CLWEs benefit cross-lingual NLP, enabling multilingual modeling of meaning and supporting cross-lingual transfer for downstream tasks and resource-lean languages. CLWEs provide invaluable cross-lingual knowledge for, inter alia, bilingual lexicon induction BIBREF2 , BIBREF3 , information retrieval BIBREF4 , BIBREF5 , machine translation BIBREF6 , BIBREF7 , document classification BIBREF8 , cross-lingual plagiarism detection BIBREF9 , domain adaptation BIBREF10 , cross-lingual POS tagging BIBREF11 , BIBREF12 , and cross-lingual dependency parsing BIBREF13 , BIBREF14 .
The landscape of CLWE methods has recently been dominated by the so-called projection-based methods BIBREF15 , BIBREF16 , BIBREF17 . They align two monolingual embedding spaces by learning a projection/mapping based on a training dictionary of translation pairs. Besides their simple conceptual design and competitive performance, their popularity originates from the fact that they rely on rather weak cross-lingual supervision. Originally, the seed dictionaries typically spanned several thousand word pairs BIBREF15 , BIBREF18 , BIBREF19 , but more recent work has shown that CLWEs can be induced with even weaker supervision from small dictionaries spanning several hundred pairs BIBREF20 , identical strings BIBREF21 , or even only shared numerals BIBREF22 .
Taking the idea of reducing cross-lingual supervision to the extreme, the latest CLWE developments almost exclusively focus on fully unsupervised approaches BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 : they fully abandon any source of (even weak) supervision and extract the initial seed dictionary by exploiting topological similarities between pre-trained monolingual embedding spaces. Their modus operandi can roughly be described by three main components: C1) unsupervised extraction of a seed dictionary; C2) a self-learning procedure that iteratively refines the dictionary to learn projections of increasingly higher quality; and C3) a set of preprocessing and postprocessing steps (e.g., unit length normalization, mean centering, (de)whitening) BIBREF31 that make the entire learning process more robust.
The induction of fully unsupervised CLWEs is an inherently interesting research topic per se. Nonetheless, the main practical motivation for developing such approaches in the first place is to facilitate the construction of multilingual NLP tools and widen the access to language technology for resource-poor languages and language pairs. However, the first attempts at fully unsupervised CLWE induction failed exactly for these use cases, as shown by sogaard2018on. Therefore, the follow-up work aimed to improve the robustness of unsupervised CLWE induction by introducing more robust self-learning procedures BIBREF24 , BIBREF32 . Besides increased robustness, recent work claims that fully unsupervised projection-based CLWEs can even match or surpass their supervised counterparts BIBREF23 , BIBREF24 , BIBREF27 , BIBREF33 , BIBREF34 .
In this paper, we critically examine these claims on robustness and improved performance of unsupervised CLWEs by running a large-scale evaluation in the bilingual lexicon induction (BLI) task on 15 languages (i.e., 210 languages pairs, see Table 2 in § "Experimental Setup" ). The languages were selected to represent different language families and morphological types, as we argue that fully unsupervised CLWEs have been designed to support exactly these setups. However, we show that even the most robust unsupervised CLWE method BIBREF24 still fails for a large number of language pairs: 87/210 BLI setups are unsuccessful, yielding (near-)zero BLI performance. Further, even when the unsupervised method succeeds, it is because the components C2 (self-learning) and C3 (pre-/post-processing) can mitigate the undesired effects of noisy seed lexicon extraction. We then demonstrate that the combination of C2 and C3 with a small provided seed dictionary (e.g., 500 or 1K pairs) outscores the unsupervised method in all cases, often with a huge margin, and does not fail for any language pair. Furthermore, we show that the most robust unsupervised CLWE approach still fails completely when it relies on monolingual word vectors trained on domain-dissimilar corpora. We also empirically verify that unsupervised approaches cannot outperform weakly supervised approaches also for closely related languages (e.g., Swedish–Danish, Spanish–Catalan).
While the “no supervision at all” premise behind fully unsupervised CLWE methods is indeed seductive, our study strongly suggests that future research efforts should revisit the main motivation behind these methods and focus on designing even more robust solutions, given their current inability to support a wide spectrum of language pairs. In hope of boosting induction of CLWEs for more diverse and distant language pairs, we make all 210 training and test dictionaries used in this work publicly available at: https://github.com/ivulic/panlex-bli.
Methodology
We now dissect a general framework for unsupervised CLWE learning, and show that the “bag of tricks of the trade” used to increase their robustness (which often slips under the radar) can be equally applied to (weakly) supervised projection-based approaches, leading to their fair(er) comparison.
Projection-Based CLWE Approaches
In short, projection-based CLWE methods learn to (linearly) align independently trained monolingual spaces $\mathbf {X}$ and $\mathbf {Z}$ , using a word translation dictionary $D_0$ to guide the alignment process. Let $\mathbf {X}_D \subset \mathbf {X}$ and $\mathbf {Z}_D \subset \mathbf {Z}$ be the row-aligned subsets of monolingual spaces containing vectors of aligned words from $D_0$ . Alignment matrices $\mathbf {X}_D$ and $\mathbf {Z}_D$ are then used to learn orthogonal transformations $\mathbf {W}_x$ and $\mathbf {W}_z$ that define the joint bilingual space $\mathbf {Y} = \mathbf {X}\mathbf {W}_x \cup \mathbf {Z}\mathbf {W}_z$ . While supervised projection-based CLWE models learn the mapping using a provided external (clean) dictionary $D_0$ , their unsupervised counterparts automatically induce the seed dictionary in an unsupervised way (C1) and then refine it in an iterative fashion (C2).
Unsupervised CLWEs. These methods first induce a seed dictionary $D^{(1)}$ leveraging only two unaligned monolingual spaces (C1). While the algorithms for unsupervised seed dictionary induction differ, they all strongly rely on the assumption of similar topological structure between the two pretrained monolingual spaces. Once the seed dictionary is obtained, the two-step iterative self-learning procedure (C2) takes place: 1) a dictionary $D^{(k)}$ is first used to learn the joint space $\mathbf {Y}^{(k)} = \mathbf {X}\mathbf {W}^{(k)}_x \cup \mathbf {Z}\mathbf {W}^{(k)}_z$ ; 2) the nearest neighbours in $\mathbf {Y}^{(k)}$ then form the new dictionary $D^{(k+1)}$ . We illustrate the general structure in Figure 1 .
A recent empirical survey paper BIBREF17 has compared a variety of latest unsupervised CLWE methods BIBREF23 , BIBREF27 , BIBREF33 , BIBREF24 in several downstream tasks (e.g., BLI, cross-lingual information retrieval, document classification). The results of their study indicate that the vecmap model of artetxe2018robust is by far the most robust and best performing unsupervised CLWE model. For the actual results and analyses, we refer the interested reader to the original paper of glavas2019howto. Another recent evaluation paper BIBREF35 as well as our own preliminary BLI tests (not shown for brevity) have further verified their findings. We thus focus on vecmap in our analyses, and base the following description of the components C1-C3 on that model.
Three Key Components
C1. Seed Lexicon Extraction. vecmap induces the initial seed dictionary using the following heuristic: monolingual similarity distributions for words with similar meaning will be similar across languages. The monolingual similarity distributions for the two languages are given as rows (or columns; the matrices are symmetric) of $\mathbf {M}_x = \mathbf {X}\mathbf {X}^T$ and $\mathbf {M}_z = \mathbf {Z}\mathbf {Z}^T$ . For the distributions of similarity scores to be comparable, the values in each row of $\mathbf {M}_x$ and $\mathbf {M}_z$ are first sorted. The initial dictionary $D^{(1)}$ is finally obtained by searching for mutual nearest neighbours between the rows of $\sqrt{\mathbf {M}_x}$ and of $\sqrt{\mathbf {M}_z}$ .
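A simplified sketch of this heuristic is given below; it assumes length-normalized embedding matrices restricted to the most frequent words (4K in the experiments later in the paper) and clips negative similarities before the square root, a detail the description above does not spell out.

```python
import numpy as np

def unsupervised_seed_dictionary(X, Z):
    # X, Z: length-normalized monolingual embedding matrices (top-k frequent words only)
    Mx = np.sqrt(np.clip(np.sort(X @ X.T, axis=1), 0.0, None))   # sorted similarity distributions
    Mz = np.sqrt(np.clip(np.sort(Z @ Z.T, axis=1), 0.0, None))
    Mx /= np.linalg.norm(Mx, axis=1, keepdims=True)
    Mz /= np.linalg.norm(Mz, axis=1, keepdims=True)
    sim = Mx @ Mz.T                     # similarity between similarity distributions
    fwd = sim.argmax(axis=1)            # best target word for each source word
    bwd = sim.argmax(axis=0)            # best source word for each target word
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]   # mutual nearest neighbours
```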
C2. Self-Learning. Not counting the preprocessing and postprocessing steps (component C3), self-learning then iteratively repeats two steps:
1) Let $\mathbf {D}^{(k)}$ be the binary matrix indicating the aligned words in the dictionary $D^{(k)}$ . The orthogonal transformation matrices are then obtained as $\mathbf {W}^{(k)}_x = \mathbf {U}$ and $\mathbf {W}^{(k)}_z = \mathbf {V}$ , where $\mathbf {U}\mathbf {\Sigma }\mathbf {V}^T$ is the singular value decomposition of the matrix $\mathbf {X}^T\mathbf {D}^{(k)}\mathbf {Z}$ . The cross-lingual space of the $k$ -th iteration is then $\mathbf {Y}^{(k)} = \mathbf {X}\mathbf {W}^{(k)}_x \cup \mathbf {Z}\mathbf {W}^{(k)}_z$ .
2) The new dictionary $D^{(k+1)}$ is then built by identifying nearest neighbours in $\mathbf {Y}^{(k)}$ . These can be easily extracted from the matrix $\mathbf {P} = \mathbf {X}\mathbf {W}^{(k)}_x( \mathbf {Z}\mathbf {W}^{(k)}_z)^T$ . All nearest neighbours can be used, or additional symmetry constraints can be imposed to extract only mutual nearest neighbours: all pairs of indices ( $i, j$ ) for which $\mathbf {P}_{ij}$ is the largest value both in row $i$ and column $j$ .
The above procedure, however, often converges to poor local optima. To remedy for this, the second step (i.e., dictionary induction) is extended with techniques that make self-learning more robust. First, the vocabularies of $\mathbf {X}$ and $\mathbf {Z}$ are cut to the top $k$ most frequent words. Second, similarity scores in $\mathbf {P}$ are kept with probability $p$ , and set to zero otherwise. This dropout allows for a wider exploration of possible word pairs in the dictionary and contributes to escaping poor local optima given the noisy seed lexicon in the first iterations.
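The two self-learning steps just described, together with the dropout trick, can be sketched as follows. This is a bare-bones illustration with names of our choosing; it omits the frequency cut-off details, CSLS retrieval, and the S1-S4 steps discussed next.

```python
import numpy as np

def learn_orthogonal_maps(X, Z, pairs):
    # Step 1: for a 0/1 dictionary matrix D, X^T D Z reduces to X_D^T Z_D;
    # the SVD U S V^T of that matrix gives W_x = U and W_z = V.
    src = [i for i, _ in pairs]
    trg = [j for _, j in pairs]
    U, _, Vt = np.linalg.svd(X[src].T @ Z[trg])
    return U, Vt.T                                   # W_x, W_z

def induce_dictionary(X, Z, Wx, Wz, keep_prob=0.1, rng=None):
    # Step 2: nearest neighbours in the current cross-lingual space, with dropout
    # on the similarity scores and a mutual-nearest-neighbour (symmetry) constraint.
    rng = rng or np.random.default_rng(0)
    P = (X @ Wx) @ (Z @ Wz).T
    P = P * (rng.random(P.shape) < keep_prob)        # keep each score with probability p
    fwd, bwd = P.argmax(axis=1), P.argmax(axis=0)
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

def self_learn(X, Z, seed_pairs, iterations=10):
    pairs = seed_pairs
    for _ in range(iterations):                      # vecmap uses a convergence criterion instead
        Wx, Wz = learn_orthogonal_maps(X, Z, pairs)
        pairs = induce_dictionary(X, Z, Wx, Wz)
    return Wx, Wz
```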
C3. Preprocessing and Postprocessing Steps. While iteratively learning orthogonal transformations $\mathbf {W}_{x}$ and $\mathbf {W}_{z}$ for $\mathbf {X}$ and $\mathbf {Z}$ is the central step of unsupervised projection-based CLWE methods, preprocessing and postprocessing techniques are additionally applied before and after the transformation. While such techniques are often overlooked in model comparisons, they may have a great impact on the model's final performance, as we validate in § "Results and Discussion" . We briefly summarize two pre-processing (S1 and S2) and post-processing (S3 and S4) steps used in our evaluation, originating from the framework of artetxe2018generalizing.
S1) Normalization and mean centering. We first apply unit length normalization: all vectors in $\mathbf {X}$ and $\mathbf {Z}$ are normalized to have a unit Euclidean norm. Following that, $\mathbf {X}$ and $\mathbf {Z}$ are mean centered dimension-wise and then again length-normalized.
S2) Whitening. ZCA whitening BIBREF36 is applied on (S1-processed) $\mathbf {X}$ and $\mathbf {Z}$ : it transforms the matrices such that each dimension has unit variance and that the dimensions are uncorrelated. Intuitively, the vector spaces are easier to align along directions of high variance.
S3) Dewhitening. A transformation inverse to S2: for improved performance it is important to restore the variance information after the projection, if whitening was applied in S2 BIBREF31 .
S4) Symmetric re-weighting. This step attempts to further align the embeddings in the cross-lingual embedding space by measuring how well a dimension in the space correlates across languages for the current iteration dictionary $D^{(k)}$ . The best results are obtained when re-weighting is neutral to the projection direction, that is, when it is applied symmetrically in both languages.
In the actual implementation S1 is applied only once, before self-learning. S2, S3 and S4 are applied in each self-learning iteration.
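For concreteness, the two preprocessing steps can be sketched as below; S3 (de-whitening) applies the inverse of the whitening transform after the projection, and S4 is omitted from this sketch.

```python
import numpy as np

def s1_normalize(X):
    # S1: unit length normalization, mean centering, unit length normalization again
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    X = X - X.mean(axis=0, keepdims=True)
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def s2_zca_whiten(X, eps=1e-8):
    # S2: ZCA whitening -- unit variance, uncorrelated dimensions
    cov = (X.T @ X) / X.shape[0]
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return X @ W, W          # W is kept so that its inverse can be used for de-whitening (S3)
```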
Model Configurations. Note that C2 and C3 can be equally used on top of any (provided) seed lexicon (i.e., $D^{(1)}$ := $D_0$ ) to enable weakly supervised learning, as we propose here. In fact, the variations of the three key components, C1) seed lexicon, C2) self-learning, and C3) preprocessing and postprocessing, construct various model configurations which can be analyzed to probe the importance of each component in the CLWE induction process. A selection of representative configurations evaluated later in § "Results and Discussion" is summarized in Table 1 .
Experimental Setup
Evaluation Task. Our task is bilingual lexicon induction (BLI). It has become the de facto standard evaluation for projection-based CLWEs BIBREF16 , BIBREF17 . In short, after a shared CLWE space has been induced, the task is to retrieve target language translations for a test set of source language words. Its lightweight nature allows us to conduct a comprehensive evaluation across a large number of language pairs. Since BLI is cast as a ranking task, following glavas2019howto we use mean average precision (MAP) as the main evaluation metric: in our BLI setup with only one correct translation for each “query” word, MAP is equal to mean reciprocal rank (MRR).
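With a single gold translation per query word, the metric reduces to the following computation (a sketch assuming already projected and length-normalized embedding matrices).

```python
import numpy as np

def bli_mrr(S, T, test_pairs):
    # S, T: projected, length-normalized source/target embedding matrices
    # test_pairs: list of (source_index, gold_target_index)
    reciprocal_ranks = []
    for i, gold in test_pairs:
        sims = T @ S[i]                              # cosine similarities to all target words
        rank = int((sims > sims[gold]).sum()) + 1    # 1-based rank of the gold translation
        reciprocal_ranks.append(1.0 / rank)
    return float(np.mean(reciprocal_ranks))
```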
(Selection of) Language Pairs. Our selection of test languages is guided by the following goals: a) following recent initiatives in other NLP research (e.g., for language modeling) BIBREF39 , BIBREF40 , we aim to ensure the coverage of different genealogical and typological language properties, and b) we aim to analyze a large set of language pairs and offer new evaluation data which extends and surpasses other work in the CLWE literature. These two properties will facilitate analyses between (dis)similar language pairs and offer a comprehensive set of evaluation setups that test the robustness and portability of fully unsupervised CLWEs. The final list of 15 diverse test languages is provided in Table 2 , and includes samples from different languages types and families. We run BLI evaluations for all language pairs in both directions, for a total of 15 $\times $ 14=210 BLI setups.
Monolingual Embeddings. We use the 300-dim vectors of Grave:2018lrec for all 15 languages, pretrained on Common Crawl and Wikipedia with fastText BIBREF41 . We trim all vocabularies to the 200K most frequent words.
Training and Test Dictionaries. They are derived from PanLex BIBREF43 , BIBREF44 , which was used in prior work on cross-lingual word embeddings BIBREF45 , BIBREF46 . PanLex currently spans around 1,300 language varieties with over 12M expressions: it offers some support and supervision also for low-resource language pairs BIBREF47 . For each source language ( $L_1$ ), we automatically translate their vocabulary words (if they are present in PanLex) to all 14 target ( $L_2$ ) languages. To ensure the reliability of the translation pairs, we retain only unigrams found in the vocabularies of the respective $L_2$ monolingual spaces which scored above a PanLex-predefined threshold.
As in prior work BIBREF23 , BIBREF17 , we then reserve the 5K pairs created from the more frequent $L_1$ words for training, while the next 2K pairs are used for test. Smaller training dictionaries (1K and 500 pairs) are created by again selecting pairs comprising the most frequent $L_1$ words.
Training Setup. In all experiments, we set the hyper-parameters to values that were tuned in prior research. When extracting the unsupervised seed lexicon, the 4K most frequent words of each language are used; self-learning operates on the 20K most frequent words of each language; with dropout the keep probability $p$ is 0.1; CSLS with $k=10$ nearest neighbors BIBREF24 .
Again, Table 1 lists the main model configurations in our comparison. For the fully unsupervised model we always report the best performing configuration after probing different self-learning strategies (i.e., +sl, +sl+nod, and +sl+sym are tested). The results for unsupervised are always reported as averages over 5 restarts: this means that with unsupervised we count BLI setups as unsuccessful only if the results are close to zero in all 5/5 runs. orthg-super is the standard supervised model with orthogonal projections from prior work BIBREF21 , BIBREF17 .
Results and Discussion
Main BLI results averaged over each source language ( $L_1$ ) are provided in Table 3 and Table 4 . We now summarize and discuss the main findings across several dimensions of comparison.
Unsupervised vs. (Weakly) Supervised. First, when exactly the same components C2 and C3 are used, unsupervised is unable to outperform a (weakly) supervised full+sl+sym variant, and the gap in final performance is often substantial. In fact, full+sl+sym and full+sl+nod outperform the best unsupervised for all 210/210 BLI setups: we observe the same phenomenon with varying dictionary sizes, that is, it equally holds when we seed self-learning with 5K, 1K, and 500 translation pairs, see also Figure 2 . This also suggests that the main reason why unsupervised approaches were considered on-par with supervised approaches in prior work BIBREF23 , BIBREF24 is because they were not compared under fair circumstances: while unsupervised relied heavily on the components C2 and C3, these were omitted when running supervised baselines. Our unbiased comparison reveals that there is a huge gap even when supervised projection-based approaches consume only several hundred translation pairs to initiate self-learning.
Are Unsupervised CLWEs Robust? The results also indicate that, contrary to the beliefs established by very recent work BIBREF24 , BIBREF30 , fully unsupervised approaches are still prone to getting stuck in local optima, and still suffer from robustness issues when dealing with distant language pairs: 87 out of 210 BLI setups ( $=41.4\%$ ) result in (near-)zero BLI performance, see also Table 4 . At the same time, weakly supervised methods with a seed lexicon of 1k or 500 pairs do not suffer from the robustness problem and always converge to a good solution, as also illustrated by the results reported in Table 5 .
How Important are Preprocessing and Postprocessing? The comparisons between orthg-super (and orthg+sl+sym) on the one hand, and full-super (and full+sl+sym) on the other hand clearly indicate that the component C3 plays a substantial role in effective CLWE learning. full-super, which employs all steps S1-S4 (see § "Methodology" ), outperforms orthg-super in 208/210 setups with $|D_0|$ =5k and in 210/210 setups with $|D_0|$ =1k. Similarly, full+sl+sym is better than orthg+sl+sym in 210/210 setups (both for $|D_0|$ =1k,5k). The scores also indicate that dropout with self-learning is useful only when we work with noisy unsupervised seed lexicons: full+sl+nod and full+sl+sym without dropout consistently outperform full+sl across the board.
How Important is (Robust) Self-Learning? We note that the best self-learning method is often useful even when $|D_0|=5k$ (i.e., full+sl+sym is better than full-super in 164/210 setups). However, the importance of robust self-learning gets more pronounced as we decrease the size of $D_0$ : full+sl+sym is better than full-super in 210/210 setups when $|D_0|=500$ or $|D_0|=1,000$ . The gap between the two models, as shown in Figure 2 , increases dramatically in favor of full+sl+sym as we decrease $|D_0|$ .
Again, just comparing full-super and unsupervised in Figure 2 might give a false impression that fully unsupervised CLWE methods can match their supervised counterparts, but the comparison to full+sl+sym reveals the true extent of performance drop when we abandon even weak supervision. The scores also reveal that the choice of self-learning (C2) does matter: all best performing BLI runs with $|D_0|=1k$ are obtained by two configs with self-learning, and full+sl+sym is the best configuration for 177/210 setups (see Table 4 ).
Language Pairs. As suggested before by sogaard2018on and further verified by glavas2019howto and doval2019onthe, the language pair at hand can have a huge impact on CLWE induction: the adversarial method of conneau2018word often gets stuck in poor local optima and yields degenerate solutions for distant language pairs such as English-Finnish. More recent CLWE methods BIBREF24 , BIBREF30 focus on mitigating this robustness issue. However, they still rely on one critical assumption which leads them to degraded performance for distant language pairs: they assume approximate isomorphism BIBREF49 , BIBREF48 between monolingual embedding spaces to learn the initial seed dictionary. In other words, they assume very similar geometric constellations between two monolingual spaces: due to the Zipfian phenomena in language BIBREF50 such near-isomorphism can be satisfied only for similar languages and for similar domains used for training monolingual vectors. This property is reflected in the results reported in Table 3 , the number of unsuccessful setups in Table 4 , as well as later in Figure 4 .
For instance, the largest number of unsuccessful BLI setups with the unsupervised model is reported for Korean, Thai (a tonal language), and Basque (a language isolate): their morphological and genealogical properties are furthest away from other languages in our comparison. A substantial number of unsuccessful setups is also observed with other two language outliers from our set (see Table 2 again), Georgian and Indonesian, as well as with morphologically-rich languages such as Estonian or Turkish.
One setting in which fully unsupervised methods did show impressive results in prior work are similar language pairs. However, even in these settings when the comparison to the weakly supervised full-super+sym is completely fair (i.e., the same components C2 and C3 are used for both), unsupervised still falls short of full-super+sym. These results for three source languages are summarized in Figure 3 . What is more, one could argue that we do not need unsupervised CLWEs for similar languages in the first place: we can harvest cheap supervision here, e.g., cognates. The main motivation behind unsupervised approaches is to support dissimilar and resource-poor language pairs for which supervision cannot be guaranteed.
Domain Differences. Finally, we also verify that unsupervised CLWEs still cannot account for domain differences when training monolingual vectors. We rely on the probing test of sogaard2018on: 300-dim fastText vectors are trained on 1.1M sentences on three corpora: 1) EuroParl.v7 BIBREF51 (parliamentary proceedings); 2) Wikipedia BIBREF52 , and 3) EMEA BIBREF53 (medical), and BLI evaluation for three language pairs is conducted on standard MUSE BLI test sets BIBREF23 . The results, summarized in Figure 4 , reveal that unsupervised methods are able to yield a good solution only when there is no domain mismatch and for the pair with two most similar languages (English-Spanish), again questioning their robustness and portability to truly low-resource and more challenging setups. Weakly supervised methods ( $|D_0|=500$ or $D_0$ seeded with identical strings), in contrast, yield good solutions for all setups.
Further Discussion and Conclusion
The superiority of weakly supervised methods (e.g., full+sl+sym) over unsupervised methods is especially pronounced for distant and typologically heterogeneous language pairs. However, our study also indicates that even carefully engineered projection-based methods with some seed supervision yield lower absolute performance for such pairs. While we have witnessed the proliferation of fully unsupervised CLWE models recently, some fundamental questions still remain. For instance, the underlying assumption of all projection-based methods (both supervised and unsupervised) is the topological similarity between monolingual spaces, which is why standard simple linear projections result in lower absolute BLI scores for distant pairs (see Table 4 and results in the supplemental material). Unsupervised approaches even exploit the assumption twice as their seed extraction is fully based on the topological similarity.
Future work should move beyond the restrictive assumption by exploring new methods that can, e.g., 1) increase the isomorphism between monolingual spaces BIBREF54 by distinguishing between language-specific and language-pair-invariant subspaces; 2) learn effective non-linear or multiple local projections between monolingual spaces similar to the preliminary work of nakashole2018norma; 3) similar to vulic2016on and Lubin:2019naacl “denoisify” seed lexicons during the self-learning procedure. For instance, keeping only mutual/symmetric nearest neighbour as in full+sl+sym can be seen as a form of rudimentary denoisifying: it is indicative to see that the best overall performance in this work is reported with that model configuration.
Further, the most important contributions of unsupervised CLWE models are, in fact, the improved and more robust self-learning procedures (component C2) and technical enhancements (component C3). In this work we have demonstrated that these components can be equally applied to weakly supervised approaches: starting from a set of only several hundred pairs, they can guarantee consistently improved performance across the board. As there is still no clear-cut use case scenario for unsupervised CLWEs, instead of “going fully unsupervised”, one pragmatic approach to widen the scope of CLWE learning and its application might invest more effort into extracting at least some seed supervision for a variety of language pairs BIBREF22 . This finding aligns well with the ongoing initiatives of the PanLex project BIBREF44 and the ASJP database BIBREF56 , which aim to collate at least some translation pairs in most of the world’s languages.
Finally, this paper demonstrates that, in order to enable fair comparisons, future work on CLWEs should focus on evaluating the CLWE methods' constituent components (e.g, components C1-C3 from this work) instead of full-blown composite systems directly. One goal of the paper is to acknowledge that the work on fully unsupervised CLWE methods has indeed advanced state-of-the-art in cross-lingual word representation learning by offering new solutions also to weakly supervised CLWE methods. However, the robustness problems are still prominent with fully unsupervised CLWEs, and future work should invest more time and effort into developing more robust and more effective methods, e.g., by reaching beyond projection-based methods towards joint approaches BIBREF16 , BIBREF57 .
Acknowledgments
This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). The work of Goran Glavaš is supported by the Baden-Württemberg Stiftung (AGREE grant of the Eliteprogramm). Roi Reichart is partially funded by ISF personal grants No. 1625/18. We thank the three anonymous reviewers for their encouraging comments and suggestions. | the latest CLWE developments almost exclusively focus on fully unsupervised approaches BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 : they fully abandon any source of (even weak) supervision and extract the initial seed dictionary by exploiting topological similarities between pre-trained monolingual embedding spaces |
6e2899c444baaeb0469599f65722780894f90f29 | 6e2899c444baaeb0469599f65722780894f90f29_0 | Q: How does BLI measure alignment quality?
Text: Introduction and Motivation
The wide use and success of monolingual word embeddings in NLP tasks BIBREF0 , BIBREF1 has inspired further research focus on the induction of cross-lingual word embeddings (CLWEs). CLWE methods learn a shared cross-lingual word vector space where words with similar meanings obtain similar vectors regardless of their actual language. CLWEs benefit cross-lingual NLP, enabling multilingual modeling of meaning and supporting cross-lingual transfer for downstream tasks and resource-lean languages. CLWEs provide invaluable cross-lingual knowledge for, inter alia, bilingual lexicon induction BIBREF2 , BIBREF3 , information retrieval BIBREF4 , BIBREF5 , machine translation BIBREF6 , BIBREF7 , document classification BIBREF8 , cross-lingual plagiarism detection BIBREF9 , domain adaptation BIBREF10 , cross-lingual POS tagging BIBREF11 , BIBREF12 , and cross-lingual dependency parsing BIBREF13 , BIBREF14 .
The landscape of CLWE methods has recently been dominated by the so-called projection-based methods BIBREF15 , BIBREF16 , BIBREF17 . They align two monolingual embedding spaces by learning a projection/mapping based on a training dictionary of translation pairs. Besides their simple conceptual design and competitive performance, their popularity originates from the fact that they rely on rather weak cross-lingual supervision. Originally, the seed dictionaries typically spanned several thousand word pairs BIBREF15 , BIBREF18 , BIBREF19 , but more recent work has shown that CLWEs can be induced with even weaker supervision from small dictionaries spanning several hundred pairs BIBREF20 , identical strings BIBREF21 , or even only shared numerals BIBREF22 .
Taking the idea of reducing cross-lingual supervision to the extreme, the latest CLWE developments almost exclusively focus on fully unsupervised approaches BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 : they fully abandon any source of (even weak) supervision and extract the initial seed dictionary by exploiting topological similarities between pre-trained monolingual embedding spaces. Their modus operandi can roughly be described by three main components: C1) unsupervised extraction of a seed dictionary; C2) a self-learning procedure that iteratively refines the dictionary to learn projections of increasingly higher quality; and C3) a set of preprocessing and postprocessing steps (e.g., unit length normalization, mean centering, (de)whitening) BIBREF31 that make the entire learning process more robust.
The induction of fully unsupervised CLWEs is an inherently interesting research topic per se. Nonetheless, the main practical motivation for developing such approaches in the first place is to facilitate the construction of multilingual NLP tools and widen the access to language technology for resource-poor languages and language pairs. However, the first attempts at fully unsupervised CLWE induction failed exactly for these use cases, as shown by sogaard2018on. Therefore, the follow-up work aimed to improve the robustness of unsupervised CLWE induction by introducing more robust self-learning procedures BIBREF24 , BIBREF32 . Besides increased robustness, recent work claims that fully unsupervised projection-based CLWEs can even match or surpass their supervised counterparts BIBREF23 , BIBREF24 , BIBREF27 , BIBREF33 , BIBREF34 .
In this paper, we critically examine these claims on robustness and improved performance of unsupervised CLWEs by running a large-scale evaluation in the bilingual lexicon induction (BLI) task on 15 languages (i.e., 210 languages pairs, see Table 2 in § "Experimental Setup" ). The languages were selected to represent different language families and morphological types, as we argue that fully unsupervised CLWEs have been designed to support exactly these setups. However, we show that even the most robust unsupervised CLWE method BIBREF24 still fails for a large number of language pairs: 87/210 BLI setups are unsuccessful, yielding (near-)zero BLI performance. Further, even when the unsupervised method succeeds, it is because the components C2 (self-learning) and C3 (pre-/post-processing) can mitigate the undesired effects of noisy seed lexicon extraction. We then demonstrate that the combination of C2 and C3 with a small provided seed dictionary (e.g., 500 or 1K pairs) outscores the unsupervised method in all cases, often with a huge margin, and does not fail for any language pair. Furthermore, we show that the most robust unsupervised CLWE approach still fails completely when it relies on monolingual word vectors trained on domain-dissimilar corpora. We also empirically verify that unsupervised approaches cannot outperform weakly supervised approaches also for closely related languages (e.g., Swedish–Danish, Spanish–Catalan).
While the “no supervision at all” premise behind fully unsupervised CLWE methods is indeed seductive, our study strongly suggests that future research efforts should revisit the main motivation behind these methods and focus on designing even more robust solutions, given their current inability to support a wide spectrum of language pairs. In hope of boosting induction of CLWEs for more diverse and distant language pairs, we make all 210 training and test dictionaries used in this work publicly available at: https://github.com/ivulic/panlex-bli.
Methodology
We now dissect a general framework for unsupervised CLWE learning, and show that the “bag of tricks of the trade” used to increase their robustness (which often slips under the radar) can be equally applied to (weakly) supervised projection-based approaches, leading to their fair(er) comparison.
Projection-Based CLWE Approaches
In short, projection-based CLWE methods learn to (linearly) align independently trained monolingual spaces $\mathbf {X}$ and $\mathbf {Z}$ , using a word translation dictionary $D_0$ to guide the alignment process. Let $\mathbf {X}_D \subset \mathbf {X}$ and $\mathbf {Z}_D \subset \mathbf {Z}$ be the row-aligned subsets of monolingual spaces containing vectors of aligned words from $D_0$ . Alignment matrices $\mathbf {X}_D$ and $\mathbf {Z}_D$ are then used to learn orthogonal transformations $\mathbf {W}_x$ and $\mathbf {W}_z$ that define the joint bilingual space $\mathbf {Y} = \mathbf {X}\mathbf {W}_x \cup \mathbf {Z}\mathbf {W}_z$ . While supervised projection-based CLWE models learn the mapping using a provided external (clean) dictionary $D_0$ , their unsupervised counterparts automatically induce the seed dictionary in an unsupervised way (C1) and then refine it in an iterative fashion (C2).
Unsupervised CLWEs. These methods first induce a seed dictionary $D^{(1)}$ leveraging only two unaligned monolingual spaces (C1). While the algorithms for unsupervised seed dictionary induction differ, they all strongly rely on the assumption of similar topological structure between the two pretrained monolingual spaces. Once the seed dictionary is obtained, the two-step iterative self-learning procedure (C2) takes place: 1) a dictionary $D^{(k)}$ is first used to learn the joint space $\mathbf {Y}^{(k)} = \mathbf {X}\mathbf {W}^{(k)}_x \cup \mathbf {Z}\mathbf {W}^{(k)}_z$ ; 2) the nearest neighbours in $\mathbf {Y}^{(k)}$ then form the new dictionary $D^{(k+1)}$ . We illustrate the general structure in Figure 1 .
A recent empirical survey paper BIBREF17 has compared a variety of latest unsupervised CLWE methods BIBREF23 , BIBREF27 , BIBREF33 , BIBREF24 in several downstream tasks (e.g., BLI, cross-lingual information retrieval, document classification). The results of their study indicate that the vecmap model of artetxe2018robust is by far the most robust and best performing unsupervised CLWE model. For the actual results and analyses, we refer the interested reader to the original paper of glavas2019howto. Another recent evaluation paper BIBREF35 as well as our own preliminary BLI tests (not shown for brevity) have further verified their findings. We thus focus on vecmap in our analyses, and base the following description of the components C1-C3 on that model.
Three Key Components
C1. Seed Lexicon Extraction. vecmap induces the initial seed dictionary using the following heuristic: monolingual similarity distributions for words with similar meaning will be similar across languages. The monolingual similarity distributions for the two languages are given as rows (or columns; the matrices are symmetric) of $\mathbf {M}_x = \mathbf {X}\mathbf {X}^T$ and $\mathbf {M}_z = \mathbf {Z}\mathbf {Z}^T$ . For the distributions of similarity scores to be comparable, the values in each row of $\mathbf {M}_x$ and $\mathbf {M}_z$ are first sorted. The initial dictionary $D^{(1)}$ is finally obtained by searching for mutual nearest neighbours between the rows of $\sqrt{\mathbf {M}_x}$ and of $\sqrt{\mathbf {M}_z}$ .
C2. Self-Learning. Not counting the preprocessing and postprocessing steps (component C3), self-learning then iteratively repeats two steps:
1) Let $\mathbf {D}^{(k)}$ be the binary matrix indicating the aligned words in the dictionary $D^{(k)}$ . The orthogonal transformation matrices are then obtained as $\mathbf {W}^{(k)}_x = \mathbf {U}$ and $\mathbf {W}^{(k)}_z = \mathbf {V}$ , where $\mathbf {U}\mathbf {\Sigma }\mathbf {V}^T$ is the singular value decomposition of the matrix $\mathbf {X}^T\mathbf {D}^{(k)}\mathbf {Z}$ . The cross-lingual space of the $k$ -th iteration is then $\mathbf {Y}^{(k)} = \mathbf {X}\mathbf {W}^{(k)}_x \cup \mathbf {Z}\mathbf {W}^{(k)}_z$ .
2) The new dictionary $D^{(k+1)}$ is then built by identifying nearest neighbours in $\mathbf {Y}^{(k)}$ . These can be easily extracted from the matrix $\mathbf {P} = \mathbf {X}\mathbf {W}^{(k)}_x( \mathbf {Z}\mathbf {W}^{(k)}_z)^T$ . All nearest neighbours can be used, or additional symmetry constraints can be imposed to extract only mutual nearest neighbours: all pairs of indices ( $i, j$ ) for which $\mathbf {P}_{ij}$ is the largest value both in row $i$ and column $j$ .
The above procedure, however, often converges to poor local optima. To remedy this, the second step (i.e., dictionary induction) is extended with techniques that make self-learning more robust. First, the vocabularies of $\mathbf {X}$ and $\mathbf {Z}$ are cut to the top $k$ most frequent words. Second, similarity scores in $\mathbf {P}$ are kept with probability $p$, and set to zero otherwise. This dropout allows for a wider exploration of possible word pairs in the dictionary and contributes to escaping poor local optima given the noisy seed lexicon in the first iterations.
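A compact sketch of one C2 iteration, including the dropout trick, is shown below. It assumes the current dictionary is given as a binary alignment matrix over already preprocessed (and frequency-trimmed) matrices $\mathbf{X}$ and $\mathbf{Z}$; the function name and data layout are ours, not vecmap's.

```python
import numpy as np

def self_learning_step(X, Z, D, keep_prob=0.1, rng=None):
    """Sketch of one C2 iteration. D: binary |Vx| x |Vz| matrix marking the
    current dictionary; X, Z are assumed preprocessed and cut to the top-k
    most frequent words. Returns the new maps and dictionary."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # 1) Orthogonal maps from the SVD of X^T D Z.
    U, _, Vt = np.linalg.svd(X.T @ D @ Z)
    Wx, Wz = U, Vt.T
    # Cross-lingual space of this iteration: rows of X Wx and Z Wz.
    P = (X @ Wx) @ (Z @ Wz).T
    # Dropout: keep each similarity score with probability keep_prob.
    P = P * (rng.random(P.shape) < keep_prob)
    # 2) New dictionary from mutual nearest neighbours in P.
    fwd, bwd = P.argmax(axis=1), P.argmax(axis=0)
    D_new = np.zeros_like(D, dtype=float)
    for i, j in enumerate(fwd):
        if bwd[j] == i:
            D_new[i, j] = 1.0
    return Wx, Wz, D_new
```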
C3. Preprocessing and Postprocessing Steps. While iteratively learning orthogonal transformations $\mathbf {W}_{x}$ and $\mathbf {W}_{z}$ for $\mathbf {X}$ and $\mathbf {Z}$ is the central step of unsupervised projection-based CLWE methods, preprocessing and postprocessing techniques are additionally applied before and after the transformation. While such techniques are often overlooked in model comparisons, they may have a great impact on the model's final performance, as we validate in § "Results and Discussion" . We briefly summarize two pre-processing (S1 and S2) and post-processing (S3 and S4) steps used in our evaluation, originating from the framework of artetxe2018generalizing.
S1) Normalization and mean centering. We first apply unit length normalization: all vectors in $\mathbf {X}$ and $\mathbf {Z}$ are normalized to have a unit Euclidean norm. Following that, $\mathbf {X}$ and $\mathbf {Z}$ are mean centered dimension-wise and then again length-normalized.
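As a reference, S1 amounts to only a few lines of NumPy; this is a sketch under the assumption that rows are word vectors.

```python
import numpy as np

def normalize_s1(X):
    """S1 sketch: unit-length normalization, dimension-wise mean centering,
    then length normalization again."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    X = X - X.mean(axis=0, keepdims=True)
    return X / np.linalg.norm(X, axis=1, keepdims=True)
```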
S2) Whitening. ZCA whitening BIBREF36 is applied on (S1-processed) $\mathbf {X}$ and $\mathbf {Z}$ : it transforms the matrices such that each dimension has unit variance and that the dimensions are uncorrelated. Intuitively, the vector spaces are easier to align along directions of high variance.
S3) Dewhitening. A transformation inverse to S2: for improved performance it is important to restore the variance information after the projection, if whitening was applied in S2 BIBREF31 .
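The core of S2 and S3 can be sketched as follows. This is a simplified, per-matrix illustration (the full framework of artetxe2018generalizing interleaves these steps with the projection and re-weighting in each iteration), and the epsilon guard is our addition for numerical stability.

```python
import numpy as np

def zca_matrices(X, eps=1e-10):
    """Sketch of S2/S3: ZCA whitening matrix C^(-1/2) and its inverse
    C^(1/2), computed from a mean-centered matrix X."""
    C = X.T @ X / X.shape[0]                 # covariance estimate
    vals, vecs = np.linalg.eigh(C)
    vals = np.clip(vals, eps, None)          # numerical guard (our addition)
    whiten = vecs @ np.diag(vals ** -0.5) @ vecs.T    # S2: apply as X @ whiten
    dewhiten = vecs @ np.diag(vals ** 0.5) @ vecs.T   # S3: restore variance
    return whiten, dewhiten
```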
S4) Symmetric re-weighting. This step attempts to further align the embeddings in the cross-lingual embedding space by measuring how well a dimension in the space correlates across languages for the current iteration dictionary $D^{(k)}$ . The best results are obtained when re-weighting is neutral to the projection direction, that is, when it is applied symmetrically in both languages.
In the actual implementation S1 is applied only once, before self-learning. S2, S3 and S4 are applied in each self-learning iteration.
Model Configurations. Note that C2 and C3 can be equally used on top of any (provided) seed lexicon (i.e., $D^{(1)}$ := $D_0$ ) to enable weakly supervised learning, as we propose here. In fact, the variations of the three key components, C1) seed lexicon, C2) self-learning, and C3) preprocessing and postprocessing, construct various model configurations which can be analyzed to probe the importance of each component in the CLWE induction process. A selection of representative configurations evaluated later in § "Results and Discussion" is summarized in Table 1 .
Experimental Setup
Evaluation Task. Our task is bilingual lexicon induction (BLI). It has become the de facto standard evaluation for projection-based CLWEs BIBREF16 , BIBREF17 . In short, after a shared CLWE space has been induced, the task is to retrieve target language translations for a test set of source language words. Its lightweight nature allows us to conduct a comprehensive evaluation across a large number of language pairs. Since BLI is cast as a ranking task, following glavas2019howto we use mean average precision (MAP) as the main evaluation metric: in our BLI setup with only one correct translation for each “query” word, MAP is equal to mean reciprocal rank (MRR).
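Since each query word has exactly one gold translation in our setup, the metric reduces to mean reciprocal rank, which can be computed as in the following sketch (the data structures are illustrative).

```python
def bli_mrr(ranked, gold):
    """Sketch: MRR over a BLI test set with one gold translation per query.
    ranked: dict query word -> ranked list of retrieved target words.
    gold:   dict query word -> gold target word."""
    total = 0.0
    for query, translation in gold.items():
        candidates = ranked.get(query, [])
        if translation in candidates:
            total += 1.0 / (candidates.index(translation) + 1)
    return total / len(gold)
```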
(Selection of) Language Pairs. Our selection of test languages is guided by the following goals: a) following recent initiatives in other NLP research (e.g., for language modeling) BIBREF39 , BIBREF40 , we aim to ensure the coverage of different genealogical and typological language properties, and b) we aim to analyze a large set of language pairs and offer new evaluation data which extends and surpasses other work in the CLWE literature. These two properties will facilitate analyses between (dis)similar language pairs and offer a comprehensive set of evaluation setups that test the robustness and portability of fully unsupervised CLWEs. The final list of 15 diverse test languages is provided in Table 2 , and includes samples from different language types and families. We run BLI evaluations for all language pairs in both directions, for a total of 15 $\times $ 14=210 BLI setups.
Monolingual Embeddings. We use the 300-dim vectors of Grave:2018lrec for all 15 languages, pretrained on Common Crawl and Wikipedia with fastText BIBREF41 . We trim all vocabularies to the 200K most frequent words.
Training and Test Dictionaries. They are derived from PanLex BIBREF43 , BIBREF44 , which was used in prior work on cross-lingual word embeddings BIBREF45 , BIBREF46 . PanLex currently spans around 1,300 language varieties with over 12M expressions: it offers some support and supervision also for low-resource language pairs BIBREF47 . For each source language ( $L_1$ ), we automatically translate their vocabulary words (if they are present in PanLex) to all 14 target ( $L_2$ ) languages. To ensure the reliability of the translation pairs, we retain only unigrams found in the vocabularies of the respective $L_2$ monolingual spaces which scored above a PanLex-predefined threshold.
As in prior work BIBREF23 , BIBREF17 , we then reserve the 5K pairs created from the more frequent $L_1$ words for training, while the next 2K pairs are used for test. Smaller training dictionaries (1K and 500 pairs) are created by again selecting pairs comprising the most frequent $L_1$ words.
Training Setup. In all experiments, we set the hyper-parameters to values that were tuned in prior research. When extracting the unsupervised seed lexicon, the 4K most frequent words of each language are used; self-learning operates on the 20K most frequent words of each language; with dropout the keep probability $p$ is 0.1; CSLS with $k=10$ nearest neighbors BIBREF24 .
Again, Table 1 lists the main model configurations in our comparison. For the fully unsupervised model we always report the best performing configuration after probing different self-learning strategies (i.e., +sl, +sl+nod, and +sl+sym are tested). The results for unsupervised are always reported as averages over 5 restarts: this means that with unsupervised we count BLI setups as unsuccessful only if the results are close to zero in all 5/5 runs. orthg-super is the standard supervised model with orthogonal projections from prior work BIBREF21 , BIBREF17 .
Results and Discussion
Main BLI results averaged over each source language ( $L_1$ ) are provided in Table 3 and Table 4 . We now summarize and discuss the main findings across several dimensions of comparison.
Unsupervised vs. (Weakly) Supervised. First, when exactly the same components C2 and C3 are used, unsupervised is unable to outperform a (weakly) supervised full+sl+sym variant, and the gap in final performance is often substantial. In fact, full+sl+sym and full+sl+nod outperform the best unsupervised for all 210/210 BLI setups: we observe the same phenomenon with varying dictionary sizes, that is, it equally holds when we seed self-learning with 5K, 1K, and 500 translation pairs, see also Figure 2 . This also suggests that the main reason why unsupervised approaches were considered on-par with supervised approaches in prior work BIBREF23 , BIBREF24 is because they were not compared under fair circumstances: while unsupervised relied heavily on the components C2 and C3, these were omitted when running supervised baselines. Our unbiased comparison reveals that there is a huge gap even when supervised projection-based approaches consume only several hundred translation pairs to initiate self-learning.
Are Unsupervised CLWEs Robust? The results also indicate that, contrary to the beliefs established by very recent work BIBREF24 , BIBREF30 , fully unsupervised approaches are still prone to getting stuck in local optima, and still suffer from robustness issues when dealing with distant language pairs: 87 out of 210 BLI setups ( $=41.4\%$ ) result in (near-)zero BLI performance, see also Table 4 . At the same time, weakly supervised methods with a seed lexicon of 1k or 500 pairs do not suffer from the robustness problem and always converge to a good solution, as also illustrated by the results reported in Table 5 .
How Important are Preprocessing and Postprocessing? The comparisons between orthg-super (and orthg+sl+sym) on the one hand, and full-super (and full+sl+sym) on the other hand clearly indicate that the component C3 plays a substantial role in effective CLWE learning. full-super, which employs all steps S1-S4 (see § "Methodology" ), outperforms orthg-super in 208/210 setups with $|D_0|$ =5k and in 210/210 setups with $|D_0|$ =1k. Similarly, full+sl+sym is better than orthg+sl+sym in 210/210 setups (both for $|D_0|$ =1k,5k). The scores also indicate that dropout with self-learning is useful only when we work with noisy unsupervised seed lexicons: full+sl+nod and full+sl+sym without dropout consistently outperform full+sl across the board.
How Important is (Robust) Self-Learning? We note that the best self-learning method is often useful even when $|D_0|=5k$ (i.e., full+sl+sym is better than full-super in 164/210 setups). However, the importance of robust self-learning gets more pronounced as we decrease the size of $D_0$ : full+sl+sym is better than full-super in 210/210 setups when $|D_0|=500$ or $|D_0|=1,000$ . The gap between the two models, as shown in Figure 2 , increases dramatically in favor of full+sl+sym as we decrease $|D_0|$ .
Again, just comparing full-super and unsupervised in Figure 2 might give a false impression that fully unsupervised CLWE methods can match their supervised counterparts, but the comparison to full+sl+sym reveals the true extent of performance drop when we abandon even weak supervision. The scores also reveal that the choice of self-learning (C2) does matter: all best performing BLI runs with $|D_0|=1k$ are obtained by two configs with self-learning, and full+sl+sym is the best configuration for 177/210 setups (see Table 4 ).
Language Pairs. As suggested before by sogaard2018on and further verified by glavas2019howto and doval2019onthe, the language pair at hand can have a huge impact on CLWE induction: the adversarial method of conneau2018word often gets stuck in poor local optima and yields degenerate solutions for distant language pairs such as English-Finnish. More recent CLWE methods BIBREF24 , BIBREF30 focus on mitigating this robustness issue. However, they still rely on one critical assumption which leads them to degraded performance for distant language pairs: they assume approximate isomorphism BIBREF49 , BIBREF48 between monolingual embedding spaces to learn the initial seed dictionary. In other words, they assume very similar geometric constellations between two monolingual spaces: due to the Zipfian phenomena in language BIBREF50 such near-isomorphism can be satisfied only for similar languages and for similar domains used for training monolingual vectors. This property is reflected in the results reported in Table 3 , the number of unsuccessful setups in Table 4 , as well as later in Figure 4 .
For instance, the largest number of unsuccessful BLI setups with the unsupervised model is reported for Korean, Thai (a tonal language), and Basque (a language isolate): their morphological and genealogical properties are furthest away from other languages in our comparison. A substantial number of unsuccessful setups is also observed with the other two language outliers from our set (see Table 2 again), Georgian and Indonesian, as well as with morphologically-rich languages such as Estonian or Turkish.
One setting in which fully unsupervised methods did show impressive results in prior work is similar language pairs. However, even in these settings, when the comparison to the weakly supervised full+sl+sym is completely fair (i.e., the same components C2 and C3 are used for both), unsupervised still falls short of full+sl+sym. These results for three source languages are summarized in Figure 3 . What is more, one could argue that we do not need unsupervised CLWEs for similar languages in the first place: we can harvest cheap supervision here, e.g., cognates. The main motivation behind unsupervised approaches is to support dissimilar and resource-poor language pairs for which supervision cannot be guaranteed.
Domain Differences. Finally, we also verify that unsupervised CLWEs still cannot account for domain differences when training monolingual vectors. We rely on the probing test of sogaard2018on: 300-dim fastText vectors are trained on 1.1M sentences on three corpora: 1) EuroParl.v7 BIBREF51 (parliamentary proceedings); 2) Wikipedia BIBREF52 , and 3) EMEA BIBREF53 (medical), and BLI evaluation for three language pairs is conducted on standard MUSE BLI test sets BIBREF23 . The results, summarized in Figure 4 , reveal that unsupervised methods are able to yield a good solution only when there is no domain mismatch and for the pair with two most similar languages (English-Spanish), again questioning their robustness and portability to truly low-resource and more challenging setups. Weakly supervised methods ( $|D_0|=500$ or $D_0$ seeded with identical strings), in contrast, yield good solutions for all setups.
Further Discussion and Conclusion
The superiority of weakly supervised methods (e.g., full+sl+sym) over unsupervised methods is especially pronounced for distant and typologically heterogeneous language pairs. However, our study also indicates that even carefully engineered projection-based methods with some seed supervision yield lower absolute performance for such pairs. While we have witnessed the proliferation of fully unsupervised CLWE models recently, some fundamental questions still remain. For instance, the underlying assumption of all projection-based methods (both supervised and unsupervised) is the topological similarity between monolingual spaces, which is why standard simple linear projections result in lower absolute BLI scores for distant pairs (see Table 4 and results in the supplemental material). Unsupervised approaches even exploit the assumption twice as their seed extraction is fully based on the topological similarity.
Future work should move beyond the restrictive assumption by exploring new methods that can, e.g., 1) increase the isomorphism between monolingual spaces BIBREF54 by distinguishing between language-specific and language-pair-invariant subspaces; 2) learn effective non-linear or multiple local projections between monolingual spaces similar to the preliminary work of nakashole2018norma; 3) similar to vulic2016on and Lubin:2019naacl “denoisify” seed lexicons during the self-learning procedure. For instance, keeping only mutual/symmetric nearest neighbour as in full+sl+sym can be seen as a form of rudimentary denoisifying: it is indicative to see that the best overall performance in this work is reported with that model configuration.
Further, the most important contributions of unsupervised CLWE models are, in fact, the improved and more robust self-learning procedures (component C2) and technical enhancements (component C3). In this work we have demonstrated that these components can be equally applied to weakly supervised approaches: starting from a set of only several hundred pairs, they can guarantee consistently improved performance across the board. As there is still no clear-cut use case scenario for unsupervised CLWEs, instead of “going fully unsupervised”, one pragmatic approach to widening the scope of CLWE learning and its application might be to invest more effort into extracting at least some seed supervision for a variety of language pairs BIBREF22 . This finding aligns well with the ongoing initiatives of the PanLex project BIBREF44 and the ASJP database BIBREF56 , which aim to collate at least some translation pairs in most of the world’s languages.
Finally, this paper demonstrates that, in order to enable fair comparisons, future work on CLWEs should focus on evaluating the CLWE methods' constituent components (e.g, components C1-C3 from this work) instead of full-blown composite systems directly. One goal of the paper is to acknowledge that the work on fully unsupervised CLWE methods has indeed advanced state-of-the-art in cross-lingual word representation learning by offering new solutions also to weakly supervised CLWE methods. However, the robustness problems are still prominent with fully unsupervised CLWEs, and future work should invest more time and effort into developing more robust and more effective methods, e.g., by reaching beyond projection-based methods towards joint approaches BIBREF16 , BIBREF57 .
Acknowledgments
This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). The work of Goran Glavaš is supported by the Baden-Württemberg Stiftung (AGREE grant of the Eliteprogramm). Roi Reichart is partially funded by ISF personal grants No. 1625/18. We thank the three anonymous reviewers for their encouraging comments and suggestions. | we use mean average precision (MAP) as the main evaluation metric |
896e99d7f8f957f6217185ff787e94f84c136087 | 896e99d7f8f957f6217185ff787e94f84c136087_0 | Q: What methods were used for unsupervised CLWE?
Text: Introduction and Motivation
The wide use and success of monolingual word embeddings in NLP tasks BIBREF0 , BIBREF1 has inspired further research focus on the induction of cross-lingual word embeddings (CLWEs). CLWE methods learn a shared cross-lingual word vector space where words with similar meanings obtain similar vectors regardless of their actual language. CLWEs benefit cross-lingual NLP, enabling multilingual modeling of meaning and supporting cross-lingual transfer for downstream tasks and resource-lean languages. CLWEs provide invaluable cross-lingual knowledge for, inter alia, bilingual lexicon induction BIBREF2 , BIBREF3 , information retrieval BIBREF4 , BIBREF5 , machine translation BIBREF6 , BIBREF7 , document classification BIBREF8 , cross-lingual plagiarism detection BIBREF9 , domain adaptation BIBREF10 , cross-lingual POS tagging BIBREF11 , BIBREF12 , and cross-lingual dependency parsing BIBREF13 , BIBREF14 .
The landscape of CLWE methods has recently been dominated by the so-called projection-based methods BIBREF15 , BIBREF16 , BIBREF17 . They align two monolingual embedding spaces by learning a projection/mapping based on a training dictionary of translation pairs. Besides their simple conceptual design and competitive performance, their popularity originates from the fact that they rely on rather weak cross-lingual supervision. Originally, the seed dictionaries typically spanned several thousand word pairs BIBREF15 , BIBREF18 , BIBREF19 , but more recent work has shown that CLWEs can be induced with even weaker supervision from small dictionaries spanning several hundred pairs BIBREF20 , identical strings BIBREF21 , or even only shared numerals BIBREF22 .
Taking the idea of reducing cross-lingual supervision to the extreme, the latest CLWE developments almost exclusively focus on fully unsupervised approaches BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 , BIBREF30 : they fully abandon any source of (even weak) supervision and extract the initial seed dictionary by exploiting topological similarities between pre-trained monolingual embedding spaces. Their modus operandi can roughly be described by three main components: C1) unsupervised extraction of a seed dictionary; C2) a self-learning procedure that iteratively refines the dictionary to learn projections of increasingly higher quality; and C3) a set of preprocessing and postprocessing steps (e.g., unit length normalization, mean centering, (de)whitening) BIBREF31 that make the entire learning process more robust.
The induction of fully unsupervised CLWEs is an inherently interesting research topic per se. Nonetheless, the main practical motivation for developing such approaches in the first place is to facilitate the construction of multilingual NLP tools and widen the access to language technology for resource-poor languages and language pairs. However, the first attempts at fully unsupervised CLWE induction failed exactly for these use cases, as shown by sogaard2018on. Therefore, the follow-up work aimed to improve the robustness of unsupervised CLWE induction by introducing more robust self-learning procedures BIBREF24 , BIBREF32 . Besides increased robustness, recent work claims that fully unsupervised projection-based CLWEs can even match or surpass their supervised counterparts BIBREF23 , BIBREF24 , BIBREF27 , BIBREF33 , BIBREF34 .
In this paper, we critically examine these claims on robustness and improved performance of unsupervised CLWEs by running a large-scale evaluation in the bilingual lexicon induction (BLI) task on 15 languages (i.e., 210 language pairs, see Table 2 in § "Experimental Setup" ). The languages were selected to represent different language families and morphological types, as we argue that fully unsupervised CLWEs have been designed to support exactly these setups. However, we show that even the most robust unsupervised CLWE method BIBREF24 still fails for a large number of language pairs: 87/210 BLI setups are unsuccessful, yielding (near-)zero BLI performance. Further, even when the unsupervised method succeeds, it is because the components C2 (self-learning) and C3 (pre-/post-processing) can mitigate the undesired effects of noisy seed lexicon extraction. We then demonstrate that the combination of C2 and C3 with a small provided seed dictionary (e.g., 500 or 1K pairs) outscores the unsupervised method in all cases, often with a huge margin, and does not fail for any language pair. Furthermore, we show that the most robust unsupervised CLWE approach still fails completely when it relies on monolingual word vectors trained on domain-dissimilar corpora. We also empirically verify that unsupervised approaches cannot outperform weakly supervised approaches even for closely related languages (e.g., Swedish–Danish, Spanish–Catalan).
While the “no supervision at all” premise behind fully unsupervised CLWE methods is indeed seductive, our study strongly suggests that future research efforts should revisit the main motivation behind these methods and focus on designing even more robust solutions, given their current inability to support a wide spectrum of language pairs. In hope of boosting induction of CLWEs for more diverse and distant language pairs, we make all 210 training and test dictionaries used in this work publicly available at: https://github.com/ivulic/panlex-bli.
Methodology
We now dissect a general framework for unsupervised CLWE learning, and show that the “bag of tricks of the trade” used to increase their robustness (which often slips under the radar) can be equally applied to (weakly) supervised projection-based approaches, leading to their fair(er) comparison.
Projection-Based CLWE Approaches
In short, projection-based CLWE methods learn to (linearly) align independently trained monolingual spaces $\mathbf{X}$ and $\mathbf{Z}$, using a word translation dictionary $D_0$ to guide the alignment process. Let $\mathbf{X}_D \subset \mathbf{X}$ and $\mathbf{Z}_D \subset \mathbf{Z}$ be the row-aligned subsets of the monolingual spaces containing vectors of aligned words from $D_0$. Alignment matrices $\mathbf{X}_D$ and $\mathbf{Z}_D$ are then used to learn orthogonal transformations $\mathbf{W}_x$ and $\mathbf{W}_z$ that define the joint bilingual space $\mathbf{Y} = \mathbf{X}\mathbf{W}_x \cup \mathbf{Z}\mathbf{W}_z$. While supervised projection-based CLWE models learn the mapping using a provided external (clean) dictionary $D_0$, their unsupervised counterparts automatically induce the seed dictionary in an unsupervised way (C1) and then refine it in an iterative fashion (C2).
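For concreteness, the supervised variant of this mapping step can be sketched as below; the word-to-index structures and function name are illustrative rather than taken from any specific toolkit.

```python
import numpy as np

def align_with_dictionary(X, Z, idx_x, idx_z, dictionary):
    """Sketch: build row-aligned X_D, Z_D from a word translation
    dictionary and learn orthogonal maps Wx, Wz via SVD.
    idx_x, idx_z: dicts mapping words to row indices in X and Z."""
    pairs = [(idx_x[s], idx_z[t]) for s, t in dictionary
             if s in idx_x and t in idx_z]
    src, trg = zip(*pairs)
    X_D, Z_D = X[list(src)], Z[list(trg)]
    U, _, Vt = np.linalg.svd(X_D.T @ Z_D)
    return U, Vt.T   # Wx, Wz: rows of X @ Wx and Z @ Wz share one space
```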
Unsupervised CLWEs. These methods first induce a seed dictionary $D^{(1)}$ leveraging only two unaligned monolingual spaces (C1). While the algorithms for unsupervised seed dictionary induction differ, they all strongly rely on the assumption of similar topological structure between the two pretrained monolingual spaces. Once the seed dictionary is obtained, the two-step iterative self-learning procedure (C2) takes place: 1) a dictionary $D^{(k)}$ is first used to learn the joint space $\mathbf {Y}^{(k)} = \mathbf {X{W}}^{(k)}_x \cup \mathbf {Z{W}}^{(k)}_z$ ; 2) the nearest neighbours in $\mathbf {Y}^{(k)}$ then form the new dictionary $D^{(k+1)}$ . We illustrate the general structure in Figure 1 .
A recent empirical survey paper BIBREF17 has compared a variety of latest unsupervised CLWE methods BIBREF23 , BIBREF27 , BIBREF33 , BIBREF24 in several downstream tasks (e.g., BLI, cross-lingual information retrieval, document classification). The results of their study indicate that the vecmap model of artetxe2018robust is by far the most robust and best performing unsupervised CLWE model. For the actual results and analyses, we refer the interested reader to the original paper of glavas2019howto. Another recent evaluation paper BIBREF35 as well as our own preliminary BLI tests (not shown for brevity) have further verified their findings. We thus focus on vecmap in our analyses, and base the following description of the components C1-C3 on that model.
Three Key Components
C1. Seed Lexicon Extraction. vecmap induces the initial seed dictionary using the following heuristic: monolingual similarity distributions for words with similar meaning will be similar across languages. The monolingual similarity distributions for the two languages are given as rows (or columns; the matrices are symmetric) of $\mathbf {M}_x = \mathbf {X}\mathbf {X}^T$ and $\mathbf {M}_z = \mathbf {Z}\mathbf {Z}^T$ . For the distributions of similarity scores to be comparable, the values in each row of $\mathbf {M}_x$ and $\mathbf {M}_z$ are first sorted. The initial dictionary $D^{(1)}$ is finally obtained by searching for mutual nearest neighbours between the rows of $\sqrt{\mathbf {M}_x}$ and of $\sqrt{\mathbf {M}_z}$ .
C2. Self-Learning. Not counting the preprocessing and postprocessing steps (component C3), self-learning then iteratively repeats two steps:
1) Let $\mathbf{D}^{(k)}$ be the binary matrix indicating the aligned words in the dictionary $D^{(k)}$. The orthogonal transformation matrices are then obtained as $\mathbf{W}^{(k)}_x = \mathbf{U}$ and $\mathbf{W}^{(k)}_z = \mathbf{V}$, where $\mathbf{U}\mathbf{\Sigma}\mathbf{V}^T$ is the singular value decomposition of the matrix $\mathbf{X}^T\mathbf{D}^{(k)}\mathbf{Z}$. The cross-lingual space of the $k$-th iteration is then $\mathbf{Y}^{(k)} = \mathbf{X}\mathbf{W}^{(k)}_x \cup \mathbf{Z}\mathbf{W}^{(k)}_z$.
2) The new dictionary $D^{(k+1)}$ is then built by identifying nearest neighbours in $\mathbf {Y}^{(k)}$ . These can be easily extracted from the matrix $\mathbf {P} = \mathbf {X}\mathbf {W}^{(k)}_x( \mathbf {Z}\mathbf {W}^{(k)}_z)^T$ . All nearest neighbours can be used, or additional symmetry constraints can be imposed to extract only mutual nearest neighbours: all pairs of indices ( $i, j$ ) for which $\mathbf {P}_{ij}$ is the largest value both in row $i$ and column $j$ .
The above procedure, however, often converges to poor local optima. To remedy this, the second step (i.e., dictionary induction) is extended with techniques that make self-learning more robust. First, the vocabularies of $\mathbf {X}$ and $\mathbf {Z}$ are cut to the top $k$ most frequent words. Second, similarity scores in $\mathbf {P}$ are kept with probability $p$, and set to zero otherwise. This dropout allows for a wider exploration of possible word pairs in the dictionary and contributes to escaping poor local optima given the noisy seed lexicon in the first iterations.
C3. Preprocessing and Postprocessing Steps. While iteratively learning orthogonal transformations $\mathbf {W}_{x}$ and $\mathbf {W}_{z}$ for $\mathbf {X}$ and $\mathbf {Z}$ is the central step of unsupervised projection-based CLWE methods, preprocessing and postprocessing techniques are additionally applied before and after the transformation. While such techniques are often overlooked in model comparisons, they may have a great impact on the model's final performance, as we validate in § "Results and Discussion" . We briefly summarize two pre-processing (S1 and S2) and post-processing (S3 and S4) steps used in our evaluation, originating from the framework of artetxe2018generalizing.
S1) Normalization and mean centering. We first apply unit length normalization: all vectors in $\mathbf {X}$ and $\mathbf {Z}$ are normalized to have a unit Euclidean norm. Following that, $\mathbf {X}$ and $\mathbf {Z}$ are mean centered dimension-wise and then again length-normalized.
S2) Whitening. ZCA whitening BIBREF36 is applied on (S1-processed) $\mathbf {X}$ and $\mathbf {Z}$ : it transforms the matrices such that each dimension has unit variance and that the dimensions are uncorrelated. Intuitively, the vector spaces are easier to align along directions of high variance.
S3) Dewhitening. A transformation inverse to S2: for improved performance it is important to restore the variance information after the projection, if whitening was applied in S2 BIBREF31 .
S4) Symmetric re-weighting. This step attempts to further align the embeddings in the cross-lingual embedding space by measuring how well a dimension in the space correlates across languages for the current iteration dictionary $D^{(k)}$ . The best results are obtained when re-weighting is neutral to the projection direction, that is, when it is applied symmetrically in both languages.
In the actual implementation S1 is applied only once, before self-learning. S2, S3 and S4 are applied in each self-learning iteration.
Model Configurations. Note that C2 and C3 can be equally used on top of any (provided) seed lexicon (i.e., $D^{(1)}$ := $D_0$ ) to enable weakly supervised learning, as we propose here. In fact, the variations of the three key components, C1) seed lexicon, C2) self-learning, and C3) preprocessing and postprocessing, construct various model configurations which can be analyzed to probe the importance of each component in the CLWE induction process. A selection of representative configurations evaluated later in § "Results and Discussion" is summarized in Table 1 .
Experimental Setup
Evaluation Task. Our task is bilingual lexicon induction (BLI). It has become the de facto standard evaluation for projection-based CLWEs BIBREF16 , BIBREF17 . In short, after a shared CLWE space has been induced, the task is to retrieve target language translations for a test set of source language words. Its lightweight nature allows us to conduct a comprehensive evaluation across a large number of language pairs. Since BLI is cast as a ranking task, following glavas2019howto we use mean average precision (MAP) as the main evaluation metric: in our BLI setup with only one correct translation for each “query” word, MAP is equal to mean reciprocal rank (MRR).
(Selection of) Language Pairs. Our selection of test languages is guided by the following goals: a) following recent initiatives in other NLP research (e.g., for language modeling) BIBREF39 , BIBREF40 , we aim to ensure the coverage of different genealogical and typological language properties, and b) we aim to analyze a large set of language pairs and offer new evaluation data which extends and surpasses other work in the CLWE literature. These two properties will facilitate analyses between (dis)similar language pairs and offer a comprehensive set of evaluation setups that test the robustness and portability of fully unsupervised CLWEs. The final list of 15 diverse test languages is provided in Table 2 , and includes samples from different language types and families. We run BLI evaluations for all language pairs in both directions, for a total of 15 $\times $ 14=210 BLI setups.
Monolingual Embeddings. We use the 300-dim vectors of Grave:2018lrec for all 15 languages, pretrained on Common Crawl and Wikipedia with fastText BIBREF41 . We trim all vocabularies to the 200K most frequent words.
Training and Test Dictionaries. They are derived from PanLex BIBREF43 , BIBREF44 , which was used in prior work on cross-lingual word embeddings BIBREF45 , BIBREF46 . PanLex currently spans around 1,300 language varieties with over 12M expressions: it offers some support and supervision also for low-resource language pairs BIBREF47 . For each source language ( $L_1$ ), we automatically translate their vocabulary words (if they are present in PanLex) to all 14 target ( $L_2$ ) languages. To ensure the reliability of the translation pairs, we retain only unigrams found in the vocabularies of the respective $L_2$ monolingual spaces which scored above a PanLex-predefined threshold.
As in prior work BIBREF23 , BIBREF17 , we then reserve the 5K pairs created from the more frequent $L_1$ words for training, while the next 2K pairs are used for test. Smaller training dictionaries (1K and 500 pairs) are created by again selecting pairs comprising the most frequent $L_1$ words.
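In code, the split is a simple slice over pairs that are already ordered by source-word frequency; variable names in the sketch below are our own.

```python
def split_dictionaries(pairs_by_frequency):
    """Sketch: pairs_by_frequency is a list of (L1 word, L2 word) pairs
    ordered from most to least frequent L1 word."""
    train_5k = pairs_by_frequency[:5000]
    test_2k = pairs_by_frequency[5000:7000]
    train_1k, train_500 = train_5k[:1000], train_5k[:500]
    return train_5k, train_1k, train_500, test_2k
```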
Training Setup. In all experiments, we set the hyper-parameters to values that were tuned in prior research. When extracting the unsupervised seed lexicon, the 4K most frequent words of each language are used; self-learning operates on the 20K most frequent words of each language; with dropout the keep probability $p$ is 0.1; CSLS with $k=10$ nearest neighbors BIBREF24 .
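For reference, CSLS retrieval with $k=10$ can be sketched as follows, assuming the projected matrices are length-normalized so that dot products are cosine similarities; this is a simplified illustration rather than the exact implementation used in the experiments.

```python
import numpy as np

def csls_retrieve(Px, Pz, k=10):
    """Sketch of CSLS retrieval: penalize hub words by the mean cosine
    similarity of their k nearest cross-lingual neighbours."""
    sims = Px @ Pz.T                                    # cosine similarities
    r_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1)  # hubness of sources
    r_trg = np.sort(sims, axis=0)[-k:, :].mean(axis=0)  # hubness of targets
    csls = 2 * sims - r_src[:, None] - r_trg[None, :]
    return csls.argmax(axis=1)   # best target index for each source word
```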
Again, Table 1 lists the main model configurations in our comparison. For the fully unsupervised model we always report the best performing configuration after probing different self-learning strategies (i.e., +sl, +sl+nod, and +sl+sym are tested). The results for unsupervised are always reported as averages over 5 restarts: this means that with unsupervised we count BLI setups as unsuccessful only if the results are close to zero in all 5/5 runs. orthg-super is the standard supervised model with orthogonal projections from prior work BIBREF21 , BIBREF17 .
Results and Discussion
Main BLI results averaged over each source language ( $L_1$ ) are provided in Table 3 and Table 4 . We now summarize and discuss the main findings across several dimensions of comparison.
Unsupervised vs. (Weakly) Supervised. First, when exactly the same components C2 and C3 are used, unsupervised is unable to outperform a (weakly) supervised full+sl+sym variant, and the gap in final performance is often substantial. In fact, full+sl+sym and full+sl+nod outperform the best unsupervised for all 210/210 BLI setups: we observe the same phenomenon with varying dictionary sizes, that is, it equally holds when we seed self-learning with 5K, 1K, and 500 translation pairs, see also Figure 2 . This also suggests that the main reason why unsupervised approaches were considered on-par with supervised approaches in prior work BIBREF23 , BIBREF24 is because they were not compared under fair circumstances: while unsupervised relied heavily on the components C2 and C3, these were omitted when running supervised baselines. Our unbiased comparison reveals that there is a huge gap even when supervised projection-based approaches consume only several hundred translation pairs to initiate self-learning.
Are Unsupervised CLWEs Robust? The results also indicate that, contrary to the beliefs established by very recent work BIBREF24 , BIBREF30 , fully unsupervised approaches are still prone to getting stuck in local optima, and still suffer from robustness issues when dealing with distant language pairs: 87 out of 210 BLI setups ( $=41.4\%$ ) result in (near-)zero BLI performance, see also Table 4 . At the same time, weakly supervised methods with a seed lexicon of 1k or 500 pairs do not suffer from the robustness problem and always converge to a good solution, as also illustrated by the results reported in Table 5 .
How Important are Preprocessing and Postprocessing? The comparisons between orthg-super (and orthg+sl+sym) on the one hand, and full-super (and full+sl+sym) on the other hand clearly indicate that the component C3 plays a substantial role in effective CLWE learning. full-super, which employs all steps S1-S4 (see § "Methodology" ), outperforms orthg-super in 208/210 setups with $|D_0|$ =5k and in 210/210 setups with $|D_0|$ =1k. Similarly, full+sl+sym is better than orthg+sl+sym in 210/210 setups (both for $|D_0|$ =1k,5k). The scores also indicate that dropout with self-learning is useful only when we work with noisy unsupervised seed lexicons: full+sl+nod and full+sl+sym without dropout consistently outperform full+sl across the board.
How Important is (Robust) Self-Learning? We note that the best self-learning method is often useful even when $|D_0|=5k$ (i.e., full+sl+sym is better than full-super in 164/210 setups). However, the importance of robust self-learning gets more pronounced as we decrease the size of $D_0$ : full+sl+sym is better than full-super in 210/210 setups when $|D_0|=500$ or $|D_0|=1,000$ . The gap between the two models, as shown in Figure 2 , increases dramatically in favor of full+sl+sym as we decrease $|D_0|$ .
Again, just comparing full-super and unsupervised in Figure 2 might give a false impression that fully unsupervised CLWE methods can match their supervised counterparts, but the comparison to full+sl+sym reveals the true extent of performance drop when we abandon even weak supervision. The scores also reveal that the choice of self-learning (C2) does matter: all best performing BLI runs with $|D_0|=1k$ are obtained by two configs with self-learning, and full+sl+sym is the best configuration for 177/210 setups (see Table 4 ).
Language Pairs. As suggested before by sogaard2018on and further verified by glavas2019howto and doval2019onthe, the language pair at hand can have a huge impact on CLWE induction: the adversarial method of conneau2018word often gets stuck in poor local optima and yields degenerate solutions for distant language pairs such as English-Finnish. More recent CLWE methods BIBREF24 , BIBREF30 focus on mitigating this robustness issue. However, they still rely on one critical assumption which leads them to degraded performance for distant language pairs: they assume approximate isomorphism BIBREF49 , BIBREF48 between monolingual embedding spaces to learn the initial seed dictionary. In other words, they assume very similar geometric constellations between two monolingual spaces: due to the Zipfian phenomena in language BIBREF50 such near-isomorphism can be satisfied only for similar languages and for similar domains used for training monolingual vectors. This property is reflected in the results reported in Table 3 , the number of unsuccessful setups in Table 4 , as well as later in Figure 4 .
For instance, the largest number of unsuccessful BLI setups with the unsupervised model is reported for Korean, Thai (a tonal language), and Basque (a language isolate): their morphological and genealogical properties are furthest away from other languages in our comparison. A substantial number of unsuccessful setups is also observed with the other two language outliers from our set (see Table 2 again), Georgian and Indonesian, as well as with morphologically-rich languages such as Estonian or Turkish.
One setting in which fully unsupervised methods did show impressive results in prior work is similar language pairs. However, even in these settings, when the comparison to the weakly supervised full+sl+sym is completely fair (i.e., the same components C2 and C3 are used for both), unsupervised still falls short of full+sl+sym. These results for three source languages are summarized in Figure 3 . What is more, one could argue that we do not need unsupervised CLWEs for similar languages in the first place: we can harvest cheap supervision here, e.g., cognates. The main motivation behind unsupervised approaches is to support dissimilar and resource-poor language pairs for which supervision cannot be guaranteed.
Domain Differences. Finally, we also verify that unsupervised CLWEs still cannot account for domain differences when training monolingual vectors. We rely on the probing test of sogaard2018on: 300-dim fastText vectors are trained on 1.1M sentences on three corpora: 1) EuroParl.v7 BIBREF51 (parliamentary proceedings); 2) Wikipedia BIBREF52 , and 3) EMEA BIBREF53 (medical), and BLI evaluation for three language pairs is conducted on standard MUSE BLI test sets BIBREF23 . The results, summarized in Figure 4 , reveal that unsupervised methods are able to yield a good solution only when there is no domain mismatch and for the pair with two most similar languages (English-Spanish), again questioning their robustness and portability to truly low-resource and more challenging setups. Weakly supervised methods ( $|D_0|=500$ or $D_0$ seeded with identical strings), in contrast, yield good solutions for all setups.
Further Discussion and Conclusion
The superiority of weakly supervised methods (e.g., full+sl+sym) over unsupervised methods is especially pronounced for distant and typologically heterogeneous language pairs. However, our study also indicates that even carefully engineered projection-based methods with some seed supervision yield lower absolute performance for such pairs. While we have witnessed the proliferation of fully unsupervised CLWE models recently, some fundamental questions still remain. For instance, the underlying assumption of all projection-based methods (both supervised and unsupervised) is the topological similarity between monolingual spaces, which is why standard simple linear projections result in lower absolute BLI scores for distant pairs (see Table 4 and results in the supplemental material). Unsupervised approaches even exploit the assumption twice as their seed extraction is fully based on the topological similarity.
Future work should move beyond the restrictive assumption by exploring new methods that can, e.g., 1) increase the isomorphism between monolingual spaces BIBREF54 by distinguishing between language-specific and language-pair-invariant subspaces; 2) learn effective non-linear or multiple local projections between monolingual spaces similar to the preliminary work of nakashole2018norma; 3) similar to vulic2016on and Lubin:2019naacl “denoisify” seed lexicons during the self-learning procedure. For instance, keeping only mutual/symmetric nearest neighbour as in full+sl+sym can be seen as a form of rudimentary denoisifying: it is indicative to see that the best overall performance in this work is reported with that model configuration.
Further, the most important contributions of unsupervised CLWE models are, in fact, the improved and more robust self-learning procedures (component C2) and technical enhancements (component C3). In this work we have demonstrated that these components can be equally applied to weakly supervised approaches: starting from a set of only several hundred pairs, they can guarantee consistently improved performance across the board. As there is still no clear-cut use case scenario for unsupervised CLWEs, instead of “going fully unsupervised”, one pragmatic approach to widening the scope of CLWE learning and its application might be to invest more effort into extracting at least some seed supervision for a variety of language pairs BIBREF22 . This finding aligns well with the ongoing initiatives of the PanLex project BIBREF44 and the ASJP database BIBREF56 , which aim to collate at least some translation pairs in most of the world’s languages.
Finally, this paper demonstrates that, in order to enable fair comparisons, future work on CLWEs should focus on evaluating the CLWE methods' constituent components (e.g, components C1-C3 from this work) instead of full-blown composite systems directly. One goal of the paper is to acknowledge that the work on fully unsupervised CLWE methods has indeed advanced state-of-the-art in cross-lingual word representation learning by offering new solutions also to weakly supervised CLWE methods. However, the robustness problems are still prominent with fully unsupervised CLWEs, and future work should invest more time and effort into developing more robust and more effective methods, e.g., by reaching beyond projection-based methods towards joint approaches BIBREF16 , BIBREF57 .
Acknowledgments
This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909). The work of Goran Glavaš is supported by the Baden-Württemberg Stiftung (AGREE grant of the Eliteprogramm). Roi Reichart is partially funded by ISF personal grants No. 1625/18. We thank the three anonymous reviewers for their encouraging comments and suggestions. | Unsupervised CLWEs. These methods first induce a seed dictionary $D^{(1)}$ leveraging only two unaligned monolingual spaces (C1). While the algorithms for unsupervised seed dictionary induction differ, they all strongly rely on the assumption of similar topological structure between the two pretrained monolingual spaces. Once the seed dictionary is obtained, the two-step iterative self-learning procedure (C2) takes place: 1) a dictionary $D^{(k)}$ is first used to learn the joint space $\mathbf {Y}^{(k)} = \mathbf {X{W}}^{(k)}_x \cup \mathbf {Z{W}}^{(k)}_z$ ; 2) the nearest neighbours in $\mathbf {Y}^{(k)}$ then form the new dictionary $D^{(k+1)}$ . We illustrate the general structure in Figure 1 . |
63c0128935446e26eacc7418edbd9f50cba74455 | 63c0128935446e26eacc7418edbd9f50cba74455_0 | Q: What is the size of the released dataset?
Text: Introduction
This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/
The scientific literature is growing at a rapid rate BIBREF0 . To make sense of this flood of literature, for example, to extract cancer pathways BIBREF1 or find geological features BIBREF2 , increasingly requires the application of natural language processing. Given the diversity of information and its constant flux, the use of unsupervised or distantly supervised techniques are of interest BIBREF3 . In this paper, we investigate one such unsupervised method, namely, Open Information Extraction (OIE) BIBREF4 . OIE is the task of the unsupervised creation of structured information from text. OIE is often used as a starting point for a number of downstream tasks including knowledge base construction, relation extraction, and question answering BIBREF5 .
While OIE has been applied to the scientific literature before BIBREF6 , we have not found a systematic evaluation of OIE as applied to scientific publications. The most recent evaluations of OIE extraction tools BIBREF7 , BIBREF8 have instead looked at the performance of these tools on traditional NLP information sources (i.e. encyclopedic and news-wire text). Indeed, as BIBREF8 noted, there is little work on the evaluation of OIE systems. Thus, the goal of this paper is to evaluate the performance of the state of the art in OIE systems on scientific text.
Specifically, we aim to test two hypotheses:
Additionally, we seek to gain insight into the value of unsupervised approaches to information extraction and also provide information useful to implementors of these systems. We note that our evaluation differs from existing OIE evaluations in that we use crowd-sourcing annotations instead of expert annotators. This allows for a larger number of annotators to be used. All of our data, annotations and analyses are made openly available.
The rest of the paper is organized as follows. We begin with a discussion of existing evaluation approaches and then describe the OIE systems that we evaluated. We then proceed to describe the datasets used in the evaluation and the annotation process that was employed. This is followed by the results of the evaluation including an error analysis. Finally, we conclude.
Existing Evaluation Approaches
OIE systems analyze sentences and emit relations between one predicate and two or more arguments (e.g. Washington :: was :: president). The arguments and predicates are not fixed to a given domain. (Note, that throughout this paper we use the word `triple” to refer interchangeably to binary relations.) Existing evaluation approaches for OIE systems have primarily taken a ground truth-based approach. Human annotators analyze sentences and determine correct relations to be extracted. Systems are then evaluated with respect to the overlap or similarity of their extractions to the ground truth annotations, allowing the standard metrics of precision and recall to be reported.
This seems sensible but is actually problematic because of different but equivalent representations of the information in an article. For example, consider the sentence “The patient was treated with Emtricitabine, Etravirine, and Darunavir”. One possible extraction is:
(The patient :: was treated with :: Emtricitabine, Etravirine, and Darunavir)
Another possible extraction is:
(The patient :: was treated with :: Emtricitabine)
(The patient :: was treated with :: Etravirine)
(The patient :: was treated with :: Darunavir)
Neither of these is wrong, but by choosing one approach or the other a pre-constructed gold set will falsely penalize a system that uses the other approach.
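To illustrate why this matters for scoring, the heuristic below expands a triple with a coordinated object into one triple per conjunct so that the two representation styles can be compared on equal footing; it is a toy illustration, not the behaviour of any of the evaluated systems.

```python
def expand_coordination(triple):
    """Toy heuristic: split a coordinated object into one triple per
    conjunct ("A, B, and C" -> three triples)."""
    subj, pred, obj = triple
    conjuncts = [c.strip() for part in obj.split(",")
                 for c in part.split(" and ") if c.strip()]
    return [(subj, pred, c) for c in conjuncts]

print(expand_coordination(("The patient", "was treated with",
                           "Emtricitabine, Etravirine, and Darunavir")))
# [('The patient', 'was treated with', 'Emtricitabine'),
#  ('The patient', 'was treated with', 'Etravirine'),
#  ('The patient', 'was treated with', 'Darunavir')]
```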
From such evaluations and their own cross dataset evaluation, BIBREF8 list the following common errors committed by OIE systems:
In our evaluation, we take a different approach. We do not define ground truth relation extractions from the sentences in advance. Instead, we manually judge the correctness of each extraction after the fact. We feel that this is the crux of the information extraction challenge. Is what is being extracted correct or not? This approach enables us to consider many more relations through the use of a crowd-sourced annotation process. Our evaluation approach is similar to the qualitative analysis performed in BIBREF8 and the evaluation performed in BIBREF7 . However, our evaluation is able to use more judges (5 instead of 2) because we apply crowd sourcing. For our labelling instructions, we adapted those used by BIBREF7 to the crowd sourcing setting.
As previously noted existing evaluations have also only looked at encyclopedic or newspaper corpora. Several systems (e.g. BIBREF4 , BIBREF9 ) have looked at text from the web as well, however, as far as we know, none have specifically looked at evaluation for scientific and medical text.
Systems
We evaluate two OIE systems (i.e. extractors). The first, OpenIE 4 BIBREF5 , descends from two popular OIE systems OLLIE BIBREF10 and Reverb BIBREF10 . We view this as a baseline system. The second was MinIE BIBREF7 , which is reported as performing better than OLLIE, ClauseIE BIBREF9 and Stanford OIE BIBREF9 . MinIE focuses on the notion of minimization - producing compact extractions from sentences. In our experience using OIE on scientific text, we have found that these systems often produce overly specific extractions that do not provide the redundancy useful for downstream tasks. Hence, we thought this was a useful package to explore.
We note that both OpenIE 4 and MiniIE support relation extractions that go beyond binary tuples, supporting the extraction of n-ary relations. We note that the most recent version of Open IE (version 5) is focused on n-ary relations. For ease of judgement, we focused on binary relations. Additionally, both systems support the detection of negative relations.
In terms of settings, we used the off the shelf settings for OpenIE 4. For MinIE, we used their “safe mode" option, which uses slightly more aggressive minimization than the standard setting. In the recent evaluation of MiniIE, this setting performed roughly on par with the default options BIBREF7 . Driver code showing how we ran each system is available.
Datasets
We used two different data sources in our evaluation. The first dataset (WIKI) was the same set of 200 sentences from Wikipedia used in BIBREF7 . These sentences were randomly selected by the creators of the dataset. This choice allows for a rough comparison between our results and theirs.
The second dataset (SCI) was a set of 220 sentences from the scientific literature. We sourced the sentences from the OA-STM corpus. This corpus is derived from the 10 most published-in disciplines. It includes 11 articles each from the following domains: agriculture, astronomy, biology, chemistry, computer science, earth science, engineering, materials science, math, and medicine. The article text is made freely available and the corpus provides both an XML and a simple text version of each article.
We randomly selected 2 sentences with more than two words from each paper using the simple text version of the paper. We maintained the id of the source article and the line number for each sentence.
Annotation Process
We employed the following annotation process. Each OIE extractor was applied to both datasets with the settings described above. This resulted in the generation of triples for 199 of the 200 WIKI sentences and 206 of the 220 SCI sentences. That is, there were some sentences from which no triples were extracted; we discuss these sentences later. In total, 2247 triples were extracted.
The sentences and their corresponding triples were then divided. Each task contained 10 sentences and all of their unique corresponding triples from a particular OIE system. Half of the ten sentences were randomly selected from SCI and the other half were randomly selected from WIKI. Crowd workers were asked to mark whether a triple was correct, namely, whether the triple reflected a consequence of the sentence. Examples of correct and incorrect triples were provided. Complete labelling instructions and the presentation of the HITs can be found with the dataset. All triples were labelled by at least 5 workers.
Note that, to ensure every HIT had 10 sentences, some sentences were duplicated. Furthermore, we did not mandate that all workers complete all HITs.
We followed recommended practices for the use of crowd sourcing in linguistics BIBREF11 . We used Amazon Mechanical Turk as a means to present the sentences and their corresponding triples to a crowd for annotation. Within Mechanical Turk, tasks are called Human Intelligence Tasks (HITs). To begin, we collected a small set of sentences and triples with known correct answers. We did this by creating a series of internal HITs and loading them into the Mechanical Turk development environment called the Mechanical Turk Sandbox. The HITs were visible to a trusted group of colleagues who were asked to complete the HITs.
Having an internal team of workers attempt the HITs provided us with two valuable inputs for the eventual production HITs. First, internal users were able to provide feedback on the usability and clarity of the task. They were asked to read the instructions and let us know if anything was unclear. After taking the HITs, they could ask questions about anomalies or confusing situations they encountered, allowing us to determine whether specific types of HITs were either not appropriate for the task or needed further explanation in the instructions. In addition to the internal users' direct feedback, we were also able to use the Mechanical Turk Requester functionality to monitor how long (in minutes and seconds) it took each worker to complete each HIT. This factored into how much we decided to pay each Worker per HIT once the HITs were made available to the public.
The second significant outcome from the internal annotations was the generation of a set of `expected' correct triples. Having this set of annotations is integral to two aspects of our crowdsourcing process. First, it allows us to create a qualification HIT. A qualification HIT is a HIT that is made available to the public with the understanding that Workers will be evaluated based on how closely they match the annotations of the internal annotators. Based upon this, the Workers with the most matches are invited to work on additional tasks. Second, we are able to add the internal set of triples randomly amongst the other relations we were seeking to have annotated. This allows us to monitor the quality of individual Workers over the course of the project. Note that none of this data was used in the actual evaluation; it was only used for qualifying Workers.
We are sensitive to the issues other researchers have raised about Mechanical Turk Workers earning fair payment for their contributions BIBREF12 . We used the time estimates from our internal annotation to price the task above the US minimum wage. All workers were qualified before being issued tasks. Overall, we employed 10 crowd workers. On average, it took a worker 30 minutes to complete a HIT. In line with BIBREF13 , we monitored for potential non-performance or spam by looking for unusually long response times and consecutively submitted results. We saw no indicators of low-quality responses.
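As an illustration of this kind of monitoring, the snippet below flags workers whose median completion time is implausibly short or long; the thresholds and the (worker_id, seconds) input format are assumptions rather than the actual monitoring setup.

```python
from collections import defaultdict
from statistics import median

def flag_suspect_workers(assignments, fast_s=120, slow_s=3600):
    """Flag workers whose median HIT completion time falls outside a plausible
    range; `assignments` is a list of (worker_id, seconds) pairs."""
    times = defaultdict(list)
    for worker_id, seconds in assignments:
        times[worker_id].append(seconds)
    return [w for w, ts in times.items() if not fast_s <= median(ts) <= slow_s]
```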
Judgement Data and Inter-Annotator Agreement
In total, 11262 judgements were obtained after running the annotation process. Every triple had at least 5 judgements from different annotators. All judgement data is made available. The proportion of overall agreement between annotators on whether a triple is a consequence of the given sentence is 0.76, with a standard deviation of 0.25. We also calculated inter-annotator agreement statistics. Using Krippendorff's alpha, inter-annotator agreement was 0.44. This calculation was performed over all data and annotators, as Krippendorff's alpha is designed to account for missing data and to work across more than two annotators. Additionally, Fleiss' Kappa and Scott's pi were calculated pairwise between all annotators with overlapping ratings (i.e. raters who had rated at least one triple in common). The average Fleiss' Kappa was 0.41 and the average Scott's pi was 0.37. Using BIBREF14 as a guide, we interpret these statistics as suggesting that there is moderate agreement between annotators and that agreement is above random chance. This moderate level of agreement is to be expected, as the task itself can be difficult and requires judgement from the annotators at the margin.
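The alpha computation can be reproduced from the released judgement data along the lines of the sketch below; the toy reliability matrix is a placeholder, and in the real data there is one column per triple with missing entries wherever an annotator did not judge that triple.

```python
# pip install krippendorff
import numpy as np
import krippendorff

# One row per annotator, one column per triple:
# 1 = judged correct, 0 = judged incorrect, np.nan = not judged (toy values).
reliability_data = np.array([
    [1, 0, 1, np.nan, 1],
    [1, 0, np.nan, 1, 1],
    [1, 1, 1, 1, 0],
    [np.nan, 0, 1, 1, 1],
    [1, 0, 1, 1, np.nan],
])

alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.2f}")
```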
Table 1 shows examples of triples that were associated with higher disagreement between annotators. In the third example, for instance, annotators might have been confused by the use of a pronoun (him). Another case is the last sentence of the table, where there might be disagreement on whether the trailing prepositional phrase behind light microscope analysis should be included as part of the extracted triple.
We take the variability of judgements into account when using this data to compute the performance of the two extraction tools. Hence, to assess whether a triple correctly reflects the content from which it was extracted, we rely on unanimous positive agreement between crowd workers. That is, we label a triple as correct only if there is 100% inter-annotator agreement that it was correctly extracted.
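A minimal sketch of this labelling rule, assuming the judgements are available as (triple_id, is_correct) pairs pooled over annotators:

```python
from collections import defaultdict

def label_unanimous(judgements):
    """A triple is labelled correct only if every one of its judgements is positive."""
    votes = defaultdict(list)
    for triple_id, is_correct in judgements:
        votes[triple_id].append(bool(is_correct))
    return {triple_id: all(v) for triple_id, v in votes.items()}
```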
Experimental Results
Table 2 shows the results for the combinations of systems and data sources. The Correct Triples column contains the number of triples that are labelled as correct by all annotators. Total Triples is the total number of triples extracted by the given system over the specified data. Precision is calculated in the usual way, with Correct Triples treated as true positives. On average, 3.1 triples were extracted per sentence.
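Under that rule, precision per system/dataset combination reduces to a simple ratio; the grouping keys in the sketch below are illustrative.

```python
from collections import Counter

def precision_by_group(labels, group_of):
    """labels: triple_id -> True/False (unanimously correct or not);
    group_of: triple_id -> (system, dataset) key."""
    correct, total = Counter(), Counter()
    for triple_id, is_correct in labels.items():
        key = group_of[triple_id]
        total[key] += 1
        correct[key] += int(is_correct)
    return {key: correct[key] / total[key] for key in total}
```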
Figure 1 shows the performance of the extractors in terms of precision as inter-annotator agreement decreases. In this figure, we look only at agreement on triples where the majority agree that the triple is correct. Furthermore, to ease comparison, we only consider triples with exactly 5 judgements; this excludes 9 triples. We indicate not only the pair-wise inter-annotator agreement but also the number of annotators who have judged a triple to be correct. For example, at the 40% agreement level, at least 3 annotators have agreed that a triple is true. The figure separates the results by extractor and by data source.
We see that, as expected, the number of triples agreed to be correct grows as we relax the requirement for agreement. For example, analyzing Open IE 4's results, at the 100% agreement level we see a precision of 0.56, whereas at the 40% agreement level we see a precision of 0.78. Table 3 shows the total number of correct extractions at the three agreement levels.
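One reading of these agreement levels, with five judgements per triple, is that they correspond to minimum numbers of positive votes (5 of 5 for the 100% level, 4 of 5 for 60%, 3 of 5 for 40%). A sketch of the corresponding precision computation follows; the vote encoding is assumed rather than taken from the released data.

```python
def precision_at_level(votes_per_triple, min_positive, total_extracted):
    """Count a triple as correct once at least `min_positive` of its five
    judgements are positive, then divide by all extracted triples."""
    correct = sum(1 for votes in votes_per_triple.values()
                  if sum(votes) >= min_positive)
    return correct / total_extracted

# e.g. min_positive=5 -> 100% agreement level, 3 -> 40% agreement level
```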
Testing H1: Comparing the Performance of OIE on Scientific vs. Encyclopedic Text
From the data, we see that extractors perform better on sentences from Wikipedia (0.54 P) than on scientific text (0.34 P). Additionally, annotator agreement on whether extracted triples are correct or incorrect is higher for Wikipedia than for scientific text: 0.80 (SD 0.24) for WIKI vs. 0.72 (SD 0.25) for SCI. A similar difference in agreement is observed when only looking at triples that are considered to be correct by the majority of annotators: 0.87 (SD 0.21) for WIKI vs. 0.78 (SD 0.25) for SCI. In both cases, the difference is significant with p-values $<$ 0.01 using Welch's t-test. The differences between data sources are also seen when looking at the individual extraction tools. For instance, for Open IE 4 the precision is 0.19 higher for Wikipedia extractions than for those from scientific text. With this evidence, we reject our first hypothesis that the performance of these extractors is similar across data sources.
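The significance test here is a standard Welch's t-test over per-triple agreement scores; with SciPy it can be run as below (the score lists are placeholders for the values derived from the released judgements).

```python
from scipy import stats

wiki_agreement = [1.0, 0.8, 0.9, 0.6, 1.0]   # placeholder per-triple scores
sci_agreement = [0.7, 0.6, 1.0, 0.4, 0.8]

t_stat, p_value = stats.ttest_ind(wiki_agreement, sci_agreement, equal_var=False)
print(f"Welch's t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```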
Testing H2: Comparing the Performance of Systems
We also compare the output of the two extractors. In terms of precision, Open IE 4 performs much better across the two datasets (0.56 P vs. 0.39 P). Looking at triples considered to be correct by the majority of annotators, we see that Open IE 4 has higher inter-annotator agreement: 0.87 (SD 0.22) for Open IE 4 vs. 0.81 (SD 0.24) for MinIE. Focusing on scientific and medical text (SCI), again on triples majority-annotated as correct, Open IE 4 has higher inter-annotator agreement (Open IE 4: 0.83, SD 0.24 vs. MinIE: 0.76, SD 0.25). In both cases, the difference is significant with p-values $<$ 0.01 using Welch's t-test. This leads us to conclude that Open IE 4 produces triples that annotators are more likely to agree are correct.
MinIE provides many more correct extractions than OpenIE 4 (935 more across both datasets). The true recall of the two systems cannot be calculated with the data available, but the 40% difference in the numbers of correct extractions is strong evidence that the two systems do not have equivalent behavior.
A third indication of differences in their outputs comes from examining the complexity of the extracted relations. Open IE 4 generates longer triples on average (11.5 words vs. 8.5 words for MinIE, across all argument positions). However, Open IE 4 generates shorter relation types than MinIE (Open IE 4: 3.7 words; MinIE: 6.27 words), and the standard deviation in word length is much smaller for Open IE 4 (1 word vs. 3 words for MinIE). Overall, our conclusion is that Open IE 4 performs better than MinIE both in terms of precision and in the compactness of relation types, while not matching MinIE's recall, and thus we reject our second hypothesis.
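These length statistics can be recomputed from the extractions with a few lines of code; the (subject, relation, object) tuple format is an assumption about how the released triples are structured.

```python
from statistics import mean, stdev

def length_stats(triples):
    """Mean word counts for whole triples and for relation phrases alone."""
    triple_lens = [len(f"{s} {r} {o}".split()) for s, r, o in triples]
    rel_lens = [len(r.split()) for _, r, _ in triples]
    return {"triple_mean": mean(triple_lens),
            "relation_mean": mean(rel_lens),
            "relation_sd": stdev(rel_lens) if len(rel_lens) > 1 else 0.0}
```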
Other Observations
The number of triples extracted from the scientific text is slightly larger than the number extracted from the Wikipedia text. This follows from the fact that the scientific sentences are on average roughly 7 words longer than the encyclopedic sentences.
The results of our experiment also confirm the notion that an unsupervised approach to extracting relations is important. We have identified 698 unique relation types that are part of triples agreed to be correct by all annotators. This number of relation types is derived from only 400 sentences. While not every relation type is essential for downstream tasks, it is clear that building specific extractors for each relation type in a supervised setting would be difficult.
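Counting the distinct relation types among the unanimously correct triples is straightforward; whether to lower-case or otherwise normalize the relation strings is a choice assumed here rather than one documented above.

```python
def unique_relation_types(correct_triples):
    """Number of distinct relation phrases among unanimously correct triples."""
    return len({relation.strip().lower() for _, relation, _ in correct_triples})
```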
Error Analysis
We now look more closely at the various errors that were generated by the two extractors.
Table 4 shows the sentences in which neither extractor produced triples. We see 3 distinct groups. The first consists of phrases that are incomplete sentences, usually originating from headings (e.g. Materials and methods). The next group consists of descriptive headings, potentially coming from paper titles or figure captions. We also see a group with more complex prepositional phrases. In general, these errors could be avoided by being more selective about the sentences chosen during random selection. Additionally, these systems could fall back to extracting just noun phrases with an underspecified relation type, thereby expressing a co-occurrence relation.
We also looked at cases where there was complete agreement by all annotators that a triple extraction was incorrect. In total, there were 138 such triples, originating from 76 unique sentences. Several patterns appeared in these sentences.
We also see similar errors to those pointed out by BIBREF8 , namely, uninformative extractions, the difficulty in handling n-ary relations that are latent in the text, difficulties handling negations, and very large argument lengths. In general, these errors together point to several areas for further improvement including:
Conclusion
The pace of change in the scientific literature means that interconnections and facts in the form of relations between entities are constantly being created. Open information extraction provides an important tool to keep up with that pace of change. We have provided evidence that unsupervised techniques are needed to be able to deal with the variety of relations present in text. The work presented here provides an independent evaluation of these tools in their use on scientific text. Past evaluations have focused on encyclopedic or news corpora which often have simpler structures. We have shown that existing OIE systems perform worse on scientific and medical content than on general audience content.
There are a range of avenues for future work. First, the application of the Crowd Truth framework BIBREF15 in the analysis of these results might prove to be useful, as we believe that the use of unanimous agreement tends to negatively impact the perceived performance of the OIE tools. Second, we think the application to n-ary relations and a deeper analysis of negative relations would be of interest. To do this kind of evaluation, an important area of future work is the development of guidelines and tasks for more complex analysis of sentences in a crowd sourcing environment. The ability, for example, to indicate argument boundaries or correct sentences can be expected of expert annotators but needs to be implemented in a manner that is efficient and easy for the general crowd worker. Third, we would like to expand the evaluation dataset to an even larger number of sentences. Lastly, there are a number of core natural language processing components that might be useful for OIE in this setting, for example, the use of syntactic features as suggested by BIBREF16 . Furthermore, we think that coreference is a crucial missing component and we are actively investigating improved coreference resolution for scientific texts.
To conclude, we hope that this evaluation provides further insights for implementors of these extraction tools to deal with the complexity of scientific and medical text. | 440 sentences, 2247 triples extracted from those sentences, and 11262 judgements on those triples. |
9a94dcee17cdb9a39d39977191e643adece58dfc | 9a94dcee17cdb9a39d39977191e643adece58dfc_0 | Q: Were the OpenIE systems more accurate on some scientific disciplines than others?
Text: Introduction
This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/
The scientific literature is growing at a rapid rate BIBREF0 . Making sense of this flood of literature, for example, to extract cancer pathways BIBREF1 or find geological features BIBREF2 , increasingly requires the application of natural language processing. Given the diversity of information and its constant flux, the use of unsupervised or distantly supervised techniques is of interest BIBREF3 . In this paper, we investigate one such unsupervised method, namely, Open Information Extraction (OIE) BIBREF4 . OIE is the task of the unsupervised creation of structured information from text. OIE is often used as a starting point for a number of downstream tasks including knowledge base construction, relation extraction, and question answering BIBREF5 .
While OIE has been applied to the scientific literature before BIBREF6 , we have not found a systematic evaluation of OIE as applied to scientific publications. The most recent evaluations of OIE extraction tools BIBREF7 , BIBREF8 have instead looked at the performance of these tools on traditional NLP information sources (i.e. encyclopedic and news-wire text). Indeed, as BIBREF8 noted, there is little work on the evaluation of OIE systems. Thus, the goal of this paper is to evaluate the performance of the state of the art in OIE systems on scientific text.
Specifically, we aim to test two hypotheses:
Additionally, we seek to gain insight into the value of unsupervised approaches to information extraction and also provide information useful to implementors of these systems. We note that our evaluation differs from existing OIE evaluations in that we use crowd-sourcing annotations instead of expert annotators. This allows for a larger number of annotators to be used. All of our data, annotations and analyses are made openly available.
The rest of the paper is organized as follows. We begin with a discussion of existing evaluation approaches and then describe the OIE systems that we evaluated. We then proceed to describe the datasets used in the evaluation and the annotation process that was employed. This is followed by the results of the evaluation including an error analysis. Finally, we conclude.
Existing Evaluation Approaches
OIE systems analyze sentences and emit relations between one predicate and two or more arguments (e.g. Washington :: was :: president). The arguments and predicates are not fixed to a given domain. (Note that throughout this paper we use the word `triple' interchangeably with binary relation.) Existing evaluation approaches for OIE systems have primarily taken a ground-truth-based approach: human annotators analyze sentences and determine the correct relations to be extracted, and systems are then evaluated with respect to the overlap or similarity of their extractions to the ground truth annotations, allowing the standard metrics of precision and recall to be reported.
This seems sensible but is actually problematic because of different but equivalent representations of the information in an article. For example, consider the sentence “The patient was treated with Emtricitabine, Etravirine, and Darunavir”. One possible extraction is:
(The patient :: was treated with :: Emtricitabine, Etravirine, and Darunavir)
Another possible extraction is:
(The patient :: was treated with :: Emtricitabine)
(The patient :: was treated with :: Etravirine)
(The patient :: was treated with :: Darunavir)
Neither of these is wrong, but by choosing one approach or the other a pre-constructed gold set will falsely penalize a system that uses the other approach.
From such evaluations and their own cross dataset evaluation, BIBREF8 list the following common errors committed by OIE systems:
In our evaluation, we take a different approach. We do not define ground truth relation extractions from the sentences in advance. Instead, we manually judge the correctness of each extraction after the fact. We feel that this is the crux of the information extraction challenge. Is what is being extracted correct or not? This approach enables us to consider many more relations through the use of a crowd-sourced annotation process. Our evaluation approach is similar to the qualitative analysis performed in BIBREF8 and the evaluation performed in BIBREF7 . However, our evaluation is able to use more judges (5 instead of 2) because we apply crowd sourcing. For our labelling instructions, we adapted those used by BIBREF7 to the crowd sourcing setting.
As previously noted, existing evaluations have only looked at encyclopedic or newspaper corpora. Several systems (e.g. BIBREF4 , BIBREF9 ) have looked at text from the web as well; however, as far as we know, none has specifically looked at evaluation on scientific and medical text.
Systems
We evaluate two OIE systems (i.e. extractors). The first, OpenIE 4 BIBREF5 , descends from two popular OIE systems OLLIE BIBREF10 and Reverb BIBREF10 . We view this as a baseline system. The second was MinIE BIBREF7 , which is reported as performing better than OLLIE, ClauseIE BIBREF9 and Stanford OIE BIBREF9 . MinIE focuses on the notion of minimization - producing compact extractions from sentences. In our experience using OIE on scientific text, we have found that these systems often produce overly specific extractions that do not provide the redundancy useful for downstream tasks. Hence, we thought this was a useful package to explore.
We note that both OpenIE 4 and MinIE support relation extractions that go beyond binary tuples, i.e. the extraction of n-ary relations, and that the most recent version of Open IE (version 5) is focused on n-ary relations. For ease of judgement, we focused on binary relations. Additionally, both systems support the detection of negative relations.
In terms of settings, we used the off-the-shelf settings for OpenIE 4. For MinIE, we used its "safe mode" option, which uses slightly more aggressive minimization than the standard setting. In the recent evaluation of MinIE, this setting performed roughly on par with the default options BIBREF7 . Driver code showing how we ran each system is available.
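Since the actual driver code is released alongside the dataset, the snippet below is only an illustrative sketch of invoking the two extractors over a file of sentences; the jar names, flags, and output paths are assumptions, not the tools' real command-line interfaces.

```python
import subprocess

def run_extractor(jar_path, input_path, output_path, extra_args=()):
    """Run an OIE extractor packaged as a jar over one-sentence-per-line input."""
    cmd = ["java", "-Xmx4g", "-jar", jar_path, *extra_args, input_path, output_path]
    subprocess.run(cmd, check=True)

# Hypothetical jar names and flags, for illustration only.
run_extractor("openie-assembly-4.x.jar", "sentences.txt", "openie4.tsv")
run_extractor("minie.jar", "sentences.txt", "minie.tsv", extra_args=["--mode", "safe"])
```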
Datasets
We used two different data sources in our evaluation. The first dataset (WIKI) was the same set of 200 sentences from Wikipedia used in BIBREF7 . These sentences were randomly selected by the creators of the dataset. This choice allows for a rough comparison between our results and theirs.
The second dataset (SCI) was a set of 220 sentences from the scientific literature. We sourced the sentences from the OA-STM corpus. This corpus is drawn from the 10 most published-in disciplines. It includes 11 articles each from the following domains: agriculture, astronomy, biology, chemistry, computer science, earth science, engineering, materials science, math, and medicine. The article text is made freely available and the corpus provides both an XML and a simple text version of each article.
We randomly selected 2 sentences with more than two words from each paper using the simple text version of the paper. We maintained the id of the source article and the line number for each sentence.
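A sketch of this sampling step, assuming one plain-text file per article and a naive regex sentence splitter (the actual splitter used is not specified above):

```python
import random
import re

def sample_sentences(paper_id, text, n=2, min_words=3):
    """Return up to `n` (paper_id, line_number, sentence) tuples, keeping only
    sentences with more than two words."""
    candidates = []
    for line_no, line in enumerate(text.splitlines(), start=1):
        for sent in re.split(r"(?<=[.!?])\s+", line.strip()):
            if len(sent.split()) >= min_words:
                candidates.append((paper_id, line_no, sent))
    return random.sample(candidates, min(n, len(candidates)))
```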
Annotation Process
We employed the following annotation process. Each OIE extractor was applied to both datasets with the settings described above. This resulted in the generation of triples for 199 of the 200 WIKI sentences and 206 of the 220 SCI sentences. That is there were some sentences in which no triples were extracted. We discuss later the sentences in which no triples were extracted. In total 2247 triples were extracted.
The sentences and their corresponding triples were then divided into tasks. Each task contained 10 sentences and all of their unique corresponding triples from a particular OIE system. Half of the ten sentences were randomly selected from SCI and the other half from WIKI. Crowd workers were asked to mark whether a triple was correct, that is, whether the triple reflected a consequence of the sentence. Examples of correct and incorrect triples were provided. Complete labelling instructions and the presentation of the HITs can be found with the dataset. All triples were labelled by at least 5 workers.
Note that, to ensure every HIT had 10 sentences, some sentences were duplicated. Furthermore, we did not mandate that all workers complete all HITs.
We followed recommended practices for the use of crowd sourcing in linguistics BIBREF11 . We used Amazon Mechanical Turk as a means to present the sentences and their corresponding triples to a crowd for annotation. Within Mechanical Turk tasks are called Human Intelligence Tasks (HITs). To begin, we collected a small set of sentences and triples with known correct answers. We did this by creating a series of internal HITs and loaded them the Mechanical Turk development environment called the Mechanical Turk Sandbox. The HITs were visible to a trusted group of colleagues who were asked to complete the HITs.
Having an internal team of workers attempt HITs provides us with two valuable aspects of the eventual production HITs. First, internal users are able to provide feedback related to usability and clarity of the task. They were asked to read the instructions and let us know if there was anything that was unclear. After taking the HITs, they are able to ask questions about anomalies or confusing situations they encounter and allow us to determine if specific types of HITs are either not appropriate for the task or might need further explanation in the instructions. In addition to the internal users direct feedback, we were also able to use the Mechanical Turk Requester functionality to monitor how long (in minutes and seconds) it took each worker to complete each HIT. This would come into factor how we decided on how much to pay each Worker per HIT after they were made available to the public.
The second significant outcome from the internal annotations was the generation of a set of `expected' correct triples. Having a this set of annotations is an integral part of two aspects of our crowdsourcing process. First, it allows us to create a qualification HIT. A qualification HIT is a HIT that is made available to the public with the understanding the Workers will be evaluated based on how closely they matched the annotations of the internal annotators. Based upon this, the Workers with the most matches would be invited to work on additional tasks. Second, we are able to add the internal set of triples randomly amongst the other relations we were seeking to have annotated. This allows us to monitor quality of the individual Workers over the course of the project. Note, none of this data was used in the actual evaluation. It was only for the purposes of qualifying Workers.
We are sensitive to issues that other researchers have in regards to Mechanical Turk Workers earning fair payment in exchange for their contributions to the HITs BIBREF12 . We used the time estimates from our internal annotation to price the task in order to be above US minimum wage. All workers were qualified before being issued tasks. Overall, we employed 10 crowd workers. On average it took 30 minutes for a worker to complete a HIT. In line with BIBREF13 , we monitored for potential non-performance or spam by looking for long response times and consecutive submitted results. We saw no indicators of low quality responses.
Judgement Data and Inter-Annotator Agreement
In total, 11262 judgements were obtained after running the annotation process. Every triple had at least 5 judgements from different annotators. All judgement data is made available. The proportion of overall agreement between annotators is 0.76 with a standard deviation of 0.25 on whether a triple is consequence of the given sentence. We also calculated inter-annotator agreement statistics. Using Krippendorff's alpha inter-annotator agreement was 0.44. This calculation was performed over all data and annotators as Krippendorff's alpha is designed to account for missing data and work across more than two annotators. Additionally, Fleiss' Kappa and Scott's pi were calculated pairwise between all annotators where there were overlapping ratings (i.e. raters had rated at least one triple in common). The average Fleiss's Kappa was 0.41 and the average of Scott's pi was 0.37. Using BIBREF14 as a guide, we interpret these statistics as suggesting there is moderate agreement between annotators and that agreement is above random chance. This moderate level of agreement is to be expected as the task itself can be difficult and requires judgement from the annotators at the margin.
Table 1 shows examples of triples that were associated with higher disagreement between annotators. One can see for example, in the third example, that annotators might be confused by the use of a pronoun (him). Another example is in the last sentence of the table, where one can see that there might be disagreement on whether the subsequent prepositional phrase behind light microscope analysis should be included as part of the extracted triple.
We take the variability of judgements into account when using this data to compute the performance of the two extraction tools. Hence, to make assessments as to whether a triple correctly reflects the content from which it is extracted, we rely on the unanimous positive agreement between crowd workers. That is to say that if we have 100% inter-annotator agreement that a triple was correctly extracted we label it as correct.
Experimental Results
Table 2 shows the results for the combinations of systems and data sources. The Correct Triples column contains the number of triples that are labelled as correct by all annotators. Total Triples is the total number of triples extracted by the given system over the specified data. Precision is calculated in the usual way, with Correct Triples treated as true positives. On average, 3.1 triples were extracted per sentence.
Figure 1 shows the performance of the extractors in terms of precision as inter-annotator agreement decreases. In this figure, we look only at agreement on triples where the majority agree that the triple is correct. Furthermore, to ease comparison, we only consider triples with exactly 5 judgements; this excludes 9 triples. We indicate not only the pair-wise inter-annotator agreement but also the number of annotators who have judged a triple to be correct. For example, at the 40% agreement level, at least 3 annotators have agreed that a triple is true. The figure separates the results by extractor and by data source.
We see that, as expected, the number of triples agreed to be correct grows as we relax the requirement for agreement. For example, analyzing Open IE 4's results, at the 100% agreement level we see a precision of 0.56, whereas at the 40% agreement level we see a precision of 0.78. Table 3 shows the total number of correct extractions at the three agreement levels.
Testing H1: Comparing the Performance of OIE on Scientific vs. Encyclopedic Text
From the data, we see that extractors perform better on sentences from Wikipedia (0.54 P) than scientific text (0.34 P). Additionally, we see that there is higher annotator agreement on whether triples extracted from Wikipedia and scientific text are correct or incorrect: 0.80 - SD 0.24 (WIKI) vs. 0.72 - SD 0.25 (SCI). A similar difference in agreement is observed when only looking at triples that are considered to be correct by the majority of annotators: 0.87 - SD 0.21 (WIKI) vs. 0.78 - SD 0.25 (SCI) . In both cases, the difference is significant with p-values $<$ 0.01 using Welch's t-test. The differences between data sources are also seen when looking at the individual extraction tools. For instance, for Open IE 4 the precision is 0.19 higher for wikipedia extractions over those from scientific text. With this evidence, we reject our first hypothesis that the performance of these extractors are similar across data sources.
Testing H2: Comparing the Performance of Systems
We also compare the output of the two extractors. In terms of precision, Open IE 4 performs much better across the two datasets (0.56 P vs. 0.39 P). Looking at triples considered to be correct by the majority of annotators, we see that Open IE 4 has higher inter-annotator agreement: 0.87 (SD 0.22) for Open IE 4 vs. 0.81 (SD 0.24) for MinIE. Focusing on scientific and medical text (SCI), again on triples majority-annotated as correct, Open IE 4 has higher inter-annotator agreement (Open IE 4: 0.83, SD 0.24 vs. MinIE: 0.76, SD 0.25). In both cases, the difference is significant with p-values $<$ 0.01 using Welch's t-test. This leads us to conclude that Open IE 4 produces triples that annotators are more likely to agree are correct.
MinIE provides many more correct extractions than OpenIE 4 (935 more across both datasets). The true recall numbers of the two systems can not be calculated with the data available, but the 40% difference in the numbers of correct extractions is strong evidence that the two systems do not have equivalent behavior.
A third indication of differences in their outputs comes from examining the complexity of the extracted relations. Open IE 4 generates longer triples on average (11.5 words) vs. 8.5 words for MinIE across all argument positions. However, Open IE 4 generates shorter relation types than MinIE (Open IE - 3.7 words; MiniIE 6.27 words) and the standard deviation in terms of word length is much more compact for Open IE 4 - 1 word vs 3 words for MinIE. Overall, our conclusion is that Open IE 4 performs better than MinIE both in terms of precision and compactness of relation types, while not matching MinIE's recall, and thus we reject our second hypothesis.
Other Observations
The number of triples extracted from the scientific text is slightly larger than the number extracted from the Wikipedia text. This follows from the fact that the scientific sentences are on average roughly 7 words longer than the encyclopedic sentences.
The results of our experiment also confirm the notion that an unsupervised approach to extracting relations is important. We have identified 698 unique relation types that are part of triples agreed to be correct by all annotators. This number of relation types is derived from only 400 sentences. While not every relation type is essential for downstream tasks, it is clear that building specific extractors for each relation type in a supervised setting would be difficult.
Error Analysis
We now look more closely at the various errors that were generated by the two extractors.
Table 4 shows the sentences in which neither extractor produced triples. We see 3 distinct groups. The first are phrases that are incomplete sentences usually originating from headings (e.g. Materials and methods). The next group are descriptive headings potentially coming from paper titles or figure captions. We also see a group with more complex prepositional phrases. In general, these errors could be avoided by being more selective of the sentences used for random selection. Additionally, these systems could look at potentially just extracting noun phrases with variable relation types, hence, expressing a cooccurrence relation.
We also looked at where there was complete agreement by all annotators that a triple extraction was incorrect. In total there were 138 of these triples originating from 76 unique sentences. There were several patterns that appeared in these sentences.
We also see similar errors to those pointed out by BIBREF8 , namely, uninformative extractions, the difficulty in handling n-ary relations that are latent in the text, difficulties handling negations, and very large argument lengths. In general, these errors together point to several areas for further improvement including:
Conclusion
The pace of change in the scientific literature means that interconnections and facts in the form of relations between entities are constantly being created. Open information extraction provides an important tool to keep up with that pace of change. We have provided evidence that unsupervised techniques are needed to be able to deal with the variety of relations present in text. The work presented here provides an independent evaluation of these tools in their use on scientific text. Past evaluations have focused on encyclopedic or news corpora which often have simpler structures. We have shown that existing OIE systems perform worse on scientific and medical content than on general audience content.
There are a range of avenues for future work. First, the application of Crowd Truth framework BIBREF15 in the analysis of these results might prove to be useful as we believe that the use of unanimous agreement tends to negatively impact the perceived performance of the OIE tools. Second, we think the application to n-ary relations and a deeper analysis of negative relations would be of interest. To do this kind of evaluation, an important area of future work is the development of guidelines and tasks for more complex analysis of sentences in a crowd sourcing environment. The ability, for example, to indicate argument boundaries or correct sentences can be expected of expert annotators but needs to implemented in a manner that is efficient and easy for the general crowd worker. Third, we would like to expand the evaluation dataset to an even larger numbers of sentences. Lastly, there are a number of core natural language processing components that might be useful for OIE in this setting, for example, the use of syntactic features as suggested by BIBREF16 . Furthermore, we think that coreference is a crucial missing component and we are actively investigating improved coreference resolution for scientific texts.
To conclude, we hope that this evaluation provides further insights for implementors of these extraction tools to deal with the complexity of scientific and medical text. | Unanswerable |
18e915b917c81056ceaaad5d6581781c0168dac9 | 18e915b917c81056ceaaad5d6581781c0168dac9_0 | Q: What is the most common error type?
Text: Introduction
This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/
The scientific literature is growing at a rapid rate BIBREF0 . To make sense of this flood of literature, for example, to extract cancer pathways BIBREF1 or find geological features BIBREF2 , increasingly requires the application of natural language processing. Given the diversity of information and its constant flux, the use of unsupervised or distantly supervised techniques are of interest BIBREF3 . In this paper, we investigate one such unsupervised method, namely, Open Information Extraction (OIE) BIBREF4 . OIE is the task of the unsupervised creation of structured information from text. OIE is often used as a starting point for a number of downstream tasks including knowledge base construction, relation extraction, and question answering BIBREF5 .
While OIE has been applied to the scientific literature before BIBREF6 , we have not found a systematic evaluation of OIE as applied to scientific publications. The most recent evaluations of OIE extraction tools BIBREF7 , BIBREF8 have instead looked at the performance of these tools on traditional NLP information sources (i.e. encyclopedic and news-wire text). Indeed, as BIBREF8 noted, there is little work on the evaluation of OIE systems. Thus, the goal of this paper is to evaluate the performance of the state of the art in OIE systems on scientific text.
Specifically, we aim to test two hypotheses:
Additionally, we seek to gain insight into the value of unsupervised approaches to information extraction and also provide information useful to implementors of these systems. We note that our evaluation differs from existing OIE evaluations in that we use crowd-sourcing annotations instead of expert annotators. This allows for a larger number of annotators to be used. All of our data, annotations and analyses are made openly available.
The rest of the paper is organized as follows. We begin with a discussion of existing evaluation approaches and then describe the OIE systems that we evaluated. We then proceed to describe the datasets used in the evaluation and the annotation process that was employed. This is followed by the results of the evaluation including an error analysis. Finally, we conclude.
Existing Evaluation Approaches
OIE systems analyze sentences and emit relations between one predicate and two or more arguments (e.g. Washington :: was :: president). The arguments and predicates are not fixed to a given domain. (Note that throughout this paper we use the word `triple' interchangeably with binary relation.) Existing evaluation approaches for OIE systems have primarily taken a ground-truth-based approach: human annotators analyze sentences and determine the correct relations to be extracted, and systems are then evaluated with respect to the overlap or similarity of their extractions to the ground truth annotations, allowing the standard metrics of precision and recall to be reported.
This seems sensible but is actually problematic because of different but equivalent representations of the information in an article. For example, consider the sentence “The patient was treated with Emtricitabine, Etravirine, and Darunavir”. One possible extraction is:
(The patient :: was treated with :: Emtricitabine, Etravirine, and Darunavir)
Another possible extraction is:
(The patient :: was treated with :: Emtricitabine)
(The patient :: was treated with :: Etravirine)
(The patient :: was treated with :: Darunavir)
Neither of these is wrong, but by choosing one approach or the other a pre-constructed gold set will falsely penalize a system that uses the other approach.
From such evaluations and their own cross dataset evaluation, BIBREF8 list the following common errors committed by OIE systems:
In our evaluation, we take a different approach. We do not define ground truth relation extractions from the sentences in advance. Instead, we manually judge the correctness of each extraction after the fact. We feel that this is the crux of the information extraction challenge. Is what is being extracted correct or not? This approach enables us to consider many more relations through the use of a crowd-sourced annotation process. Our evaluation approach is similar to the qualitative analysis performed in BIBREF8 and the evaluation performed in BIBREF7 . However, our evaluation is able to use more judges (5 instead of 2) because we apply crowd sourcing. For our labelling instructions, we adapted those used by BIBREF7 to the crowd sourcing setting.
As previously noted existing evaluations have also only looked at encyclopedic or newspaper corpora. Several systems (e.g. BIBREF4 , BIBREF9 ) have looked at text from the web as well, however, as far as we know, none have specifically looked at evaluation for scientific and medical text.
Systems
We evaluate two OIE systems (i.e. extractors). The first, OpenIE 4 BIBREF5 , descends from two popular OIE systems OLLIE BIBREF10 and Reverb BIBREF10 . We view this as a baseline system. The second was MinIE BIBREF7 , which is reported as performing better than OLLIE, ClauseIE BIBREF9 and Stanford OIE BIBREF9 . MinIE focuses on the notion of minimization - producing compact extractions from sentences. In our experience using OIE on scientific text, we have found that these systems often produce overly specific extractions that do not provide the redundancy useful for downstream tasks. Hence, we thought this was a useful package to explore.
We note that both OpenIE 4 and MinIE support relation extractions that go beyond binary tuples, i.e. the extraction of n-ary relations, and that the most recent version of Open IE (version 5) is focused on n-ary relations. For ease of judgement, we focused on binary relations. Additionally, both systems support the detection of negative relations.
In terms of settings, we used the off the shelf settings for OpenIE 4. For MinIE, we used their “safe mode" option, which uses slightly more aggressive minimization than the standard setting. In the recent evaluation of MiniIE, this setting performed roughly on par with the default options BIBREF7 . Driver code showing how we ran each system is available.
Datasets
We used two different data sources in our evaluation. The first dataset (WIKI) was the same set of 200 sentences from Wikipedia used in BIBREF7 . These sentences were randomly selected by the creators of the dataset. This choice allows for a rough comparison between our results and theirs.
The second dataset (SCI) was a set of 220 sentences from the scientific literature. We sourced the sentences from the OA-STM corpus. This corpus is drawn from the 10 most published-in disciplines. It includes 11 articles each from the following domains: agriculture, astronomy, biology, chemistry, computer science, earth science, engineering, materials science, math, and medicine. The article text is made freely available and the corpus provides both an XML and a simple text version of each article.
We randomly selected 2 sentences with more than two words from each paper using the simple text version of the paper. We maintained the id of the source article and the line number for each sentence.
Annotation Process
We employed the following annotation process. Each OIE extractor was applied to both datasets with the settings described above. This resulted in the generation of triples for 199 of the 200 WIKI sentences and 206 of the 220 SCI sentences. That is there were some sentences in which no triples were extracted. We discuss later the sentences in which no triples were extracted. In total 2247 triples were extracted.
The sentences and their corresponding triples were then divided. Each task contained 10 sentences and all of their unique corresponding triples from a particular OIE systems. Half of the ten sentences were randomly selected from SCI and the other half were randomly selected from WIKI. Crowd workers were asked to mark whether a triple was correct, namely, did the triple reflect the consequence of the sentence. Examples of correct and incorrect triples were provided. Complete labelling instructions and the presentation of the HITS can be found with the dataset. All triples were labelled by at least 5 workers.
Note that, to ensure every HIT had 10 sentences, some sentences were duplicated. Furthermore, we did not mandate that all workers complete all HITs.
We followed recommended practices for the use of crowd sourcing in linguistics BIBREF11 . We used Amazon Mechanical Turk as a means to present the sentences and their corresponding triples to a crowd for annotation. Within Mechanical Turk tasks are called Human Intelligence Tasks (HITs). To begin, we collected a small set of sentences and triples with known correct answers. We did this by creating a series of internal HITs and loaded them the Mechanical Turk development environment called the Mechanical Turk Sandbox. The HITs were visible to a trusted group of colleagues who were asked to complete the HITs.
Having an internal team of workers attempt HITs provides us with two valuable aspects of the eventual production HITs. First, internal users are able to provide feedback related to usability and clarity of the task. They were asked to read the instructions and let us know if there was anything that was unclear. After taking the HITs, they are able to ask questions about anomalies or confusing situations they encounter and allow us to determine if specific types of HITs are either not appropriate for the task or might need further explanation in the instructions. In addition to the internal users direct feedback, we were also able to use the Mechanical Turk Requester functionality to monitor how long (in minutes and seconds) it took each worker to complete each HIT. This would come into factor how we decided on how much to pay each Worker per HIT after they were made available to the public.
The second significant outcome from the internal annotations was the generation of a set of `expected' correct triples. Having a this set of annotations is an integral part of two aspects of our crowdsourcing process. First, it allows us to create a qualification HIT. A qualification HIT is a HIT that is made available to the public with the understanding the Workers will be evaluated based on how closely they matched the annotations of the internal annotators. Based upon this, the Workers with the most matches would be invited to work on additional tasks. Second, we are able to add the internal set of triples randomly amongst the other relations we were seeking to have annotated. This allows us to monitor quality of the individual Workers over the course of the project. Note, none of this data was used in the actual evaluation. It was only for the purposes of qualifying Workers.
We are sensitive to issues that other researchers have in regards to Mechanical Turk Workers earning fair payment in exchange for their contributions to the HITs BIBREF12 . We used the time estimates from our internal annotation to price the task in order to be above US minimum wage. All workers were qualified before being issued tasks. Overall, we employed 10 crowd workers. On average it took 30 minutes for a worker to complete a HIT. In line with BIBREF13 , we monitored for potential non-performance or spam by looking for long response times and consecutive submitted results. We saw no indicators of low quality responses.
Judgement Data and Inter-Annotator Agreement
In total, 11262 judgements were obtained after running the annotation process. Every triple had at least 5 judgements from different annotators. All judgement data is made available. The proportion of overall agreement between annotators is 0.76 with a standard deviation of 0.25 on whether a triple is consequence of the given sentence. We also calculated inter-annotator agreement statistics. Using Krippendorff's alpha inter-annotator agreement was 0.44. This calculation was performed over all data and annotators as Krippendorff's alpha is designed to account for missing data and work across more than two annotators. Additionally, Fleiss' Kappa and Scott's pi were calculated pairwise between all annotators where there were overlapping ratings (i.e. raters had rated at least one triple in common). The average Fleiss's Kappa was 0.41 and the average of Scott's pi was 0.37. Using BIBREF14 as a guide, we interpret these statistics as suggesting there is moderate agreement between annotators and that agreement is above random chance. This moderate level of agreement is to be expected as the task itself can be difficult and requires judgement from the annotators at the margin.
Table 1 shows examples of triples that were associated with higher disagreement between annotators. One can see for example, in the third example, that annotators might be confused by the use of a pronoun (him). Another example is in the last sentence of the table, where one can see that there might be disagreement on whether the subsequent prepositional phrase behind light microscope analysis should be included as part of the extracted triple.
We take the variability of judgements into account when using this data to compute the performance of the two extraction tools. Hence, to make assessments as to whether a triple correctly reflects the content from which it is extracted, we rely on the unanimous positive agreement between crowd workers. That is to say that if we have 100% inter-annotator agreement that a triple was correctly extracted we label it as correct.
Experimental Results
Table 2 shows the results for the combinations of systems and data sources. The Correct Triples column contains the number of triples that are labelled as correct by all annotators. Total Triples is the total number of triples extracted by the given system over the specified data. Precision is calculated in the usual way, with Correct Triples treated as true positives. On average, 3.1 triples were extracted per sentence.
Figure 1 shows the performance of extractors in terms of precision as inter-annotator agreement decreases. In this figure, we look only at agreement on triples where the majority agree that the triple is correct. Furthermore, to ease comparison, we only consider triples with 5 judgements this excludes 9 triples. We indicate not only the pair-wise inter-annotator agreement but also the number of annotators who have judged a triple to be correct. For example, at the 40% agreement level at least 3 annotators have agreed that a triple is true. The figure separates the results by extractor and by data source.
We see that, as expected, the number of triples agreed to be correct grows as we relax the requirement for agreement. For example, analyzing Open IE 4's results, at the 100% agreement level we see a precision of 0.56, whereas at the 40% agreement level we see a precision of 0.78. Table 3 shows the total number of correct extractions at the three agreement levels.
Testing H1: Comparing the Performance of OIE on Scientific vs. Encyclopedic Text
From the data, we see that extractors perform better on sentences from Wikipedia (0.54 P) than scientific text (0.34 P). Additionally, we see that there is higher annotator agreement on whether triples extracted from Wikipedia and scientific text are correct or incorrect: 0.80 - SD 0.24 (WIKI) vs. 0.72 - SD 0.25 (SCI). A similar difference in agreement is observed when only looking at triples that are considered to be correct by the majority of annotators: 0.87 - SD 0.21 (WIKI) vs. 0.78 - SD 0.25 (SCI) . In both cases, the difference is significant with p-values $<$ 0.01 using Welch's t-test. The differences between data sources are also seen when looking at the individual extraction tools. For instance, for Open IE 4 the precision is 0.19 higher for wikipedia extractions over those from scientific text. With this evidence, we reject our first hypothesis that the performance of these extractors are similar across data sources.
Testing H2: Comparing the Performance of Systems
We also compare the output of the two extractors. In terms of precision, Open IE 4 performs much better across the two datasets (0.56 P vs. 0.39 P). Looking at triples considered to be correct by the majority of annotators, we see that Open IE 4 has higher inter-annotator agreement: 0.87 (SD 0.22) for Open IE 4 vs. 0.81 (SD 0.24) for MinIE. Focusing on scientific and medical text (SCI), again on triples majority-annotated as correct, Open IE 4 has higher inter-annotator agreement (Open IE 4: 0.83, SD 0.24 vs. MinIE: 0.76, SD 0.25). In both cases, the difference is significant with p-values $<$ 0.01 using Welch's t-test. This leads us to conclude that Open IE 4 produces triples that annotators are more likely to agree are correct.
MinIE provides many more correct extractions than OpenIE 4 (935 more across both datasets). The true recall numbers of the two systems can not be calculated with the data available, but the 40% difference in the numbers of correct extractions is strong evidence that the two systems do not have equivalent behavior.
A third indication of differences in their outputs comes from examining the complexity of the extracted relations. Open IE 4 generates longer triples on average (11.5 words) vs. 8.5 words for MinIE across all argument positions. However, Open IE 4 generates shorter relation types than MinIE (Open IE - 3.7 words; MiniIE 6.27 words) and the standard deviation in terms of word length is much more compact for Open IE 4 - 1 word vs 3 words for MinIE. Overall, our conclusion is that Open IE 4 performs better than MinIE both in terms of precision and compactness of relation types, while not matching MinIE's recall, and thus we reject our second hypothesis.
Other Observations
The number of triples extracted from the scientific text is slightly larger than the number extracted from the Wikipedia text. This follows from the fact that the scientific sentences are on average roughly 7 words longer than the encyclopedic sentences.
The results of our experiment also confirm the notion that an unsupervised approach to extracting relations is important. We have identified 698 unique relation types that are part of triples agreed to be correct by all annotators. This number of relation types is derived from only 400 sentences. While not every relation type is essential for downstream tasks, it is clear that building specific extractors for each relation type in a supervised setting would be difficult.
Error Analysis
We now look more closely at the various errors that were generated by the two extractors.
Table 4 shows the sentences in which neither extractor produced triples. We see 3 distinct groups. The first are phrases that are incomplete sentences usually originating from headings (e.g. Materials and methods). The next group are descriptive headings potentially coming from paper titles or figure captions. We also see a group with more complex prepositional phrases. In general, these errors could be avoided by being more selective of the sentences used for random selection. Additionally, these systems could look at potentially just extracting noun phrases with variable relation types, hence, expressing a cooccurrence relation.
We also looked at where there was complete agreement by all annotators that a triple extraction was incorrect. In total there were 138 of these triples originating from 76 unique sentences. There were several patterns that appeared in these sentences.
We also see similar errors to those pointed out by BIBREF8 , namely, uninformative extractions, the difficulty in handling n-ary relations that are latent in the text, difficulties handling negations, and very large argument lengths. In general, these errors together point to several areas for further improvement including:
Conclusion
The pace of change in the scientific literature means that interconnections and facts in the form of relations between entities are constantly being created. Open information extraction provides an important tool to keep up with that pace of change. We have provided evidence that unsupervised techniques are needed to be able to deal with the variety of relations present in text. The work presented here provides an independent evaluation of these tools in their use on scientific text. Past evaluations have focused on encyclopedic or news corpora which often have simpler structures. We have shown that existing OIE systems perform worse on scientific and medical content than on general audience content.
There are a range of avenues for future work. First, the application of Crowd Truth framework BIBREF15 in the analysis of these results might prove to be useful as we believe that the use of unanimous agreement tends to negatively impact the perceived performance of the OIE tools. Second, we think the application to n-ary relations and a deeper analysis of negative relations would be of interest. To do this kind of evaluation, an important area of future work is the development of guidelines and tasks for more complex analysis of sentences in a crowd sourcing environment. The ability, for example, to indicate argument boundaries or correct sentences can be expected of expert annotators but needs to implemented in a manner that is efficient and easy for the general crowd worker. Third, we would like to expand the evaluation dataset to an even larger numbers of sentences. Lastly, there are a number of core natural language processing components that might be useful for OIE in this setting, for example, the use of syntactic features as suggested by BIBREF16 . Furthermore, we think that coreference is a crucial missing component and we are actively investigating improved coreference resolution for scientific texts.
To conclude, we hope that this evaluation provides further insights for implementors of these extraction tools to deal with the complexity of scientific and medical text. | all annotators that a triple extraction was incorrect |
9c68d6d5451395199ca08757157fbfea27f00f69 | 9c68d6d5451395199ca08757157fbfea27f00f69_0 | Q: Which OpenIE systems were used?
Text: Introduction
This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/
The scientific literature is growing at a rapid rate BIBREF0 . To make sense of this flood of literature, for example, to extract cancer pathways BIBREF1 or find geological features BIBREF2 , increasingly requires the application of natural language processing. Given the diversity of information and its constant flux, the use of unsupervised or distantly supervised techniques are of interest BIBREF3 . In this paper, we investigate one such unsupervised method, namely, Open Information Extraction (OIE) BIBREF4 . OIE is the task of the unsupervised creation of structured information from text. OIE is often used as a starting point for a number of downstream tasks including knowledge base construction, relation extraction, and question answering BIBREF5 .
While OIE has been applied to the scientific literature before BIBREF6 , we have not found a systematic evaluation of OIE as applied to scientific publications. The most recent evaluations of OIE extraction tools BIBREF7 , BIBREF8 have instead looked at the performance of these tools on traditional NLP information sources (i.e. encyclopedic and news-wire text). Indeed, as BIBREF8 noted, there is little work on the evaluation of OIE systems. Thus, the goal of this paper is to evaluate the performance of the state of the art in OIE systems on scientific text.
Specifically, we aim to test two hypotheses:
Additionally, we seek to gain insight into the value of unsupervised approaches to information extraction and also provide information useful to implementors of these systems. We note that our evaluation differs from existing OIE evaluations in that we use crowd-sourcing annotations instead of expert annotators. This allows for a larger number of annotators to be used. All of our data, annotations and analyses are made openly available.
The rest of the paper is organized as follows. We begin with a discussion of existing evaluation approaches and then describe the OIE systems that we evaluated. We then proceed to describe the datasets used in the evaluation and the annotation process that was employed. This is followed by the results of the evaluation including an error analysis. Finally, we conclude.
Existing Evaluation Approaches
OIE systems analyze sentences and emit relations between one predicate and two or more arguments (e.g. Washington :: was :: president). The arguments and predicates are not fixed to a given domain. (Note that throughout this paper we use the word “triple” to refer interchangeably to binary relations.) Existing evaluation approaches for OIE systems have primarily taken a ground truth-based approach. Human annotators analyze sentences and determine correct relations to be extracted. Systems are then evaluated with respect to the overlap or similarity of their extractions to the ground truth annotations, allowing the standard metrics of precision and recall to be reported.
This seems sensible but is actually problematic because of different but equivalent representations of the information in an article. For example, consider the sentence “The patient was treated with Emtricitabine, Etravirine, and Darunavir”. One possible extraction is:
(The patient :: was treated with :: Emtricitabine, Etravirine, and Darunavir)
Another possible extraction is:
(The patient :: was treated with :: Emtricitabine)
(The patient :: was treated with :: Etravirine)
(The patient :: was treated with :: Darunavir)
Neither of these is wrong, but by choosing one approach or the other a pre-constructed gold set will falsely penalize a system that uses the other approach.
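To make the penalty concrete, the following minimal Python sketch (with invented gold and system triples) scores output against a fixed gold set by exact matching; the decomposed extractions are all counted as errors even though they convey the same information.

# A hypothetical gold standard that chose the conjoined-argument representation.
gold = {("The patient", "was treated with", "Emtricitabine, Etravirine, and Darunavir")}

# A system that instead splits the coordination into one triple per drug.
system = {("The patient", "was treated with", "Emtricitabine"),
          ("The patient", "was treated with", "Etravirine"),
          ("The patient", "was treated with", "Darunavir")}

true_positives = gold & system                  # empty: no exact matches
precision = len(true_positives) / len(system)   # 0.0
recall = len(true_positives) / len(gold)        # 0.0
print(precision, recall)                        # both zero despite equivalent content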
From such evaluations and their own cross dataset evaluation, BIBREF8 list the following common errors committed by OIE systems:
In our evaluation, we take a different approach. We do not define ground truth relation extractions from the sentences in advance. Instead, we manually judge the correctness of each extraction after the fact. We feel that this is the crux of the information extraction challenge. Is what is being extracted correct or not? This approach enables us to consider many more relations through the use of a crowd-sourced annotation process. Our evaluation approach is similar to the qualitative analysis performed in BIBREF8 and the evaluation performed in BIBREF7 . However, our evaluation is able to use more judges (5 instead of 2) because we apply crowd sourcing. For our labelling instructions, we adapted those used by BIBREF7 to the crowd sourcing setting.
As previously noted, existing evaluations have also only looked at encyclopedic or newspaper corpora. Several systems (e.g. BIBREF4 , BIBREF9 ) have looked at text from the web as well; however, as far as we know, none have specifically looked at evaluation for scientific and medical text.
Systems
We evaluate two OIE systems (i.e. extractors). The first, OpenIE 4 BIBREF5 , descends from two popular OIE systems OLLIE BIBREF10 and Reverb BIBREF10 . We view this as a baseline system. The second was MinIE BIBREF7 , which is reported as performing better than OLLIE, ClauseIE BIBREF9 and Stanford OIE BIBREF9 . MinIE focuses on the notion of minimization - producing compact extractions from sentences. In our experience using OIE on scientific text, we have found that these systems often produce overly specific extractions that do not provide the redundancy useful for downstream tasks. Hence, we thought this was a useful package to explore.
We note that both OpenIE 4 and MinIE support relation extractions that go beyond binary tuples, supporting the extraction of n-ary relations. We also note that the most recent version of Open IE (version 5) is focused on n-ary relations. For ease of judgement, we focused on binary relations. Additionally, both systems support the detection of negative relations.
In terms of settings, we used the off-the-shelf settings for OpenIE 4. For MinIE, we used its “safe mode” option, which uses slightly more aggressive minimization than the standard setting. In the recent evaluation of MinIE, this setting performed roughly on par with the default options BIBREF7 . Driver code showing how we ran each system is available.
Datasets
We used two different data sources in our evaluation. The first dataset (WIKI) was the same set of 200 sentences from Wikipedia used in BIBREF7 . These sentences were randomly selected by the creators of the dataset. This choice allows for a rough comparison between our results and theirs.
The second dataset (SCI) was a set of 220 sentences from the scientific literature. We sourced the sentences from the OA-STM corpus. This corpus is derived from the 10 most published-in disciplines. It includes 11 articles from each of the following domains: agriculture, astronomy, biology, chemistry, computer science, earth science, engineering, materials science, math, and medicine. The article text is made freely available and the corpus provides both an XML and a simple text version of each article.
We randomly selected 2 sentences with more than two words from each paper using the simple text version of the paper. We maintained the id of the source article and the line number for each sentence.
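A minimal sketch of this sampling step is shown below (the file layout and field names are assumptions for illustration, not the corpus' actual format):

import random

def sample_sentences(article_id, lines, n=2, min_words=3):
    # Keep sentences with more than two words, remembering the article id and line number.
    candidates = [(article_id, line_no, sent) for line_no, sent in enumerate(lines)
                  if len(sent.split()) >= min_words]
    return random.sample(candidates, n)

# Example with an invented three-line "article".
print(sample_sentences("S0001", ["Materials and methods.",
                                 "The patient was treated with Darunavir.",
                                 "Samples were stored at -80 C until analysis."]))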
Annotation Process
We employed the following annotation process. Each OIE extractor was applied to both datasets with the settings described above. This resulted in the generation of triples for 199 of the 200 WIKI sentences and 206 of the 220 SCI sentences. That is, there were some sentences for which no triples were extracted; we discuss these sentences later. In total, 2247 triples were extracted.
The sentences and their corresponding triples were then divided into tasks. Each task contained 10 sentences and all of their unique corresponding triples from a particular OIE system. Half of the ten sentences were randomly selected from SCI and the other half were randomly selected from WIKI. Crowd workers were asked to mark whether a triple was correct, namely, did the triple reflect the consequence of the sentence. Examples of correct and incorrect triples were provided. Complete labelling instructions and the presentation of the HITs can be found with the dataset. All triples were labelled by at least 5 workers.
Note that, to ensure that every HIT had 10 sentences, some sentences were duplicated. Furthermore, we did not mandate that all workers complete all HITs.
We followed recommended practices for the use of crowd sourcing in linguistics BIBREF11 . We used Amazon Mechanical Turk as a means to present the sentences and their corresponding triples to a crowd for annotation. Within Mechanical Turk, tasks are called Human Intelligence Tasks (HITs). To begin, we collected a small set of sentences and triples with known correct answers. We did this by creating a series of internal HITs and loading them into the Mechanical Turk development environment, called the Mechanical Turk Sandbox. The HITs were visible to a trusted group of colleagues who were asked to complete them.
Having an internal team of workers attempt HITs provided us with two valuable outcomes for the eventual production HITs. First, internal users were able to provide feedback related to the usability and clarity of the task. They were asked to read the instructions and let us know if anything was unclear. After taking the HITs, they were able to ask questions about anomalies or confusing situations they encountered, allowing us to determine whether specific types of HITs were either not appropriate for the task or might need further explanation in the instructions. In addition to the internal users' direct feedback, we were also able to use the Mechanical Turk Requester functionality to monitor how long (in minutes and seconds) it took each worker to complete each HIT. This factored into how we decided how much to pay each Worker per HIT once the HITs were made available to the public.
The second significant outcome from the internal annotations was the generation of a set of 'expected' correct triples. Having this set of annotations is an integral part of two aspects of our crowdsourcing process. First, it allows us to create a qualification HIT. A qualification HIT is a HIT that is made available to the public with the understanding that Workers will be evaluated based on how closely they match the annotations of the internal annotators. Based upon this, the Workers with the most matches would be invited to work on additional tasks. Second, we are able to add the internal set of triples randomly amongst the other relations we were seeking to have annotated. This allows us to monitor the quality of individual Workers over the course of the project. Note that none of this data was used in the actual evaluation; it was only used for the purposes of qualifying Workers.
We are sensitive to issues that other researchers have in regards to Mechanical Turk Workers earning fair payment in exchange for their contributions to the HITs BIBREF12 . We used the time estimates from our internal annotation to price the task in order to be above US minimum wage. All workers were qualified before being issued tasks. Overall, we employed 10 crowd workers. On average it took 30 minutes for a worker to complete a HIT. In line with BIBREF13 , we monitored for potential non-performance or spam by looking for long response times and consecutive submitted results. We saw no indicators of low quality responses.
Judgement Data and Inter-Annotator Agreement
In total, 11262 judgements were obtained after running the annotation process. Every triple had at least 5 judgements from different annotators. All judgement data is made available. The proportion of overall agreement between annotators is 0.76 with a standard deviation of 0.25 on whether a triple is a consequence of the given sentence. We also calculated inter-annotator agreement statistics. Using Krippendorff's alpha, inter-annotator agreement was 0.44. This calculation was performed over all data and annotators, as Krippendorff's alpha is designed to account for missing data and to work across more than two annotators. Additionally, Fleiss' Kappa and Scott's pi were calculated pairwise between all annotators where there were overlapping ratings (i.e. raters had rated at least one triple in common). The average Fleiss' Kappa was 0.41 and the average Scott's pi was 0.37. Using BIBREF14 as a guide, we interpret these statistics as suggesting there is moderate agreement between annotators and that agreement is above random chance. This moderate level of agreement is to be expected, as the task itself can be difficult and requires judgement from the annotators at the margin.
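These statistics can be reproduced with standard tooling. A sketch using NLTK's AnnotationTask on invented judgement records of the form (annotator, triple id, label) is shown below; note that NLTK's default pairwise averaging may differ slightly from the calculation described above.

from nltk.metrics.agreement import AnnotationTask

# Each record is (annotator, item, label); the labels here are "correct"/"incorrect".
judgements = [("w1", "t1", "correct"), ("w2", "t1", "correct"), ("w3", "t1", "incorrect"),
              ("w1", "t2", "correct"), ("w2", "t2", "correct"), ("w3", "t2", "correct")]

task = AnnotationTask(data=judgements)
print("Krippendorff's alpha:", task.alpha())
print("Scott's pi:", task.pi())
print("Average pairwise kappa:", task.kappa())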
Table 1 shows examples of triples that were associated with higher disagreement between annotators. One can see for example, in the third example, that annotators might be confused by the use of a pronoun (him). Another example is in the last sentence of the table, where one can see that there might be disagreement on whether the subsequent prepositional phrase behind light microscope analysis should be included as part of the extracted triple.
We take the variability of judgements into account when using this data to compute the performance of the two extraction tools. Hence, to make assessments as to whether a triple correctly reflects the content from which it is extracted, we rely on the unanimous positive agreement between crowd workers. That is to say that if we have 100% inter-annotator agreement that a triple was correctly extracted we label it as correct.
Experimental Results
Table 2 shows the results for the combinations of systems and data sources. The Correct Triples column contains the number of triples that are labelled as being correct by all annotators. Total Triples is the total number of triples extracted by the given system over the specified data. Precision is calculated in the standard way, with Correct Triples treated as true positives. On average, 3.1 triples were extracted per sentence.
Figure 1 shows the performance of extractors in terms of precision as inter-annotator agreement decreases. In this figure, we look only at agreement on triples where the majority agree that the triple is correct. Furthermore, to ease comparison, we only consider triples with 5 judgements; this excludes 9 triples. We indicate not only the pair-wise inter-annotator agreement but also the number of annotators who have judged a triple to be correct. For example, at the 40% agreement level, at least 3 annotators have agreed that a triple is true. The figure separates the results by extractor and by data source.
We see that, as expected, the number of triples agreed to be correct grows as we relax the requirement for agreement. For example, analyzing Open IE's results, at the 100% agreement level we see a precision of 0.56, whereas at the 40% agreement level we see a precision of 0.78. Table 3 shows the total number of correct extractions at the three agreement levels.
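A sketch of this computation over per-triple vote counts is given below (the data layout is an assumption; in the released data every triple has at least five judgements):

def precision_at_agreement(votes, min_correct):
    # votes maps a triple id to the number of annotators who marked it correct.
    # A triple counts as a true positive once at least min_correct annotators agree.
    correct = sum(1 for v in votes.values() if v >= min_correct)
    return correct / len(votes)

toy_votes = {"t1": 5, "t2": 3, "t3": 1, "t4": 4}
print(precision_at_agreement(toy_votes, min_correct=5))  # unanimous agreement
print(precision_at_agreement(toy_votes, min_correct=3))  # at least 3 of 5 annotators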
Testing H1: Comparing the Performance of OIE on Scientific vs. Encyclopedic Text
From the data, we see that extractors perform better on sentences from Wikipedia (0.54 P) than scientific text (0.34 P). Additionally, we see that there is higher annotator agreement on whether triples extracted from Wikipedia are correct or incorrect than for triples extracted from scientific text: 0.80 - SD 0.24 (WIKI) vs. 0.72 - SD 0.25 (SCI). A similar difference in agreement is observed when only looking at triples that are considered to be correct by the majority of annotators: 0.87 - SD 0.21 (WIKI) vs. 0.78 - SD 0.25 (SCI). In both cases, the difference is significant with p-values $<$ 0.01 using Welch's t-test. The differences between data sources are also seen when looking at the individual extraction tools. For instance, for Open IE 4 the precision is 0.19 higher for Wikipedia extractions than for those from scientific text. With this evidence, we reject our first hypothesis that the performance of these extractors is similar across data sources.
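The significance test here is a standard Welch's t-test over per-triple agreement values; a sketch with SciPy follows (the two lists are invented placeholders for the per-triple agreement scores of each data source):

from scipy import stats

# Per-triple pairwise agreement values, one list per data source (placeholder numbers).
wiki_agreement = [1.0, 0.8, 0.9, 0.7, 1.0, 0.8]
sci_agreement = [0.6, 0.7, 1.0, 0.5, 0.8, 0.6]

# equal_var=False selects Welch's t-test, which does not assume equal variances.
t_stat, p_value = stats.ttest_ind(wiki_agreement, sci_agreement, equal_var=False)
print(t_stat, p_value)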
Testing H2: Comparing the Performance of Systems
We also compare the output of the two extractors. In terms of precision, Open IE 4 performs much better across the two datasets (0.56P vs 0.39P). Looking at triples considered to be correct by the majority of annotators, we see that Open IE 4 has higher inter-annotator agreement: 0.87 - SD 0.22 (Open IE) vs 0.81 - SD 0.24 (MinIE). Focusing on scientific and medical text (SCI), again on triples annotated as correct by the majority, Open IE has higher inter-annotator agreement (Open IE: 0.83 - SD 0.24 vs MinIE: 0.76 - SD 0.25). In both cases, the difference is significant with p-values $<$ 0.01 using Welch's t-test. This leads us to conclude that Open IE produces triples that annotators are more likely to agree are correct.
MinIE provides many more correct extractions than OpenIE 4 (935 more across both datasets). The true recall numbers of the two systems can not be calculated with the data available, but the 40% difference in the numbers of correct extractions is strong evidence that the two systems do not have equivalent behavior.
A third indication of differences in their outputs comes from examining the complexity of the extracted relations. Open IE 4 generates longer triples on average (11.5 words) vs. 8.5 words for MinIE across all argument positions. However, Open IE 4 generates shorter relation types than MinIE (Open IE - 3.7 words; MinIE - 6.27 words) and the standard deviation in terms of word length is much more compact for Open IE 4 - 1 word vs 3 words for MinIE. Overall, our conclusion is that Open IE 4 performs better than MinIE both in terms of precision and in terms of compactness of relation types, while not matching MinIE's recall, and thus we reject our second hypothesis.
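Such length statistics can be computed directly from the extractions; a sketch, assuming each triple is a (subject, relation, object) tuple of strings:

from statistics import mean, stdev

def length_stats(triples):
    # Token counts over the full triple and over the relation phrase alone.
    full_lengths = [len(" ".join(t).split()) for t in triples]
    relation_lengths = [len(t[1].split()) for t in triples]
    return (mean(full_lengths), stdev(full_lengths),
            mean(relation_lengths), stdev(relation_lengths))

print(length_stats([("The patient", "was treated with", "Darunavir"),
                    ("Washington", "was", "president")]))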
Other Observations
The number of triples extracted from the scientific text is slightly larger than the number extracted from the Wikipedia text. This follows from the fact that the scientific sentences are on average roughly 7 words longer than the encyclopedic ones.
The results of our experiment also confirm the notion that an unsupervised approach to extracting relations is important. We have identified 698 unique relation types that are part of triples agreed to be correct by all annotators. This number of relation types is derived from only 400 sentences. While not every relation type is essential for downstream tasks, it is clear that building specific extractors for each relation type in a supervised setting would be difficult.
Error Analysis
We now look more closely at the various errors that were generated by the two extractors.
Table 4 shows the sentences in which neither extractor produced triples. We see 3 distinct groups. The first are phrases that are incomplete sentences usually originating from headings (e.g. Materials and methods). The next group are descriptive headings potentially coming from paper titles or figure captions. We also see a group with more complex prepositional phrases. In general, these errors could be avoided by being more selective of the sentences used for random selection. Additionally, these systems could look at potentially just extracting noun phrases with variable relation types, hence, expressing a cooccurrence relation.
We also looked at where there was complete agreement by all annotators that a triple extraction was incorrect. In total there were 138 of these triples originating from 76 unique sentences. There were several patterns that appeared in these sentences.
We also see similar errors to those pointed out by BIBREF8 , namely, uninformative extractions, the difficulty in handling n-ary relations that are latent in the text, difficulties handling negations, and very large argument lengths. In general, these errors together point to several areas for further improvement including:
Conclusion
The pace of change in the scientific literature means that interconnections and facts in the form of relations between entities are constantly being created. Open information extraction provides an important tool to keep up with that pace of change. We have provided evidence that unsupervised techniques are needed to be able to deal with the variety of relations present in text. The work presented here provides an independent evaluation of these tools in their use on scientific text. Past evaluations have focused on encyclopedic or news corpora which often have simpler structures. We have shown that existing OIE systems perform worse on scientific and medical content than on general audience content.
There is a range of avenues for future work. First, the application of the Crowd Truth framework BIBREF15 in the analysis of these results might prove to be useful, as we believe that the use of unanimous agreement tends to negatively impact the perceived performance of the OIE tools. Second, we think the application to n-ary relations and a deeper analysis of negative relations would be of interest. To do this kind of evaluation, an important area of future work is the development of guidelines and tasks for more complex analysis of sentences in a crowd sourcing environment. The ability, for example, to indicate argument boundaries or correct sentences can be expected of expert annotators but needs to be implemented in a manner that is efficient and easy for the general crowd worker. Third, we would like to expand the evaluation dataset to an even larger number of sentences. Lastly, there are a number of core natural language processing components that might be useful for OIE in this setting, for example, the use of syntactic features as suggested by BIBREF16 . Furthermore, we think that coreference is a crucial missing component and we are actively investigating improved coreference resolution for scientific texts.
To conclude, we hope that this evaluation provides further insights for implementors of these extraction tools to deal with the complexity of scientific and medical text. | OpenIE4 and MiniIE |
372fbf2d120ca7a101f70d226057f9639bf1f9f2 | 372fbf2d120ca7a101f70d226057f9639bf1f9f2_0 | Q: What is the role of crowd-sourcing?
Text: Introduction
This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/
The scientific literature is growing at a rapid rate BIBREF0 . To make sense of this flood of literature, for example, to extract cancer pathways BIBREF1 or find geological features BIBREF2 , increasingly requires the application of natural language processing. Given the diversity of information and its constant flux, the use of unsupervised or distantly supervised techniques are of interest BIBREF3 . In this paper, we investigate one such unsupervised method, namely, Open Information Extraction (OIE) BIBREF4 . OIE is the task of the unsupervised creation of structured information from text. OIE is often used as a starting point for a number of downstream tasks including knowledge base construction, relation extraction, and question answering BIBREF5 .
While OIE has been applied to the scientific literature before BIBREF6 , we have not found a systematic evaluation of OIE as applied to scientific publications. The most recent evaluations of OIE extraction tools BIBREF7 , BIBREF8 have instead looked at the performance of these tools on traditional NLP information sources (i.e. encyclopedic and news-wire text). Indeed, as BIBREF8 noted, there is little work on the evaluation of OIE systems. Thus, the goal of this paper is to evaluate the performance of the state of the art in OIE systems on scientific text.
Specifically, we aim to test two hypotheses:
Additionally, we seek to gain insight into the value of unsupervised approaches to information extraction and also provide information useful to implementors of these systems. We note that our evaluation differs from existing OIE evaluations in that we use crowd-sourcing annotations instead of expert annotators. This allows for a larger number of annotators to be used. All of our data, annotations and analyses are made openly available.
The rest of the paper is organized as follows. We begin with a discussion of existing evaluation approaches and then describe the OIE systems that we evaluated. We then proceed to describe the datasets used in the evaluation and the annotation process that was employed. This is followed by the results of the evaluation including an error analysis. Finally, we conclude.
Existing Evaluation Approaches
OIE systems analyze sentences and emit relations between one predicate and two or more arguments (e.g. Washington :: was :: president). The arguments and predicates are not fixed to a given domain. (Note that throughout this paper we use the word “triple” to refer interchangeably to binary relations.) Existing evaluation approaches for OIE systems have primarily taken a ground truth-based approach. Human annotators analyze sentences and determine correct relations to be extracted. Systems are then evaluated with respect to the overlap or similarity of their extractions to the ground truth annotations, allowing the standard metrics of precision and recall to be reported.
This seems sensible but is actually problematic because of different but equivalent representations of the information in an article. For example, consider the sentence “The patient was treated with Emtricitabine, Etravirine, and Darunavir”. One possible extraction is:
(The patient :: was treated with :: Emtricitabine, Etravirine, and Darunavir)
Another possible extraction is:
(The patient :: was treated with :: Emtricitabine)
(The patient :: was treated with :: Etravirine)
(The patient :: was treated with :: Darunavir)
Neither of these is wrong, but by choosing one approach or the other a pre-constructed gold set will falsely penalize a system that uses the other approach.
From such evaluations and their own cross dataset evaluation, BIBREF8 list the following common errors committed by OIE systems:
In our evaluation, we take a different approach. We do not define ground truth relation extractions from the sentences in advance. Instead, we manually judge the correctness of each extraction after the fact. We feel that this is the crux of the information extraction challenge. Is what is being extracted correct or not? This approach enables us to consider many more relations through the use of a crowd-sourced annotation process. Our evaluation approach is similar to the qualitative analysis performed in BIBREF8 and the evaluation performed in BIBREF7 . However, our evaluation is able to use more judges (5 instead of 2) because we apply crowd sourcing. For our labelling instructions, we adapted those used by BIBREF7 to the crowd sourcing setting.
As previously noted, existing evaluations have also only looked at encyclopedic or newspaper corpora. Several systems (e.g. BIBREF4 , BIBREF9 ) have looked at text from the web as well; however, as far as we know, none have specifically looked at evaluation for scientific and medical text.
Systems
We evaluate two OIE systems (i.e. extractors). The first, OpenIE 4 BIBREF5 , descends from two popular OIE systems OLLIE BIBREF10 and Reverb BIBREF10 . We view this as a baseline system. The second was MinIE BIBREF7 , which is reported as performing better than OLLIE, ClauseIE BIBREF9 and Stanford OIE BIBREF9 . MinIE focuses on the notion of minimization - producing compact extractions from sentences. In our experience using OIE on scientific text, we have found that these systems often produce overly specific extractions that do not provide the redundancy useful for downstream tasks. Hence, we thought this was a useful package to explore.
We note that both OpenIE 4 and MinIE support relation extractions that go beyond binary tuples, supporting the extraction of n-ary relations. We also note that the most recent version of Open IE (version 5) is focused on n-ary relations. For ease of judgement, we focused on binary relations. Additionally, both systems support the detection of negative relations.
In terms of settings, we used the off-the-shelf settings for OpenIE 4. For MinIE, we used its “safe mode” option, which uses slightly more aggressive minimization than the standard setting. In the recent evaluation of MinIE, this setting performed roughly on par with the default options BIBREF7 . Driver code showing how we ran each system is available.
Datasets
We used two different data sources in our evaluation. The first dataset (WIKI) was the same set of 200 sentences from Wikipedia used in BIBREF7 . These sentences were randomly selected by the creators of the dataset. This choice allows for a rough comparison between our results and theirs.
The second dataset (SCI) was a set of 220 sentences from the scientific literature. We sourced the sentences from the OA-STM corpus. This corpus is derived from the 10 most published-in disciplines. It includes 11 articles from each of the following domains: agriculture, astronomy, biology, chemistry, computer science, earth science, engineering, materials science, math, and medicine. The article text is made freely available and the corpus provides both an XML and a simple text version of each article.
We randomly selected 2 sentences with more than two words from each paper using the simple text version of the paper. We maintained the id of the source article and the line number for each sentence.
Annotation Process
We employed the following annotation process. Each OIE extractor was applied to both datasets with the settings described above. This resulted in the generation of triples for 199 of the 200 WIKI sentences and 206 of the 220 SCI sentences. That is, there were some sentences for which no triples were extracted; we discuss these sentences later. In total, 2247 triples were extracted.
The sentences and their corresponding triples were then divided into tasks. Each task contained 10 sentences and all of their unique corresponding triples from a particular OIE system. Half of the ten sentences were randomly selected from SCI and the other half were randomly selected from WIKI. Crowd workers were asked to mark whether a triple was correct, namely, did the triple reflect the consequence of the sentence. Examples of correct and incorrect triples were provided. Complete labelling instructions and the presentation of the HITs can be found with the dataset. All triples were labelled by at least 5 workers.
Note that, to ensure that every HIT had 10 sentences, some sentences were duplicated. Furthermore, we did not mandate that all workers complete all HITs.
We followed recommended practices for the use of crowd sourcing in linguistics BIBREF11 . We used Amazon Mechanical Turk as a means to present the sentences and their corresponding triples to a crowd for annotation. Within Mechanical Turk, tasks are called Human Intelligence Tasks (HITs). To begin, we collected a small set of sentences and triples with known correct answers. We did this by creating a series of internal HITs and loading them into the Mechanical Turk development environment, called the Mechanical Turk Sandbox. The HITs were visible to a trusted group of colleagues who were asked to complete them.
Having an internal team of workers attempt HITs provided us with two valuable outcomes for the eventual production HITs. First, internal users were able to provide feedback related to the usability and clarity of the task. They were asked to read the instructions and let us know if anything was unclear. After taking the HITs, they were able to ask questions about anomalies or confusing situations they encountered, allowing us to determine whether specific types of HITs were either not appropriate for the task or might need further explanation in the instructions. In addition to the internal users' direct feedback, we were also able to use the Mechanical Turk Requester functionality to monitor how long (in minutes and seconds) it took each worker to complete each HIT. This factored into how we decided how much to pay each Worker per HIT once the HITs were made available to the public.
The second significant outcome from the internal annotations was the generation of a set of 'expected' correct triples. Having this set of annotations is an integral part of two aspects of our crowdsourcing process. First, it allows us to create a qualification HIT. A qualification HIT is a HIT that is made available to the public with the understanding that Workers will be evaluated based on how closely they match the annotations of the internal annotators. Based upon this, the Workers with the most matches would be invited to work on additional tasks. Second, we are able to add the internal set of triples randomly amongst the other relations we were seeking to have annotated. This allows us to monitor the quality of individual Workers over the course of the project. Note that none of this data was used in the actual evaluation; it was only used for the purposes of qualifying Workers.
We are sensitive to issues that other researchers have in regards to Mechanical Turk Workers earning fair payment in exchange for their contributions to the HITs BIBREF12 . We used the time estimates from our internal annotation to price the task in order to be above US minimum wage. All workers were qualified before being issued tasks. Overall, we employed 10 crowd workers. On average it took 30 minutes for a worker to complete a HIT. In line with BIBREF13 , we monitored for potential non-performance or spam by looking for long response times and consecutive submitted results. We saw no indicators of low quality responses.
Judgement Data and Inter-Annotator Agreement
In total, 11262 judgements were obtained after running the annotation process. Every triple had at least 5 judgements from different annotators. All judgement data is made available. The proportion of overall agreement between annotators is 0.76 with a standard deviation of 0.25 on whether a triple is a consequence of the given sentence. We also calculated inter-annotator agreement statistics. Using Krippendorff's alpha, inter-annotator agreement was 0.44. This calculation was performed over all data and annotators, as Krippendorff's alpha is designed to account for missing data and to work across more than two annotators. Additionally, Fleiss' Kappa and Scott's pi were calculated pairwise between all annotators where there were overlapping ratings (i.e. raters had rated at least one triple in common). The average Fleiss' Kappa was 0.41 and the average Scott's pi was 0.37. Using BIBREF14 as a guide, we interpret these statistics as suggesting there is moderate agreement between annotators and that agreement is above random chance. This moderate level of agreement is to be expected, as the task itself can be difficult and requires judgement from the annotators at the margin.
Table 1 shows examples of triples that were associated with higher disagreement between annotators. One can see for example, in the third example, that annotators might be confused by the use of a pronoun (him). Another example is in the last sentence of the table, where one can see that there might be disagreement on whether the subsequent prepositional phrase behind light microscope analysis should be included as part of the extracted triple.
We take the variability of judgements into account when using this data to compute the performance of the two extraction tools. Hence, to make assessments as to whether a triple correctly reflects the content from which it is extracted, we rely on the unanimous positive agreement between crowd workers. That is to say that if we have 100% inter-annotator agreement that a triple was correctly extracted we label it as correct.
Experimental Results
Table 2 shows the results for the combinations of systems and data sources. The Correct Triples column contains the number of triples that are labelled as being correct by all annotators. Total Triples is the total number of triples extracted by the given system over the specified data. Precision is calculated in the standard way, with Correct Triples treated as true positives. On average, 3.1 triples were extracted per sentence.
Figure 1 shows the performance of extractors in terms of precision as inter-annotator agreement decreases. In this figure, we look only at agreement on triples where the majority agree that the triple is correct. Furthermore, to ease comparison, we only consider triples with 5 judgements; this excludes 9 triples. We indicate not only the pair-wise inter-annotator agreement but also the number of annotators who have judged a triple to be correct. For example, at the 40% agreement level, at least 3 annotators have agreed that a triple is true. The figure separates the results by extractor and by data source.
We see that, as expected, the number of triples agreed to be correct grows as we relax the requirement for agreement. For example, analyzing Open IE's results, at the 100% agreement level we see a precision of 0.56, whereas at the 40% agreement level we see a precision of 0.78. Table 3 shows the total number of correct extractions at the three agreement levels.
Testing H1: Comparing the Performance of OIE on Scientific vs. Encyclopedic Text
From the data, we see that extractors perform better on sentences from Wikipedia (0.54 P) than scientific text (0.34 P). Additionally, we see that there is higher annotator agreement on whether triples extracted from Wikipedia are correct or incorrect than for triples extracted from scientific text: 0.80 - SD 0.24 (WIKI) vs. 0.72 - SD 0.25 (SCI). A similar difference in agreement is observed when only looking at triples that are considered to be correct by the majority of annotators: 0.87 - SD 0.21 (WIKI) vs. 0.78 - SD 0.25 (SCI). In both cases, the difference is significant with p-values $<$ 0.01 using Welch's t-test. The differences between data sources are also seen when looking at the individual extraction tools. For instance, for Open IE 4 the precision is 0.19 higher for Wikipedia extractions than for those from scientific text. With this evidence, we reject our first hypothesis that the performance of these extractors is similar across data sources.
Testing H2: Comparing the Performance of Systems
We also compare the output of the two extractors. In terms of precision, Open IE 4 performs much better across the two datasets (0.56P vs 0.39P). Looking at triples considered to be correct by the majority of annotators, we see that Open IE 4 has higher inter-annotator agreement: 0.87 - SD 0.22 (Open IE) vs 0.81 - SD 0.24 (MinIE). Focusing on scientific and medical text (SCI), again on triples annotated as correct by the majority, Open IE has higher inter-annotator agreement (Open IE: 0.83 - SD 0.24 vs MinIE: 0.76 - SD 0.25). In both cases, the difference is significant with p-values $<$ 0.01 using Welch's t-test. This leads us to conclude that Open IE produces triples that annotators are more likely to agree are correct.
MinIE provides many more correct extractions than OpenIE 4 (935 more across both datasets). The true recall numbers of the two systems can not be calculated with the data available, but the 40% difference in the numbers of correct extractions is strong evidence that the two systems do not have equivalent behavior.
A third indication of differences in their outputs comes from examining the complexity of the extracted relations. Open IE 4 generates longer triples on average (11.5 words) vs. 8.5 words for MinIE across all argument positions. However, Open IE 4 generates shorter relation types than MinIE (Open IE - 3.7 words; MinIE - 6.27 words) and the standard deviation in terms of word length is much more compact for Open IE 4 - 1 word vs 3 words for MinIE. Overall, our conclusion is that Open IE 4 performs better than MinIE both in terms of precision and in terms of compactness of relation types, while not matching MinIE's recall, and thus we reject our second hypothesis.
Other Observations
The number of triples extracted from the scientific text is slightly larger than the number extracted from the Wikipedia text. This follows from the fact that the scientific sentences are on average roughly 7 words longer than the encyclopedic ones.
The results of our experiment also confirm the notion that an unsupervised approach to extracting relations is important. We have identified 698 unique relation types that are part of triples agreed to be correct by all annotators. This number of relation types is derived from only 400 sentences. While not every relation type is essential for downstream tasks, it is clear that building specific extractors for each relation type in a supervised setting would be difficult.
Error Analysis
We now look more closely at the various errors that were generated by the two extractors.
Table 4 shows the sentences in which neither extractor produced triples. We see 3 distinct groups. The first are phrases that are incomplete sentences usually originating from headings (e.g. Materials and methods). The next group are descriptive headings potentially coming from paper titles or figure captions. We also see a group with more complex prepositional phrases. In general, these errors could be avoided by being more selective of the sentences used for random selection. Additionally, these systems could look at potentially just extracting noun phrases with variable relation types, hence, expressing a cooccurrence relation.
We also looked at where there was complete agreement by all annotators that a triple extraction was incorrect. In total there were 138 of these triples originating from 76 unique sentences. There were several patterns that appeared in these sentences.
We also see similar errors to those pointed out by BIBREF8 , namely, uninformative extractions, the difficulty in handling n-ary relations that are latent in the text, difficulties handling negations, and very large argument lengths. In general, these errors together point to several areas for further improvement including:
Conclusion
The pace of change in the scientific literature means that interconnections and facts in the form of relations between entities are constantly being created. Open information extraction provides an important tool to keep up with that pace of change. We have provided evidence that unsupervised techniques are needed to be able to deal with the variety of relations present in text. The work presented here provides an independent evaluation of these tools in their use on scientific text. Past evaluations have focused on encyclopedic or news corpora which often have simpler structures. We have shown that existing OIE systems perform worse on scientific and medical content than on general audience content.
There is a range of avenues for future work. First, the application of the Crowd Truth framework BIBREF15 in the analysis of these results might prove to be useful, as we believe that the use of unanimous agreement tends to negatively impact the perceived performance of the OIE tools. Second, we think the application to n-ary relations and a deeper analysis of negative relations would be of interest. To do this kind of evaluation, an important area of future work is the development of guidelines and tasks for more complex analysis of sentences in a crowd sourcing environment. The ability, for example, to indicate argument boundaries or correct sentences can be expected of expert annotators but needs to be implemented in a manner that is efficient and easy for the general crowd worker. Third, we would like to expand the evaluation dataset to an even larger number of sentences. Lastly, there are a number of core natural language processing components that might be useful for OIE in this setting, for example, the use of syntactic features as suggested by BIBREF16 . Furthermore, we think that coreference is a crucial missing component and we are actively investigating improved coreference resolution for scientific texts.
To conclude, we hope that this evaluation provides further insights for implementors of these extraction tools to deal with the complexity of scientific and medical text. | Crowd workers were asked to mark whether a triple was correct, namely, did the triple reflect the consequence of the sentence. |
f14ff780c28addab1d738f676c4ec0b4106356b6 | f14ff780c28addab1d738f676c4ec0b4106356b6_0 | Q: How are meta vertices computed?
Text: Introduction and related work
Keywords are terms (i.e. expressions) that best describe the subject of a document BIBREF0 . A good keyword effectively summarizes the content of the document and allows it to be efficiently retrieved when needed. Traditionally, keyword assignment was a manual task, but with the emergence of large amounts of textual data, automatic keyword extraction methods have become indispensable. Despite a considerable effort from the research community, state-of-the-art keyword extraction algorithms leave much to be desired and their performance is still lower than on many other core NLP tasks BIBREF1 . The first keyword extraction methods mostly followed a supervised approach BIBREF2 , BIBREF3 , BIBREF4 : they first extract keyword features and then train a classifier on a gold standard dataset. For example, KEA BIBREF4 , a state of the art supervised keyword extraction algorithm is based on the Naive Bayes machine learning algorithm. While these methods offer quite good performance, they rely on an annotated gold standard dataset and require a (relatively) long training process. In contrast, unsupervised approaches need no training and can be applied directly without relying on a gold standard document collection. They can be further divided into statistical and graph-based methods. The former, such as YAKE BIBREF5 , BIBREF6 , KP-MINER BIBREF7 and RAKE BIBREF8 , use statistical characteristics of the texts to capture keywords, while the latter, such as Topic Rank BIBREF9 , TextRank BIBREF10 , Topical PageRank BIBREF11 and Single Rank BIBREF12 , build graphs to rank words based on their position in the graph. Among statistical approaches, the state-of-the-art keyword extraction algorithm is YAKE BIBREF5 , BIBREF6 , which is also one of the best performing keyword extraction algorithms overall; it defines a set of five features capturing keyword characteristics which are heuristically combined to assign a single score to every keyword. On the other hand, among graph-based approaches, Topic Rank BIBREF9 can be considered state-of-the-art; candidate keywords are clustered into topics and used as vertices in the final graph, used for keyword extraction. Next, a graph-based ranking model is applied to assign a significance score to each topic and keywords are generated by selecting a candidate from each of the top-ranked topics. Network-based methodology has also been successfully applied to the task of topic extraction BIBREF13 .
The method that we propose in this paper, RaKUn, is a graph-based keyword extraction method. We exploit some of the ideas from the area of graph aggregation-based learning, where, for example, graph convolutional neural networks and similar approaches were shown to yield high quality vertex representations by aggregating their neighborhoods' feature space BIBREF14 . This work implements some of the similar ideas (albeit not in a neural network setting), where redundant information is aggregated into meta vertices in a similar manner. Similar efforts were shown as useful for hierarchical subnetwork aggregation in sensor networks BIBREF15 and in biological use cases of simulation of large proteins BIBREF16 .
The main contributions of this paper are as follows. The notion of load centrality has, to our knowledge, not yet been sufficiently exploited for keyword extraction. We show that this fast measure offers performance competitive with other widely used centralities, such as, for example, the PageRank centrality (used in BIBREF10 ). To our knowledge, this work is the first to introduce the notion of meta vertices with the aim of aggregating similar vertices, following ideas similar to those of the statistical method YAKE BIBREF5 , which is considered state-of-the-art for keyword extraction. Next, as part of the proposed RaKUn algorithm, we extend the extraction from unigrams to bigram and trigram keywords based on load centrality scores computed for the considered tokens. Last but not least, we demonstrate how arbitrary textual corpora can be transformed into weighted graphs whilst maintaining global sequential information, offering the opportunity to exploit potential context not naturally present in statistical methods.
The paper is structured as follows. We first present the text to graph transformation approach (Section SECREF2 ), followed by the introduction of the RaKUn keyword extractor (Section SECREF3 ). We continue with qualitative evaluation (Section SECREF4 ) and quantitative evaluation (Section SECREF5 ), before concluding the paper in Section 6.
Transforming texts to graphs
We first discuss how the texts are transformed to graphs, on which RaKUn operates. Next, we formally state the problem of keyword extraction and discuss its relation to graph centrality metrics.
Representing text
In this work we consider directed graphs. Let INLINEFORM0 represent a graph comprised of a set of vertices INLINEFORM1 and a set of edges ( INLINEFORM2 ), which are ordered pairs. Further, each edge can have a real-valued weight assigned. Let INLINEFORM3 represent a document comprised of tokens INLINEFORM4 . The order in which tokens in text appear is known, thus INLINEFORM5 is a totally ordered set. A potential way of constructing a graph from a document is by simply observing word co-occurrences: when two words co-occur, they form an edge. However, such approaches do not take into account the sequential nature of the words, meaning that the order is lost. We attempt to take this aspect into account as follows. The given corpus is traversed, and each element INLINEFORM6 , together with its successor INLINEFORM7 , forms a directed edge INLINEFORM8 . Finally, such edges are weighted according to the number of times they appear in the given corpus. Thus the graph, constructed after traversing the given corpus, consists of all local neighborhoods (order one) merged into a single joint structure. Global contextual information is potentially kept intact (via the weights), even though it needs to be detected via network analysis, as proposed next.
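A minimal sketch of this construction using networkx is given below (tokenization is naive whitespace splitting here; the simplifications described next are omitted):

import networkx as nx

def corpus_to_graph(tokens):
    # Add a directed edge from every token to its successor, weighted by how often
    # that ordered pair occurs in the corpus.
    graph = nx.DiGraph()
    for first, second in zip(tokens, tokens[1:]):
        if graph.has_edge(first, second):
            graph[first][second]["weight"] += 1
        else:
            graph.add_edge(first, second, weight=1)
    return graph

graph = corpus_to_graph("keyword extraction builds a graph and keyword extraction ranks vertices".split())
print(graph["keyword"]["extraction"]["weight"])  # 2: the ordered pair occurs twice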
Improving graph quality by meta vertex construction
A naïve approach to constructing a graph, as discussed in the previous section, commonly yields noisy graphs, rendering learning tasks harder. Therefore, we next discuss the selected approaches we employ in order to reduce both the computational complexity and the spatial complexity of constructing the graph, as well as increasing its quality (for the given down-stream task).
First, we consider the following heuristics which reduce the complexity of the graph that we construct for keyword extraction: Considered token length (while traversing the document INLINEFORM0 , only tokens of length INLINEFORM1 are considered), and next, lemmatization (tokens can be lemmatized, offering spatial benefits and avoiding redundant vertices in the final graph). The two modifications yield a potentially “simpler” graph, which is more suitable and faster for mining.
Even if the optional lemmatization step is applied, one can still aim at further reducing the graph complexity by merging similar vertices. This step is called meta vertex construction. The motivation can be explained by the fact that even similar lemmas can be mapped to the same keyword (e.g., mechanic and mechanical; normal and abnormal). This step also captures spelling errors (similar vertices that will not be handled by lemmatization), spelling differences (e.g., British vs. American English), non-standard writing (e.g., in Twitter data), mistakes in lemmatization, or an unavailable or omitted lemmatization step.
The meta-vertex construction step works as follows. Let INLINEFORM0 represent the set of vertices, as defined above. A meta vertex INLINEFORM1 is comprised of a set of vertices that are elements of INLINEFORM2 , i.e. INLINEFORM3 . Let INLINEFORM4 denote the INLINEFORM5 -th meta vertex. We construct a given INLINEFORM6 so that for each INLINEFORM7 , INLINEFORM8 's initial edges (prior to merging it into a meta vertex) are rewired to the newly added INLINEFORM9 . Note that such edges connect to vertices which are not a part of INLINEFORM10 . Thus, both the number of vertices, as well as edges get reduced substantially. This feature is implemented via the following procedure:
Meta vertex candidate identification. Edit distance and word lengths distance are used to determine whether two words should be merged into a meta vertex (only if length distance threshold is met, the more expensive edit distance is computed).
The meta vertex creation. As common identifiers, we use the stemmed version of the original vertices and if there is more than one resulting stem, we select the vertex from the identified candidates that has the highest centrality value in the graph and its stemmed version is introduced as a novel vertex (meta vertex).
The edges of the words entailed in the meta vertex are next rewired to the meta vertex.
The two original words are removed from the graph.
The procedure is repeated for all candidate pairs.
A schematic representation of meta vertex construction is shown in Figure FIGREF3 . The yellow and blue groups of vertices each form a meta vertex; the resulting (right) graph is thus substantially reduced, both with respect to the number of vertices and the number of edges.
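A simplified sketch of this aggregation step is given below; the thresholds mirror the description above, NLTK's edit distance is an illustrative choice, and the stemming and centrality-based selection of the surviving identifier are omitted for brevity.

import networkx as nx
from nltk.metrics.distance import edit_distance

def merge_similar_vertices(graph, max_len_diff=3, max_edit=2):
    # Single greedy pass: contract similar vertex pairs so that the edges of the
    # absorbed vertex are rewired to the surviving one.
    nodes = list(graph.nodes())
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if u not in graph or v not in graph:
                continue  # one of the pair was already merged away
            if abs(len(u) - len(v)) > max_len_diff:
                continue  # cheap length filter before the more expensive edit distance
            if edit_distance(u, v) > max_edit:
                continue
            graph = nx.contracted_nodes(graph, u, v, self_loops=False)
    return graph

graph = nx.DiGraph([("mechanic", "tool"), ("mechanical", "system")])
print(list(merge_similar_vertices(graph).edges()))  # edges of "mechanical" rewired to "mechanic"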
Keyword identification
Up to this point, we discussed how the graph used for keyword extraction is constructed. In this work, we exploit the notion of load centrality, a fast measure for estimating the importance of vertices in graphs. Intuitively, the load of a vertex measures how much of the shortest-path traffic between all pairs of vertices passes through that vertex, which makes it closely related to betweenness centrality; the highest-ranked vertices under this measure are taken as keyword candidates.
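Given the corpus graph, scoring and ranking vertices is then a short computation with networkx; a sketch on a toy graph follows (the full method additionally expands top-ranked tokens into bigram and trigram candidates):

import networkx as nx

def top_keywords(graph, k=10):
    # Rank vertices by load centrality and keep the k highest-scoring tokens.
    scores = nx.load_centrality(graph)
    return sorted(scores, key=scores.get, reverse=True)[:k]

graph = nx.DiGraph([("keyword", "extraction"), ("extraction", "builds"),
                    ("builds", "a"), ("a", "graph"), ("graph", "keyword")])
print(top_keywords(graph, k=3))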
Qualitative evaluation
RaKUn can be used also for visualization of keywords in a given document or document corpus. A visualization of extracted keywords is applied to an example from wiki20 BIBREF19 (for dataset description see Section SECREF15 ), where we visualize both the global corpus graph, as well as a local (document) view where keywords are emphasized, see Figures FIGREF13 and FIGREF14 , respectively. It can be observed that the global graph's topology is far from uniform — even though we did not perform any tests of scale-freeness, we believe the constructed graphs are subject to distinct topologies, where keywords play prominent roles.
Quantitative evaluation
This section discusses the experimental setting used to validate the proposed RaKUn approach against state-of-the-art baselines. We first describe the datasets, and continue with the presentation of the experimental setting and results.
Datasets
For RaKUn evaluation, we used 14 gold standard datasets from the list of BIBREF5 , BIBREF6 , from which we selected datasets in English. Detailed dataset descriptions and statistics can be found in Table TABREF17 , while full statistics and files for download can be found online. Most datasets are from the domain of computer science or contain multiple domains. They are very diverse in terms of the number of documents (ranging from wiki20 with 20 documents to Inspec with 2,000 documents), in terms of the average number of gold standard keywords per document (from 5.07 in kdd to 48.92 in 500N-KPCrowd-v1.1), and in terms of the average length of the documents (from 75.97 in kdd to 8332.34 in SemEval2017).
Experimental setting
We adopted the same evaluation procedure as used for the series of results recently introduced by the YAKE authors BIBREF6 . Five-fold cross validation was used to determine the overall performance, for which we measured Precision, Recall and F1 score, with the latter reported in Table TABREF24 . Keywords were stemmed prior to evaluation. As the number of keywords in a gold standard document is not necessarily equal to the number of extracted keywords (in our experiments INLINEFORM0 =10), when computing recall we divide the number of correctly extracted keywords by the keyword-count parameter INLINEFORM1 whenever the number of gold standard keywords is higher than INLINEFORM2 .
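A sketch of the per-document scoring with this recall adjustment (keywords assumed to be already stemmed strings; in the actual evaluation, true and false positives and negatives are summed over documents and folds before computing the final scores):

def prf_at_k(extracted, gold, k=10):
    preds = extracted[:k]
    correct = len(set(preds) & set(gold))
    precision = correct / len(preds)
    # The recall denominator is capped at k when the gold standard lists more than k keywords.
    recall = correct / min(len(gold), k)
    f1 = 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(prf_at_k(["graph", "keyword", "network", "centrality"],
               ["keyword", "graph", "extraction"], k=4))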
Selecting default configuration. First, we used a dedicated run for determining the default parameters. The cross validation was performed as follows. For each train-test dataset split, we kept the documents in the test fold intact, whilst performing a grid search on the train part to find the best parametrization. Finally, the selected configuration was used to extract keywords on the unseen test set. For each train-test split, we thus obtained the number of true and false positives, as well as true and false negatives, which were summed up and, after all folds were considered, used to obtain final F1 scores, which served for default parameter selection. The grid search was conducted over the following parameter range Num keywords: 10, Num tokens (the number of tokens a keyword can consist of): Count threshold (minimum support used to determine potential bigram candidates): Word length difference threshold (maximum difference in word length used to determine whether a given pair of words shall be aggregated): INLINEFORM0 , Edit length difference (maximum edit distance allowed to consider a given pair of words for aggregation): INLINEFORM1 , Lemmatization: [yes, no].
Even though one can use the described grid-search fine-tuning procedure to select the best setting for individual datasets, we observed that in nearly all cases the best settings were the same. We therefore selected this setting as the default, which can also be used on new unlabeled data. The default parameter setting was as follows: the number of tokens was set to 1 (only unigrams), so the Count threshold was not needed; for meta vertex construction, the Word length difference threshold was set to 3 and the Edit distance to 2. Words were initially lemmatized. Next, we report the results using these selected parameters (the same across all datasets), by which we also test the general usefulness of the approach.
Results
The results are presented in Table TABREF24 , where we report on F1 with the default parameter setting of RaKUn, together with the results from related work, as reported in the github table of the YAKE BIBREF5 .
We first observe that, over this selection of datasets, the proposed RaKUn achieves the best result on more datasets than any other method. We also see that it performs notably better on some of the datasets, whereas on the remainder it performs worse than state-of-the-art approaches. Such results demonstrate that the proposed method finds keywords differently, indicating that load centrality, combined with meta vertices, represents a promising research avenue. The datasets where the proposed method outperforms the current state-of-the-art results are: 500N-KPCrowd-v1.1, Schutz2008, fao30 and wiki20. In addition, RaKUn also achieves competitive results on citeulike180. A look at the gold standard keywords in these datasets reveals that they contain many single-word units, which is why the default configuration (which returns unigrams only) was able to perform so well.
Four of these five datasets (500N-KPCrowd-v1.1, Schutz2008, fao30, wiki20) are also the ones with the highest average number of keywords per document with at least 33.23 keywords per document, while the fifth dataset (citeulike180) also has a relatively large value (18.42). Similarly, four of the five well-performing datasets (Schutz2008, fao30, citeulike180, wiki20) include long documents (more than 3,900 words), with the exception being 500N-KPCrowd-v1.1. For details, see Table TABREF17 . We observe that the proposed RaKUn outperforms the majority of other competitive graph-based methods. For example, the most similar variants Topical PageRank and TextRank do not perform as well on the majority of the considered datasets. Furthermore, RaKUn also outperforms KEA, a supervised keyword learner (e.g., very high difference in performance on 500N-KPCrowd-v1.1 and Schutz2008 datasets), indicating unsupervised learning from the graph's structure offers a more robust keyword extraction method than learning a classifier directly.
Conclusions and further work
In this work we proposed RaKUn, a novel unsupervised keyword extraction algorithm which exploits the efficient computation of load centrality, combined with the introduction of meta vertices, which notably reduce corpus graph sizes. The method is fast, and performs well compared to state-of-the-art such as YAKE and graph-based keyword extractors. In further work, we will test the method on other languages. We also believe additional semantic background knowledge information could be used to prune the graph's structure even further, and potentially introduce keywords that are inherently not even present in the text (cf. BIBREF29 ). The proposed method does not attempt to exploit meso-scale graph structure, such as convex skeletons or communities, which are known to play prominent roles in real-world networks and could allow for vertex aggregation based on additional graph properties. We believe the proposed method could also be extended using the Ricci-Oliver BIBREF30 flows on weighted graphs. | Meta vertex candidate identification. Edit distance and word lengths distance are used to determine whether two words should be merged into a meta vertex (only if length distance threshold is met, the more expensive edit distance is computed)., The meta vertex creation. As common identifiers, we use the stemmed version of the original vertices and if there is more than one resulting stem, we select the vertex from the identified candidates that has the highest centrality value in the graph and its stemmed version is introduced as a novel vertex (meta vertex). |
b799936d6580c0e95102027175d3fe184f0ee253 | b799936d6580c0e95102027175d3fe184f0ee253_0 | Q: How are graphs derived from a given text?
Text: Introduction and related work
Keywords are terms (i.e. expressions) that best describe the subject of a document BIBREF0 . A good keyword effectively summarizes the content of the document and allows it to be efficiently retrieved when needed. Traditionally, keyword assignment was a manual task, but with the emergence of large amounts of textual data, automatic keyword extraction methods have become indispensable. Despite a considerable effort from the research community, state-of-the-art keyword extraction algorithms leave much to be desired and their performance is still lower than on many other core NLP tasks BIBREF1 . The first keyword extraction methods mostly followed a supervised approach BIBREF2 , BIBREF3 , BIBREF4 : they first extract keyword features and then train a classifier on a gold standard dataset. For example, KEA BIBREF4 , a state of the art supervised keyword extraction algorithm is based on the Naive Bayes machine learning algorithm. While these methods offer quite good performance, they rely on an annotated gold standard dataset and require a (relatively) long training process. In contrast, unsupervised approaches need no training and can be applied directly without relying on a gold standard document collection. They can be further divided into statistical and graph-based methods. The former, such as YAKE BIBREF5 , BIBREF6 , KP-MINER BIBREF7 and RAKE BIBREF8 , use statistical characteristics of the texts to capture keywords, while the latter, such as Topic Rank BIBREF9 , TextRank BIBREF10 , Topical PageRank BIBREF11 and Single Rank BIBREF12 , build graphs to rank words based on their position in the graph. Among statistical approaches, the state-of-the-art keyword extraction algorithm is YAKE BIBREF5 , BIBREF6 , which is also one of the best performing keyword extraction algorithms overall; it defines a set of five features capturing keyword characteristics which are heuristically combined to assign a single score to every keyword. On the other hand, among graph-based approaches, Topic Rank BIBREF9 can be considered state-of-the-art; candidate keywords are clustered into topics and used as vertices in the final graph, used for keyword extraction. Next, a graph-based ranking model is applied to assign a significance score to each topic and keywords are generated by selecting a candidate from each of the top-ranked topics. Network-based methodology has also been successfully applied to the task of topic extraction BIBREF13 .
The method that we propose in this paper, RaKUn, is a graph-based keyword extraction method. We exploit some of the ideas from the area of graph aggregation-based learning, where, for example, graph convolutional neural networks and similar approaches were shown to yield high quality vertex representations by aggregating their neighborhoods' feature space BIBREF14 . This work implements some of the similar ideas (albeit not in a neural network setting), where redundant information is aggregated into meta vertices in a similar manner. Similar efforts were shown as useful for hierarchical subnetwork aggregation in sensor networks BIBREF15 and in biological use cases of simulation of large proteins BIBREF16 .
The main contributions of this paper are as follows. The notion of load centrality was to our knowledge not yet sufficiently exploited for keyword extraction. We show that this fast measure offers competitive performance to other widely used centralities, such as for example the PageRank centrality (used in BIBREF10 ). To our knowledge, this work is the first to introduce the notion of meta vertices with the aim of aggregating similar vertices, following similar ideas to the statistical method YAKE BIBREF5 , which is considered a state-of-the-art for the keyword extraction. Next, as part of the proposed RaKUn algorithm we extend the extraction from unigrams also to bigram and threegram keywords based on load centrality scores computed for considered tokens. Last but not least, we demonstrate how arbitrary textual corpora can be transformed into weighted graphs whilst maintaining global sequential information, offering the opportunity to exploit potential context not naturally present in statistical methods.
The paper is structured as follows. We first present the text to graph transformation approach (Section SECREF2 ), followed by the introduction of the RaKUn keyword extractor (Section SECREF3 ). We continue with qualitative evaluation (Section SECREF4 ) and quantitative evaluation (Section SECREF5 ), before concluding the paper in Section 6.
Transforming texts to graphs
We first discuss how the texts are transformed to graphs, on which RaKUn operates. Next, we formally state the problem of keyword extraction and discuss its relation to graph centrality metrics.
Representing text
In this work we consider directed graphs. Let INLINEFORM0 represent a graph comprised of a set of vertices INLINEFORM1 and a set of edges ( INLINEFORM2 ), which are ordered pairs. Further, each edge can have a real-valued weight assigned. Let INLINEFORM3 represent a document comprised of tokens INLINEFORM4 . The order in which tokens in text appear is known, thus INLINEFORM5 is a totally ordered set. A potential way of constructing a graph from a document is by simply observing word co-occurrences. When two words co-occur, they are used as an edge. However, such approaches do not take into account the sequence nature of the words, meaning that the order is lost. We attempt to take this aspect into account as follows. The given corpus is traversed, and for each element INLINEFORM6 , its successor INLINEFORM7 , together with a given element, forms a directed edge INLINEFORM8 . Finally, such edges are weighted according to the number of times they appear in a given corpus. Thus the graph, constructed after traversing a given corpus, consists of all local neighborhoods (order one), merged into a single joint structure. Global contextual information is potentially kept intact (via weights), even though it needs to be detected via network analysis as proposed next.
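As a concrete illustration of this construction (a sketch under stated assumptions, not the reference implementation), the directed, weighted graph can be built with networkx as follows; the optional token-length filter and lemmatization hook anticipate the heuristics discussed in the next subsection.

```python
# Illustrative sketch of the described text-to-graph step: each token and its
# successor form a directed edge, and edge weights count how often the ordered
# pair appears in the corpus.
import networkx as nx

def corpus_to_graph(documents, min_token_length=3, lemmatize=None):
    graph = nx.DiGraph()
    for tokens in documents:                  # each document is a list of tokens
        if lemmatize is not None:
            tokens = [lemmatize(t) for t in tokens]
        tokens = [t for t in tokens if len(t) >= min_token_length]
        for first, second in zip(tokens, tokens[1:]):
            if graph.has_edge(first, second):
                graph[first][second]["weight"] += 1
            else:
                graph.add_edge(first, second, weight=1)
    return graph
```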
Improving graph quality by meta vertex construction
A naïve approach to constructing a graph, as discussed in the previous section, commonly yields noisy graphs, rendering learning tasks harder. Therefore, we next discuss the selected approaches we employ in order to reduce both the computational complexity and the spatial complexity of constructing the graph, as well as increasing its quality (for the given down-stream task).
First, we consider the following heuristics which reduce the complexity of the graph that we construct for keyword extraction: Considered token length (while traversing the document INLINEFORM0 , only tokens of length INLINEFORM1 are considered), and next, lemmatization (tokens can be lemmatized, offering spatial benefits and avoiding redundant vertices in the final graph). The two modifications yield a potentially “simpler” graph, which is more suitable and faster for mining.
Even if the optional lemmatization step is applied, one can still aim at further reducing the graph complexity by merging similar vertices. This step is called meta vertex construction. The motivation can be explained by the fact that even similar lemmas can be mapped to the same keyword (e.g., mechanic and mechanical; normal and abnormal). This step also captures spelling errors (similar vertices that will not be handled by lemmatization), spelling differences (e.g., British vs. American English), non-standard writing (e.g., in Twitter data), mistakes in lemmatization, or an unavailable or omitted lemmatization step.
The meta-vertex construction step works as follows. Let INLINEFORM0 represent the set of vertices, as defined above. A meta vertex INLINEFORM1 is comprised of a set of vertices that are elements of INLINEFORM2 , i.e. INLINEFORM3 . Let INLINEFORM4 denote the INLINEFORM5 -th meta vertex. We construct a given INLINEFORM6 so that for each INLINEFORM7 , INLINEFORM8 's initial edges (prior to merging it into a meta vertex) are rewired to the newly added INLINEFORM9 . Note that such edges connect to vertices which are not a part of INLINEFORM10 . Thus, both the number of vertices, as well as edges get reduced substantially. This feature is implemented via the following procedure:
Meta vertex candidate identification. Edit distance and word lengths distance are used to determine whether two words should be merged into a meta vertex (only if length distance threshold is met, the more expensive edit distance is computed).
The meta vertex creation. As common identifiers, we use the stemmed version of the original vertices and if there is more than one resulting stem, we select the vertex from the identified candidates that has the highest centrality value in the graph and its stemmed version is introduced as a novel vertex (meta vertex).
The edges of the words entailed in the meta vertex are next rewired to the meta vertex.
The two original words are removed from the graph.
The procedure is repeated for all candidate pairs.
A schematic representation of meta vertex construction is shown in Figure FIGREF3 . The yellow and blue groups of vertices both form a meta vertex, the resulting (right) graph is thus substantially reduced, both with respect to the number of vertices, as well as the number of edges.
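A rough illustrative sketch of the merging procedure follows (a simplification, not the released code): candidate pairs are first screened by word-length difference, the more expensive edit distance is computed only for surviving pairs, and matched vertices are contracted so that their edges are rewired; the representative vertex is chosen here by load centrality, while the stemming-based renaming described above is omitted for brevity.

```python
# Simplified sketch of meta vertex construction; not the original implementation.
import networkx as nx
from itertools import combinations

def edit_distance(a, b):
    # standard dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def merge_meta_vertices(graph, length_diff=3, max_edit=2):
    centrality = nx.load_centrality(graph)
    for u, v in combinations(list(graph.nodes()), 2):
        if u not in graph or v not in graph:
            continue                          # one of the pair was already merged
        if abs(len(u) - len(v)) > length_diff:
            continue                          # cheap screen before edit distance
        if edit_distance(u, v) > max_edit:
            continue
        keep, drop = (u, v) if centrality[u] >= centrality[v] else (v, u)
        graph = nx.contracted_nodes(graph, keep, drop, self_loops=False)
    return graph
```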
Keyword identification
Up to this point, we discussed how the graph used for keyword extraction is constructed. In this work, we exploit the notion of load centrality, a fast measure for estimating the importance of vertices in graphs: intuitively, it measures how much of the shortest-path traffic between all pairs of vertices passes through a given vertex.
Qualitative evaluation
RaKUn can be used also for visualization of keywords in a given document or document corpus. A visualization of extracted keywords is applied to an example from wiki20 BIBREF19 (for dataset description see Section SECREF15 ), where we visualize both the global corpus graph, as well as a local (document) view where keywords are emphasized, see Figures FIGREF13 and FIGREF14 , respectively. It can be observed that the global graph's topology is far from uniform — even though we did not perform any tests of scale-freeness, we believe the constructed graphs are subject to distinct topologies, where keywords play prominent roles.
Quantitative evaluation
This section discusses the experimental setting used to validate the proposed RaKUn approach against state-of-the-art baselines. We first describe the datasets, and continue with the presentation of the experimental setting and results.
Datasets
For RaKUn evaluation, we used 14 gold standard datasets from the list of BIBREF5 , BIBREF6 , from which we selected datasets in English. Detailed dataset descriptions and statistics can be found in Table TABREF17 , while full statistics and files for download can be found online. Most datasets are from the domain of computer science or contain multiple domains. They are very diverse in terms of the number of documents (ranging from 20 documents in wiki20 to 2,000 in Inspec), in terms of the average number of gold standard keywords per document (from 5.07 in kdd to 48.92 in 500N-KPCrowd-v1.1), and in terms of the average length of the documents (from 75.97 in kdd to 8332.34 in SemEval2017).
Experimental setting
We adopted the same evaluation procedure as used for the series of results recently introduced by the YAKE authors BIBREF6 . Five-fold cross validation was used to determine the overall performance, for which we measured Precision, Recall and F1 score, with the latter reported in Table TABREF24 . Keywords were stemmed prior to evaluation. As the number of keywords in a gold standard document is not necessarily equal to the number of extracted keywords (in our experiments INLINEFORM0 =10), when computing recall we divide the number of correctly extracted keywords by the keyword-count parameter INLINEFORM1 whenever the number of gold standard keywords exceeds INLINEFORM2 .
Selecting default configuration. First, we used a dedicated run for determining the default parameters. The cross validation was performed as follows. For each train-test dataset split, we kept the documents in the test fold intact, whilst performing a grid search on the train part to find the best parametrization. Finally, the selected configuration was used to extract keywords on the unseen test set. For each train-test split, we thus obtained the number of true and false positives, as well as true and false negatives, which were summed up and, after all folds were considered, used to obtain final F1 scores, which served for default parameter selection. The grid search was conducted over the following parameter ranges: Num keywords: 10; Num tokens (the number of tokens a keyword can consist of); Count threshold (minimum support used to determine potential bigram candidates); Word length difference threshold (maximum difference in word length used to determine whether a given pair of words shall be aggregated): INLINEFORM0 ; Edit length difference (maximum edit distance allowed to consider a given pair of words for aggregation): INLINEFORM1 ; Lemmatization: [yes, no].
Even if one can use the described grid-search fine-tuning procedure to select the best setting for individual datasets, we observed that in nearly all cases the best settings were the same. We therefore selected this setting as the default, which can also be used on new unlabeled data. The default parameter setting was as follows: the number of tokens was set to 1, the Count threshold was thus not needed (only unigrams), and for meta vertex construction the Word length difference threshold was set to 3 and the Edit distance to 2. Words were initially lemmatized. Next, we report the results using these selected parameters (the same across all datasets), by which we also test the general usefulness of the approach.
Results
The results are presented in Table TABREF24 , where we report on F1 with the default parameter setting of RaKUn, together with the results from related work, as reported in the github table of the YAKE BIBREF5 .
We first observe that, over this selection of datasets, the proposed RaKUn achieves the best result on more datasets than any other method. We also see that it performs notably better on some of the datasets, whereas on the remainder it performs worse than state-of-the-art approaches. Such results demonstrate that the proposed method finds keywords differently, indicating that load centrality, combined with meta vertices, represents a promising research avenue. The datasets where the proposed method outperforms the current state-of-the-art results are: 500N-KPCrowd-v1.1, Schutz2008, fao30 and wiki20. In addition, RaKUn also achieves competitive results on citeulike180. A look at the gold standard keywords in these datasets reveals that they contain many single-word units, which is why the default configuration (which returns unigrams only) was able to perform so well.
Four of these five datasets (500N-KPCrowd-v1.1, Schutz2008, fao30, wiki20) are also the ones with the highest average number of keywords per document with at least 33.23 keywords per document, while the fifth dataset (citeulike180) also has a relatively large value (18.42). Similarly, four of the five well-performing datasets (Schutz2008, fao30, citeulike180, wiki20) include long documents (more than 3,900 words), with the exception being 500N-KPCrowd-v1.1. For details, see Table TABREF17 . We observe that the proposed RaKUn outperforms the majority of other competitive graph-based methods. For example, the most similar variants Topical PageRank and TextRank do not perform as well on the majority of the considered datasets. Furthermore, RaKUn also outperforms KEA, a supervised keyword learner (e.g., very high difference in performance on 500N-KPCrowd-v1.1 and Schutz2008 datasets), indicating unsupervised learning from the graph's structure offers a more robust keyword extraction method than learning a classifier directly.
Conclusions and further work
In this work we proposed RaKUn, a novel unsupervised keyword extraction algorithm which exploits the efficient computation of load centrality, combined with the introduction of meta vertices, which notably reduce corpus graph sizes. The method is fast, and performs well compared to state-of-the-art such as YAKE and graph-based keyword extractors. In further work, we will test the method on other languages. We also believe additional semantic background knowledge information could be used to prune the graph's structure even further, and potentially introduce keywords that are inherently not even present in the text (cf. BIBREF29 ). The proposed method does not attempt to exploit meso-scale graph structure, such as convex skeletons or communities, which are known to play prominent roles in real-world networks and could allow for vertex aggregation based on additional graph properties. We believe the proposed method could also be extended using the Ricci-Oliver BIBREF30 flows on weighted graphs. | The given corpus is traversed, and for each element INLINEFORM6 , its successor INLINEFORM7 , together with a given element, forms a directed edge INLINEFORM8 . Finally, such edges are weighted according to the number of times they appear in a given corpus. Thus the graph, constructed after traversing a given corpus, consists of all local neighborhoods (order one), merged into a single joint structure. Global contextual information is potentially kept intact (via weights), even though it needs to be detected via network analysis |
568ce2f5355d009ec9bc1471fb5ea74655f7e554 | 568ce2f5355d009ec9bc1471fb5ea74655f7e554_0 | Q: In what sense is the proposed method interpretable?
Text: Introduction and related work
Keywords are terms (i.e. expressions) that best describe the subject of a document BIBREF0 . A good keyword effectively summarizes the content of the document and allows it to be efficiently retrieved when needed. Traditionally, keyword assignment was a manual task, but with the emergence of large amounts of textual data, automatic keyword extraction methods have become indispensable. Despite a considerable effort from the research community, state-of-the-art keyword extraction algorithms leave much to be desired and their performance is still lower than on many other core NLP tasks BIBREF1 . The first keyword extraction methods mostly followed a supervised approach BIBREF2 , BIBREF3 , BIBREF4 : they first extract keyword features and then train a classifier on a gold standard dataset. For example, KEA BIBREF4 , a state of the art supervised keyword extraction algorithm is based on the Naive Bayes machine learning algorithm. While these methods offer quite good performance, they rely on an annotated gold standard dataset and require a (relatively) long training process. In contrast, unsupervised approaches need no training and can be applied directly without relying on a gold standard document collection. They can be further divided into statistical and graph-based methods. The former, such as YAKE BIBREF5 , BIBREF6 , KP-MINER BIBREF7 and RAKE BIBREF8 , use statistical characteristics of the texts to capture keywords, while the latter, such as Topic Rank BIBREF9 , TextRank BIBREF10 , Topical PageRank BIBREF11 and Single Rank BIBREF12 , build graphs to rank words based on their position in the graph. Among statistical approaches, the state-of-the-art keyword extraction algorithm is YAKE BIBREF5 , BIBREF6 , which is also one of the best performing keyword extraction algorithms overall; it defines a set of five features capturing keyword characteristics which are heuristically combined to assign a single score to every keyword. On the other hand, among graph-based approaches, Topic Rank BIBREF9 can be considered state-of-the-art; candidate keywords are clustered into topics and used as vertices in the final graph, used for keyword extraction. Next, a graph-based ranking model is applied to assign a significance score to each topic and keywords are generated by selecting a candidate from each of the top-ranked topics. Network-based methodology has also been successfully applied to the task of topic extraction BIBREF13 .
The method that we propose in this paper, RaKUn, is a graph-based keyword extraction method. We exploit some of the ideas from the area of graph aggregation-based learning, where, for example, graph convolutional neural networks and similar approaches were shown to yield high quality vertex representations by aggregating their neighborhoods' feature space BIBREF14 . This work implements some of the similar ideas (albeit not in a neural network setting), where redundant information is aggregated into meta vertices in a similar manner. Similar efforts were shown as useful for hierarchical subnetwork aggregation in sensor networks BIBREF15 and in biological use cases of simulation of large proteins BIBREF16 .
The main contributions of this paper are as follows. The notion of load centrality was to our knowledge not yet sufficiently exploited for keyword extraction. We show that this fast measure offers competitive performance to other widely used centralities, such as for example the PageRank centrality (used in BIBREF10 ). To our knowledge, this work is the first to introduce the notion of meta vertices with the aim of aggregating similar vertices, following similar ideas to the statistical method YAKE BIBREF5 , which is considered a state-of-the-art for the keyword extraction. Next, as part of the proposed RaKUn algorithm we extend the extraction from unigrams also to bigram and threegram keywords based on load centrality scores computed for considered tokens. Last but not least, we demonstrate how arbitrary textual corpora can be transformed into weighted graphs whilst maintaining global sequential information, offering the opportunity to exploit potential context not naturally present in statistical methods.
The paper is structured as follows. We first present the text to graph transformation approach (Section SECREF2 ), followed by the introduction of the RaKUn keyword extractor (Section SECREF3 ). We continue with qualitative evaluation (Section SECREF4 ) and quantitative evaluation (Section SECREF5 ), before concluding the paper in Section 6.
Transforming texts to graphs
We first discuss how the texts are transformed to graphs, on which RaKUn operates. Next, we formally state the problem of keyword extraction and discuss its relation to graph centrality metrics.
Representing text
In this work we consider directed graphs. Let INLINEFORM0 represent a graph comprised of a set of vertices INLINEFORM1 and a set of edges ( INLINEFORM2 ), which are ordered pairs. Further, each edge can have a real-valued weight assigned. Let INLINEFORM3 represent a document comprised of tokens INLINEFORM4 . The order in which tokens in text appear is known, thus INLINEFORM5 is a totally ordered set. A potential way of constructing a graph from a document is by simply observing word co-occurrences. When two words co-occur, they are used as an edge. However, such approaches do not take into account the sequence nature of the words, meaning that the order is lost. We attempt to take this aspect into account as follows. The given corpus is traversed, and for each element INLINEFORM6 , its successor INLINEFORM7 , together with a given element, forms a directed edge INLINEFORM8 . Finally, such edges are weighted according to the number of times they appear in a given corpus. Thus the graph, constructed after traversing a given corpus, consists of all local neighborhoods (order one), merged into a single joint structure. Global contextual information is potentially kept intact (via weights), even though it needs to be detected via network analysis as proposed next.
Improving graph quality by meta vertex construction
A naïve approach to constructing a graph, as discussed in the previous section, commonly yields noisy graphs, rendering learning tasks harder. Therefore, we next discuss the selected approaches we employ in order to reduce both the computational complexity and the spatial complexity of constructing the graph, as well as increasing its quality (for the given down-stream task).
First, we consider the following heuristics which reduce the complexity of the graph that we construct for keyword extraction: Considered token length (while traversing the document INLINEFORM0 , only tokens of length INLINEFORM1 are considered), and next, lemmatization (tokens can be lemmatized, offering spatial benefits and avoiding redundant vertices in the final graph). The two modifications yield a potentially “simpler” graph, which is more suitable and faster for mining.
Even if the optional lemmatization step is applied, one can still aim at further reducing the graph complexity by merging similar vertices. This step is called meta vertex construction. The motivation can be explained by the fact that even similar lemmas can be mapped to the same keyword (e.g., mechanic and mechanical; normal and abnormal). This step also captures spelling errors (similar vertices that will not be handled by lemmatization), spelling differences (e.g., British vs. American English), non-standard writing (e.g., in Twitter data), mistakes in lemmatization, or an unavailable or omitted lemmatization step.
The meta-vertex construction step works as follows. Let INLINEFORM0 represent the set of vertices, as defined above. A meta vertex INLINEFORM1 is comprised of a set of vertices that are elements of INLINEFORM2 , i.e. INLINEFORM3 . Let INLINEFORM4 denote the INLINEFORM5 -th meta vertex. We construct a given INLINEFORM6 so that for each INLINEFORM7 , INLINEFORM8 's initial edges (prior to merging it into a meta vertex) are rewired to the newly added INLINEFORM9 . Note that such edges connect to vertices which are not a part of INLINEFORM10 . Thus, both the number of vertices, as well as edges get reduced substantially. This feature is implemented via the following procedure:
Meta vertex candidate identification. Edit distance and word lengths distance are used to determine whether two words should be merged into a meta vertex (only if length distance threshold is met, the more expensive edit distance is computed).
The meta vertex creation. As common identifiers, we use the stemmed version of the original vertices and if there is more than one resulting stem, we select the vertex from the identified candidates that has the highest centrality value in the graph and its stemmed version is introduced as a novel vertex (meta vertex).
The edges of the words entailed in the meta vertex are next rewired to the meta vertex.
The two original words are removed from the graph.
The procedure is repeated for all candidate pairs.
A schematic representation of meta vertex construction is shown in Figure FIGREF3 . The yellow and blue groups of vertices both form a meta vertex, the resulting (right) graph is thus substantially reduced, both with respect to the number of vertices, as well as the number of edges.
Keyword identification
Up to this point, we discussed how the graph used for keyword extraction is constructed. In this work, we exploit the notion of load centrality, a fast measure for estimating the importance of vertices in graphs: intuitively, it measures how much of the shortest-path traffic between all pairs of vertices passes through a given vertex.
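Purely as an illustration (not the original code), load centrality is available off the shelf in libraries such as networkx, so ranking candidate vertices reduces to sorting a dictionary of scores.

```python
# Illustrative only: rank vertices by load centrality and keep the top k.
import networkx as nx

def top_k_by_load_centrality(graph, k=10):
    centrality = nx.load_centrality(graph, weight="weight")
    ranked = sorted(centrality.items(), key=lambda item: item[1], reverse=True)
    return [vertex for vertex, score in ranked[:k]]
```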
Qualitative evaluation
RaKUn can be used also for visualization of keywords in a given document or document corpus. A visualization of extracted keywords is applied to an example from wiki20 BIBREF19 (for dataset description see Section SECREF15 ), where we visualize both the global corpus graph, as well as a local (document) view where keywords are emphasized, see Figures FIGREF13 and FIGREF14 , respectively. It can be observed that the global graph's topology is far from uniform — even though we did not perform any tests of scale-freeness, we believe the constructed graphs are subject to distinct topologies, where keywords play prominent roles.
Quantitative evaluation
This section discusses the experimental setting used to validate the proposed RaKUn approach against state-of-the-art baselines. We first describe the datasets, and continue with the presentation of the experimental setting and results.
Datasets
For RaKUn evaluation, we used 14 gold standard datasets from the list of BIBREF5 , BIBREF6 , from which we selected datasets in English. Detailed dataset descriptions and statistics can be found in Table TABREF17 , while full statistics and files for download can be found online. Most datasets are from the domain of computer science or contain multiple domains. They are very diverse in terms of the number of documents (ranging from 20 documents in wiki20 to 2,000 in Inspec), in terms of the average number of gold standard keywords per document (from 5.07 in kdd to 48.92 in 500N-KPCrowd-v1.1), and in terms of the average length of the documents (from 75.97 in kdd to 8332.34 in SemEval2017).
Experimental setting
We adopted the same evaluation procedure as used for the series of results recently introduced by the YAKE authors BIBREF6 . Five-fold cross validation was used to determine the overall performance, for which we measured Precision, Recall and F1 score, with the latter reported in Table TABREF24 . Keywords were stemmed prior to evaluation. As the number of keywords in a gold standard document is not necessarily equal to the number of extracted keywords (in our experiments INLINEFORM0 =10), when computing recall we divide the number of correctly extracted keywords by the keyword-count parameter INLINEFORM1 whenever the number of gold standard keywords exceeds INLINEFORM2 .
Selecting default configuration. First, we used a dedicated run for determining the default parameters. The cross validation was performed as follows. For each train-test dataset split, we kept the documents in the test fold intact, whilst performing a grid search on the train part to find the best parametrization. Finally, the selected configuration was used to extract keywords on the unseen test set. For each train-test split, we thus obtained the number of true and false positives, as well as true and false negatives, which were summed up and, after all folds were considered, used to obtain final F1 scores, which served for default parameter selection. The grid search was conducted over the following parameter ranges: Num keywords: 10; Num tokens (the number of tokens a keyword can consist of); Count threshold (minimum support used to determine potential bigram candidates); Word length difference threshold (maximum difference in word length used to determine whether a given pair of words shall be aggregated): INLINEFORM0 ; Edit length difference (maximum edit distance allowed to consider a given pair of words for aggregation): INLINEFORM1 ; Lemmatization: [yes, no].
Even if one can use the described grid-search fine-tuning procedure to select the best setting for individual datasets, we observed that in nearly all cases the best settings were the same. We therefore selected this setting as the default, which can also be used on new unlabeled data. The default parameter setting was as follows: the number of tokens was set to 1, the Count threshold was thus not needed (only unigrams), and for meta vertex construction the Word length difference threshold was set to 3 and the Edit distance to 2. Words were initially lemmatized. Next, we report the results using these selected parameters (the same across all datasets), by which we also test the general usefulness of the approach.
Results
The results are presented in Table TABREF24 , where we report on F1 with the default parameter setting of RaKUn, together with the results from related work, as reported in the github table of the YAKE BIBREF5 .
We first observe that, over this selection of datasets, the proposed RaKUn achieves the best result on more datasets than any other method. We also see that it performs notably better on some of the datasets, whereas on the remainder it performs worse than state-of-the-art approaches. Such results demonstrate that the proposed method finds keywords differently, indicating that load centrality, combined with meta vertices, represents a promising research avenue. The datasets where the proposed method outperforms the current state-of-the-art results are: 500N-KPCrowd-v1.1, Schutz2008, fao30 and wiki20. In addition, RaKUn also achieves competitive results on citeulike180. A look at the gold standard keywords in these datasets reveals that they contain many single-word units, which is why the default configuration (which returns unigrams only) was able to perform so well.
Four of these five datasets (500N-KPCrowd-v1.1, Schutz2008, fao30, wiki20) are also the ones with the highest average number of keywords per document with at least 33.23 keywords per document, while the fifth dataset (citeulike180) also has a relatively large value (18.42). Similarly, four of the five well-performing datasets (Schutz2008, fao30, citeulike180, wiki20) include long documents (more than 3,900 words), with the exception being 500N-KPCrowd-v1.1. For details, see Table TABREF17 . We observe that the proposed RaKUn outperforms the majority of other competitive graph-based methods. For example, the most similar variants Topical PageRank and TextRank do not perform as well on the majority of the considered datasets. Furthermore, RaKUn also outperforms KEA, a supervised keyword learner (e.g., very high difference in performance on 500N-KPCrowd-v1.1 and Schutz2008 datasets), indicating unsupervised learning from the graph's structure offers a more robust keyword extraction method than learning a classifier directly.
Conclusions and further work
In this work we proposed RaKUn, a novel unsupervised keyword extraction algorithm which exploits the efficient computation of load centrality, combined with the introduction of meta vertices, which notably reduce corpus graph sizes. The method is fast, and performs well compared to state-of-the-art such as YAKE and graph-based keyword extractors. In further work, we will test the method on other languages. We also believe additional semantic background knowledge information could be used to prune the graph's structure even further, and potentially introduce keywords that are inherently not even present in the text (cf. BIBREF29 ). The proposed method does not attempt to exploit meso-scale graph structure, such as convex skeletons or communities, which are known to play prominent roles in real-world networks and could allow for vertex aggregation based on additional graph properties. We believe the proposed method could also be extended using the Ricci-Oliver BIBREF30 flows on weighted graphs. | Unanswerable |
c000a43aff3cb0ad1cee5379f9388531b5521e9a | c000a43aff3cb0ad1cee5379f9388531b5521e9a_0 | Q: how are the bidirectional lms obtained?
Text: Introduction
Due to their simplicity and efficacy, pre-trained word embeddings have become ubiquitous in NLP systems. Many prior studies have shown that they capture useful semantic and syntactic information BIBREF0 , BIBREF1 , and including them in NLP systems has been shown to be enormously helpful for a variety of downstream tasks BIBREF2 .
However, in many NLP tasks it is essential to represent not just the meaning of a word, but also the word in context. For example, in the two phrases “A Central Bank spokesman” and “The Central African Republic”, the word `Central' is used as part of both an Organization and Location. Accordingly, current state of the art sequence tagging models typically include a bidirectional recurrent neural network (RNN) that encodes token sequences into a context sensitive representation before making token specific predictions BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 .
Although the token representation is initialized with pre-trained embeddings, the parameters of the bidirectional RNN are typically learned only on labeled data. Previous work has explored methods for jointly learning the bidirectional RNN with supplemental labeled data from other tasks BIBREF7 , BIBREF3 .
In this paper, we explore an alternate semi-supervised approach which does not require additional labeled data. We use a neural language model (LM), pre-trained on a large, unlabeled corpus to compute an encoding of the context at each position in the sequence (hereafter an LM embedding) and use it in the supervised sequence tagging model. Since the LM embeddings are used to compute the probability of future words in a neural LM, they are likely to encode both the semantic and syntactic roles of words in context.
Our main contribution is to show that the context sensitive representation captured in the LM embeddings is useful in the supervised sequence tagging setting. When we include the LM embeddings in our system, overall performance increases from 90.87% to 91.93% INLINEFORM0 for the CoNLL 2003 NER task, a more than 1% absolute F1 increase, and a substantial improvement over the previous state of the art. We also establish a new state-of-the-art result (96.37% INLINEFORM1 ) for the CoNLL 2000 Chunking task.
As a secondary contribution, we show that using both forward and backward LM embeddings boosts performance over a forward only LM. We also demonstrate that domain specific pre-training is not necessary by applying a LM trained in the news domain to scientific papers.
Overview
The main components in our language-model-augmented sequence tagger (TagLM) are illustrated in Fig. FIGREF4 . After pre-training word embeddings and a neural LM on large, unlabeled corpora (Step 1), we extract the word and LM embeddings for every token in a given input sequence (Step 2) and use them in the supervised sequence tagging model (Step 3).
Baseline sequence tagging model
Our baseline sequence tagging model is a hierarchical neural tagging model, closely following a number of recent studies BIBREF4 , BIBREF5 , BIBREF3 , BIBREF8 (left side of Figure FIGREF5 ).
Given a sentence of tokens INLINEFORM0 it first forms a representation, INLINEFORM1 , for each token by concatenating a character based representation INLINEFORM2 with a token embedding INLINEFORM3 : DISPLAYFORM0
The character representation INLINEFORM0 captures morphological information and is either a convolutional neural network (CNN) BIBREF4 , BIBREF8 or RNN BIBREF3 , BIBREF5 . It is parameterized by INLINEFORM1 with parameters INLINEFORM2 . The token embeddings, INLINEFORM3 , are obtained as a lookup INLINEFORM4 , initialized using pre-trained word embeddings, and fine tuned during training BIBREF2 .
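For concreteness, a minimal PyTorch-style sketch of this token representation follows; module names, dimensions and the choice of a character CNN with max pooling are assumptions, and the systems cited above differ in detail.

```python
# Illustrative sketch of x_k = [c_k ; w_k]: a character CNN representation
# concatenated with a (pre-trained, fine-tuned) word embedding lookup.
import torch
import torch.nn as nn

class TokenRepresentation(nn.Module):
    def __init__(self, n_chars, n_words, char_dim=25, char_filters=30, word_dim=100):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size=3, padding=1)
        self.word_emb = nn.Embedding(n_words, word_dim)  # initialized from pre-trained vectors

    def forward(self, char_ids, word_ids):
        # char_ids: (batch, seq_len, max_chars); word_ids: (batch, seq_len)
        b, s, c = char_ids.shape
        chars = self.char_emb(char_ids.view(b * s, c)).transpose(1, 2)
        c_k = self.char_cnn(chars).max(dim=2).values.view(b, s, -1)  # max pool over characters
        w_k = self.word_emb(word_ids)
        return torch.cat([c_k, w_k], dim=-1)                         # x_k = [c_k ; w_k]
```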
To learn a context sensitive representation, we employ multiple layers of bidirectional RNNs. For each token position, INLINEFORM0 , the hidden state INLINEFORM1 of RNN layer INLINEFORM2 is formed by concatenating the hidden states from the forward ( INLINEFORM3 ) and backward ( INLINEFORM4 ) RNNs. As a result, the bidirectional RNN is able to use both past and future information to make a prediction at token INLINEFORM5 . More formally, for the first RNN layer that operates on INLINEFORM6 to output INLINEFORM7 : DISPLAYFORM0
The second RNN layer is similar and uses INLINEFORM0 to output INLINEFORM1 . In this paper, we use INLINEFORM2 layers of RNNs in all experiments and parameterize INLINEFORM3 as either Gated Recurrent Units (GRU) BIBREF9 or Long Short-Term Memory units (LSTM) BIBREF10 depending on the task.
Finally, the output of the final RNN layer INLINEFORM0 is used to predict a score for each possible tag using a single dense layer. Due to the dependencies between successive tags in our sequence labeling tasks (e.g. using the BIOES labeling scheme, it is not possible for I-PER to follow B-LOC), it is beneficial to model and decode each sentence jointly instead of independently predicting the label for each token. Accordingly, we add another layer with parameters for each label bigram, computing the sentence conditional random field (CRF) loss BIBREF11 using the forward-backward algorithm at training time, and using the Viterbi algorithm to find the most likely tag sequence at test time, similar to BIBREF2 .
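As background for the decoding step mentioned above (a generic sketch, not the original implementation), Viterbi decoding over per-token tag scores and a tag-bigram transition matrix can be written as follows.

```python
# Generic Viterbi decoding for a linear-chain CRF (illustrative sketch).
# emissions[t, j]: score of tag j at position t; transitions[i, j]: score of the
# tag bigram i -> j. Returns the highest-scoring tag sequence as a list of indices.
import numpy as np

def viterbi_decode(emissions, transitions):
    seq_len, num_tags = emissions.shape
    score = emissions[0].copy()
    backpointers = []
    for t in range(1, seq_len):
        total = score[:, None] + transitions + emissions[t][None, :]
        backpointers.append(total.argmax(axis=0))  # best previous tag for each current tag
        score = total.max(axis=0)
    best_tag = int(score.argmax())
    path = [best_tag]
    for bp in reversed(backpointers):
        best_tag = int(bp[best_tag])
        path.append(best_tag)
    return path[::-1]
```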
Bidirectional LM
A language model computes the probability of a token sequence INLINEFORM0 INLINEFORM1
Recent state of the art neural language models BIBREF12 use a similar architecture to our baseline sequence tagger where they pass a token representation (either from a CNN over characters or as token embeddings) through multiple layers of LSTMs to embed the history INLINEFORM0 into a fixed dimensional vector INLINEFORM1 . This is the forward LM embedding of the token at position INLINEFORM2 and is the output of the top LSTM layer in the language model. Finally, the language model predicts the probability of token INLINEFORM3 using a softmax layer over words in the vocabulary.
The need to capture future context in the LM embeddings suggests it is beneficial to also consider a backward LM in addition to the traditional forward LM. A backward LM predicts the previous token given the future context. Given a sentence with INLINEFORM0 tokens, it computes INLINEFORM1
A backward LM can be implemented in an analogous way to a forward LM and produces the backward LM embedding INLINEFORM0 , for the sequence INLINEFORM1 , the output embeddings of the top layer LSTM.
In our final system, after pre-training the forward and backward LMs separately, we remove the top layer softmax and concatenate the forward and backward LM embeddings to form bidirectional LM embeddings, i.e., INLINEFORM0 . Note that in our formulation, the forward and backward LMs are independent, without any shared parameters.
Combining LM with sequence model
Our combined system, TagLM, uses the LM embeddings as additional inputs to the sequence tagging model. In particular, we concatenate the LM embeddings INLINEFORM0 with the output from one of the bidirectional RNN layers in the sequence model. In our experiments, we found that introducing the LM embeddings at the output of the first layer performed the best. More formally, we simply replace ( EQREF6 ) with DISPLAYFORM0
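Putting the pieces together, the core TagLM change is a pair of concatenations; the sketch below is illustrative only, and the top_layer_states accessor is hypothetical shorthand for extracting the top-layer outputs of the pre-trained (frozen) language models.

```python
# Illustrative sketch of the TagLM combination: the bidirectional LM embedding is
# the concatenation of the frozen forward and backward LM top-layer states, and it
# is concatenated with the output of the first tagger RNN layer.
import torch

def taglm_second_layer_input(h1, forward_lm, backward_lm, tokens):
    with torch.no_grad():                              # LM parameters stay fixed
        h_fwd = forward_lm.top_layer_states(tokens)    # hypothetical accessor, (batch, seq, d_fwd)
        h_bwd = backward_lm.top_layer_states(tokens)   # hypothetical accessor, (batch, seq, d_bwd)
    h_lm = torch.cat([h_fwd, h_bwd], dim=-1)           # bidirectional LM embedding
    return torch.cat([h1, h_lm], dim=-1)               # input to the second RNN layer
```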
There are alternate possibilities for adding the LM embeddings to the sequence model. One possibility adds a non-linear mapping after the concatenation and before the second RNN (e.g. replacing ( EQREF9 ) with INLINEFORM0 where INLINEFORM1 is a non-linear function). Another possibility introduces an attention-like mechanism that weights all the LM embeddings in a sentence before including them in the sequence model. Our initial results with the simple concatenation were encouraging, so we did not explore these alternatives in this study, preferring to leave them for future work.
Experiments
We evaluate our approach on two well-benchmarked sequence tagging tasks, the CoNLL 2003 NER task BIBREF13 and the CoNLL 2000 Chunking task BIBREF14 . We report the official evaluation metric (micro-averaged INLINEFORM0 ). In both cases, we use the BIOES labeling scheme for the output tags, following previous work which showed it outperforms other options BIBREF15 . Following BIBREF8 , we use the Senna word embeddings BIBREF2 and pre-process the text by lowercasing all tokens and replacing all digits with 0.
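The normalization described above is trivial but shown here for concreteness (an illustrative one-liner, not taken from the cited systems).

```python
# Pre-processing as described: lowercase tokens and replace every digit with 0.
import re

def normalize_token(token):
    return re.sub(r"\d", "0", token.lower())

# normalize_token("CoNLL-2003") -> "conll-0000"
```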
Overall system results
Tables TABREF15 and TABREF16 compare results from TagLM with previously published state of the art results without additional labeled data or task specific gazetteers. Tables TABREF17 and TABREF18 compare results of TagLM to other systems that include additional labeled data or gazetteers. In both tasks, TagLM establishes a new state of the art using bidirectional LMs (the forward CNN-BIG-LSTM and the backward LSTM-2048-512).
In the CoNLL 2003 NER task, our model scores 91.93 mean INLINEFORM0 , which is a statistically significant increase over the previous best result of 91.62 INLINEFORM1 from BIBREF8 that used gazetteers (at 95%, two-sided Welch t-test, INLINEFORM2 ).
In the CoNLL 2000 Chunking task, TagLM achieves 96.37 mean INLINEFORM0 , exceeding all previously published results without additional labeled data by more than 1% absolute INLINEFORM1 . The improvement over the previous best result of 95.77 in BIBREF6 that jointly trains with Penn Treebank (PTB) POS tags is statistically significant at 95% ( INLINEFORM2 assuming standard deviation of INLINEFORM3 ).
Importantly, the LM embeddings amount to an average absolute improvement of 1.06 and 1.37 INLINEFORM0 in the NER and Chunking tasks, respectively.
Although we do not use external labeled data or gazetteers, we found that TagLM outperforms previous state of the art results in both tasks even when those external resources (labeled data or task specific gazetteers) are available to the other systems. Furthermore, Tables TABREF17 and TABREF18 show that, in most cases, the improvements we obtain by adding LM embeddings are larger than the improvements previously obtained by adding other forms of transfer or joint learning. For example, BIBREF3 noted an improvement of only 0.06 INLINEFORM0 in the NER task when transfer learning from both CoNLL 2000 chunks and PTB POS tags, and BIBREF8 reported an increase of 0.71 INLINEFORM1 when adding gazetteers to their baseline. In the Chunking task, previous work has reported from 0.28 to 0.75 improvement in INLINEFORM2 when including supervised labels from the PTB POS tags or CoNLL 2003 entities BIBREF3 , BIBREF7 , BIBREF6 .
Analysis
To elucidate the characteristics of our LM augmented sequence tagger, we ran a number of additional experiments on the CoNLL 2003 NER task.
In this experiment, we concatenate the LM embeddings at different locations in the baseline sequence tagger. In particular, we used the LM embeddings INLINEFORM0 to:
augment the input of the first RNN layer; i.e., INLINEFORM0 ,
augment the output of the first RNN layer; i.e., INLINEFORM0 , and
augment the output of the second RNN layer; i.e., INLINEFORM0 .
Table TABREF20 shows that the second alternative performs best. We speculate that the second RNN layer in the sequence tagging model is able to capture interactions between task specific context as expressed in the first RNN layer and general context as expressed in the LM embeddings in a way that improves overall system performance. These results are consistent with BIBREF7 who found that chunking performance was sensitive to the level at which additional POS supervision was added.
In this experiment, we compare six different configurations of the forward and backward language models (including the baseline model which does not use any language models). The results are reported in Table TABREF21 .
We find that adding backward LM embeddings consistently outperforms forward-only LM embeddings, with INLINEFORM0 improvements between 0.22 and 0.27%, even with the relatively small backward LSTM-2048-512 LM.
LM size is important, and replacing the forward LSTM-2048-512 with CNN-BIG-LSTM (test perplexities of 47.7 to 30.0 on 1B Word Benchmark) improves INLINEFORM0 by 0.26 - 0.31%, about as much as adding backward LM. Accordingly, we hypothesize (but have not tested) that replacing the backward LSTM-2048-512 with a backward LM analogous to the CNN-BIG-LSTM would further improve performance.
To highlight the importance of including language models trained on a large scale data, we also experimented with training a language model on just the CoNLL 2003 training and development data. Due to the much smaller size of this data set, we decreased the model size to 512 hidden units with a 256 dimension projection and normalized tokens in the same manner as input to the sequence tagging model (lower-cased, with all digits replaced with 0). The test set perplexities for the forward and backward models (measured on the CoNLL 2003 test data) were 106.9 and 104.2, respectively. Including embeddings from these language models decreased performance slightly compared to the baseline system without any LM. This result supports the hypothesis that adding language models help because they learn composition functions (i.e., the RNN parameters in the language model) from much larger data compared to the composition functions in the baseline tagger, which are only learned from labeled data.
To understand the importance of including a task specific sequence RNN we ran an experiment that removed the task specific sequence RNN and used only the LM embeddings with a dense layer and CRF to predict output tags. In this setup, performance was very low, 88.17 INLINEFORM0 , well below our baseline. This result confirms that the RNNs in the baseline tagger encode essential information which is not encoded in the LM embeddings. This is unsurprising since the RNNs in the baseline tagger are trained on labeled examples, unlike the RNN in the language model which is only trained on unlabeled examples. Note that the LM weights are fixed in this experiment.
A priori, we expect the addition of LM embeddings to be most beneficial in cases where the task specific annotated datasets are small. To test this hypothesis, we replicated the setup from BIBREF3 that samples 1% of the CoNLL 2003 training set and compared the performance of TagLM to our baseline without LM. In this scenario, test INLINEFORM0 increased 3.35% (from 67.66 to 71.01%) compared to an increase of 1.06% INLINEFORM1 for a similar comparison with the full training dataset. The analogous increases in BIBREF3 are 3.97% for cross-lingual transfer from CoNLL 2002 Spanish NER and 6.28% INLINEFORM2 for transfer from PTB POS tags. However, they found only a 0.06% INLINEFORM3 increase when using the full training data and transferring from both CoNLL 2000 chunks and PTB POS tags. Taken together, this suggests that for very small labeled training sets, transferring from other tasks yields a large improvement, but this improvement almost disappears when the training data is large. On the other hand, our approach is less dependent on the training set size and significantly improves performance even with larger training sets.
Our TagLM formulation increases the number of parameters in the second RNN layer INLINEFORM0 due to the increase in the input dimension INLINEFORM1 if all other hyperparameters are held constant. To confirm that this did not have a material impact on the results, we ran two additional experiments. In the first, we trained a system without a LM but increased the second RNN layer hidden dimension so that number of parameters was the same as in TagLM. In this case, performance decreased slightly (by 0.15% INLINEFORM2 ) compared to the baseline model, indicating that solely increasing parameters does not improve performance. In the second experiment, we decreased the hidden dimension of the second RNN layer in TagLM to give it the same number of parameters as the baseline no LM model. In this case, test INLINEFORM3 increased slightly to INLINEFORM4 indicating that the additional parameters in TagLM are slightly hurting performance.
One artifact of our evaluation framework is that both the labeled data in the chunking and NER tasks and the unlabeled text in the 1 Billion Word Benchmark used to train the bidirectional LMs are derived from news articles. To test the sensitivity to the LM training domain, we also applied TagLM with a LM trained on news articles to the SemEval 2017 Shared Task 10, ScienceIE. ScienceIE requires end-to-end joint entity and relationship extraction from scientific publications across three diverse fields (computer science, material sciences, and physics) and defines three broad entity types (Task, Material and Process). For this task, TagLM increased INLINEFORM0 on the development set by 4.12% (from 49.93 to to 54.05%) for entity extraction over our baseline without LM embeddings and it was a major component in our winning submission to ScienceIE, Scenario 1 BIBREF20 . We conclude that LM embeddings can improve the performance of a sequence tagger even when the data comes from a different domain.
Conclusion
In this paper, we proposed a simple and general semi-supervised method using pre-trained neural language models to augment token representations in sequence tagging models. Our method significantly outperforms current state of the art models in two popular datasets for NER and Chunking. Our analysis shows that adding a backward LM in addition to traditional forward LMs consistently improves performance. The proposed method is robust even when the LM is trained on unlabeled data from a different domain, or when the baseline model is trained on a large number of labeled examples.
Acknowledgments
We thank Chris Dyer, Julia Hockenmaier, Jayant Krishnamurthy, Matt Gardner and Oren Etzioni for comments on earlier drafts that led to substantial improvements in the final version. | They pre-train forward and backward LMs separately, remove top layer softmax, and concatenate to obtain the bidirectional LMs. |
a5b67470a1c4779877f0d8b7724879bbb0a3b313 | a5b67470a1c4779877f0d8b7724879bbb0a3b313_0 | Q: what metrics are used in evaluation?
Text: Introduction
Due to their simplicity and efficacy, pre-trained word embeddings have become ubiquitous in NLP systems. Many prior studies have shown that they capture useful semantic and syntactic information BIBREF0 , BIBREF1 , and including them in NLP systems has been shown to be enormously helpful for a variety of downstream tasks BIBREF2 .
However, in many NLP tasks it is essential to represent not just the meaning of a word, but also the word in context. For example, in the two phrases “A Central Bank spokesman” and “The Central African Republic”, the word `Central' is used as part of both an Organization and Location. Accordingly, current state of the art sequence tagging models typically include a bidirectional recurrent neural network (RNN) that encodes token sequences into a context sensitive representation before making token specific predictions BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 .
Although the token representation is initialized with pre-trained embeddings, the parameters of the bidirectional RNN are typically learned only on labeled data. Previous work has explored methods for jointly learning the bidirectional RNN with supplemental labeled data from other tasks BIBREF7 , BIBREF3 .
In this paper, we explore an alternate semi-supervised approach which does not require additional labeled data. We use a neural language model (LM), pre-trained on a large, unlabeled corpus to compute an encoding of the context at each position in the sequence (hereafter an LM embedding) and use it in the supervised sequence tagging model. Since the LM embeddings are used to compute the probability of future words in a neural LM, they are likely to encode both the semantic and syntactic roles of words in context.
Our main contribution is to show that the context sensitive representation captured in the LM embeddings is useful in the supervised sequence tagging setting. When we include the LM embeddings in our system, overall performance increases from 90.87% to 91.93% INLINEFORM0 for the CoNLL 2003 NER task, a more than 1% absolute F1 increase, and a substantial improvement over the previous state of the art. We also establish a new state of the art result (96.37% INLINEFORM1 ) for the CoNLL 2000 Chunking task.
As a secondary contribution, we show that using both forward and backward LM embeddings boosts performance over a forward only LM. We also demonstrate that domain specific pre-training is not necessary by applying a LM trained in the news domain to scientific papers.
Overview
The main components in our language-model-augmented sequence tagger (TagLM) are illustrated in Fig. FIGREF4 . After pre-training word embeddings and a neural LM on large, unlabeled corpora (Step 1), we extract the word and LM embeddings for every token in a given input sequence (Step 2) and use them in the supervised sequence tagging model (Step 3).
Baseline sequence tagging model
Our baseline sequence tagging model is a hierarchical neural tagging model, closely following a number of recent studies BIBREF4 , BIBREF5 , BIBREF3 , BIBREF8 (left side of Figure FIGREF5 ).
Given a sentence of tokens INLINEFORM0 it first forms a representation, INLINEFORM1 , for each token by concatenating a character based representation INLINEFORM2 with a token embedding INLINEFORM3 : DISPLAYFORM0
The character representation INLINEFORM0 captures morphological information and is either a convolutional neural network (CNN) BIBREF4 , BIBREF8 or RNN BIBREF3 , BIBREF5 . It is parameterized by INLINEFORM1 with parameters INLINEFORM2 . The token embeddings, INLINEFORM3 , are obtained as a lookup INLINEFORM4 , initialized using pre-trained word embeddings, and fine tuned during training BIBREF2 .
To learn a context sensitive representation, we employ multiple layers of bidirectional RNNs. For each token position, INLINEFORM0 , the hidden state INLINEFORM1 of RNN layer INLINEFORM2 is formed by concatenating the hidden states from the forward ( INLINEFORM3 ) and backward ( INLINEFORM4 ) RNNs. As a result, the bidirectional RNN is able to use both past and future information to make a prediction at token INLINEFORM5 . More formally, for the first RNN layer that operates on INLINEFORM6 to output INLINEFORM7 : DISPLAYFORM0
The second RNN layer is similar and uses INLINEFORM0 to output INLINEFORM1 . In this paper, we use INLINEFORM2 layers of RNNs in all experiments and parameterize INLINEFORM3 as either Gated Recurrent Units (GRU) BIBREF9 or Long Short-Term Memory units (LSTM) BIBREF10 depending on the task.
Finally, the output of the final RNN layer INLINEFORM0 is used to predict a score for each possible tag using a single dense layer. Due to the dependencies between successive tags in our sequence labeling tasks (e.g. using the BIOES labeling scheme, it is not possible for I-PER to follow B-LOC), it is beneficial to model and decode each sentence jointly instead of independently predicting the label for each token. Accordingly, we add another layer with parameters for each label bigram, computing the sentence conditional random field (CRF) loss BIBREF11 using the forward-backward algorithm at training time, and using the Viterbi algorithm to find the most likely tag sequence at test time, similar to BIBREF2 .
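As a concrete illustration of the baseline tagger described above, the following PyTorch sketch wires together a character-level CNN, pre-trained token embeddings, two bidirectional RNN layers (LSTM units are used here for concreteness), and a dense projection to per-tag scores. It is a minimal reimplementation under assumed layer names and dimensions, not the authors' released code; the CRF loss and Viterbi decoding discussed above are omitted for brevity.

import torch
import torch.nn as nn

class BaselineTagger(nn.Module):
    # Hierarchical tagger: x_k = [c_k; w_k] -> two biRNN layers -> per-tag scores.
    def __init__(self, vocab_size, char_vocab_size, num_tags,
                 token_dim=100, char_dim=25, char_hidden=50, rnn_hidden=200):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, token_dim)  # w_k, initialized from pre-trained vectors
        self.char_emb = nn.Embedding(char_vocab_size, char_dim)
        self.char_cnn = nn.Conv1d(char_dim, char_hidden, kernel_size=3, padding=1)
        self.rnn1 = nn.LSTM(char_hidden + token_dim, rnn_hidden,
                            bidirectional=True, batch_first=True)  # first biRNN layer
        self.rnn2 = nn.LSTM(2 * rnn_hidden, rnn_hidden,
                            bidirectional=True, batch_first=True)  # second biRNN layer
        self.to_tags = nn.Linear(2 * rnn_hidden, num_tags)         # dense layer over the top states

    def forward(self, token_ids, char_ids):
        # token_ids: (batch, seq_len); char_ids: (batch, seq_len, max_word_len)
        b, t, w = char_ids.shape
        c = self.char_emb(char_ids).view(b * t, w, -1).transpose(1, 2)
        c = torch.relu(self.char_cnn(c)).max(dim=2).values.view(b, t, -1)  # c_k
        x = torch.cat([c, self.token_emb(token_ids)], dim=-1)              # x_k
        h1, _ = self.rnn1(x)
        h2, _ = self.rnn2(h1)
        return self.to_tags(h2)  # unnormalized tag scores; a CRF layer would consume these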
Bidirectional LM
A language model computes the probability of a token sequence INLINEFORM0 INLINEFORM1
Recent state of the art neural language models BIBREF12 use a similar architecture to our baseline sequence tagger where they pass a token representation (either from a CNN over characters or as token embeddings) through multiple layers of LSTMs to embed the history INLINEFORM0 into a fixed dimensional vector INLINEFORM1 . This is the forward LM embedding of the token at position INLINEFORM2 and is the output of the top LSTM layer in the language model. Finally, the language model predicts the probability of token INLINEFORM3 using a softmax layer over words in the vocabulary.
The need to capture future context in the LM embeddings suggests it is beneficial to also consider a backward LM in addition to the traditional forward LM. A backward LM predicts the previous token given the future context. Given a sentence with INLINEFORM0 tokens, it computes INLINEFORM1
A backward LM can be implemented in an analogous way to a forward LM and produces the backward LM embedding INLINEFORM0 , for the sequence INLINEFORM1 , the output embeddings of the top layer LSTM.
In our final system, after pre-training the forward and backward LMs separately, we remove the top layer softmax and concatenate the forward and backward LM embeddings to form bidirectional LM embeddings, i.e., INLINEFORM0 . Note that in our formulation, the forward and backward LMs are independent, without any shared parameters.
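A minimal sketch of this step follows. The TinyLM class is only a toy stand-in for the large pre-trained forward and backward models (e.g. CNN-BIG-LSTM or LSTM-2048-512); its name, size, and interface are assumptions for illustration. The key point is that the softmax layers are discarded and the two top-layer hidden states are concatenated per token, with no parameters shared between the two directions.

import torch
import torch.nn as nn

class TinyLM(nn.Module):
    # Toy stand-in for a pre-trained LM: embedding + LSTM, top softmax removed.
    def __init__(self, vocab_size, dim=64, backward=False):
        super().__init__()
        self.backward = backward               # True -> reads the sentence right to left
        self.emb = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, token_ids):
        if self.backward:
            token_ids = token_ids.flip(1)      # reverse the token order
        h, _ = self.lstm(self.emb(token_ids))  # top-layer hidden states
        return h.flip(1) if self.backward else h  # re-align to left-to-right positions

def bidirectional_lm_embeddings(forward_lm, backward_lm, token_ids):
    with torch.no_grad():                      # LM weights stay fixed in the tagger
        return torch.cat([forward_lm(token_ids), backward_lm(token_ids)], dim=-1)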
Combining LM with sequence model
Our combined system, TagLM, uses the LM embeddings as additional inputs to the sequence tagging model. In particular, we concatenate the LM embeddings INLINEFORM0 with the output from one of the bidirectional RNN layers in the sequence model. In our experiments, we found that introducing the LM embeddings at the output of the first layer performed the best. More formally, we simply replace ( EQREF6 ) with DISPLAYFORM0
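The sketch below shows the only change this makes to the baseline encoder: the output of the first bidirectional RNN layer is concatenated with the (fixed) LM embedding before being fed to the second layer. Class and argument names are illustrative assumptions, not the released implementation.

import torch
import torch.nn as nn

class TagLMEncoder(nn.Module):
    def __init__(self, input_dim, lm_dim, rnn_hidden=200):
        super().__init__()
        self.rnn1 = nn.LSTM(input_dim, rnn_hidden, bidirectional=True, batch_first=True)
        # the second layer's input grows by lm_dim relative to the baseline tagger
        self.rnn2 = nn.LSTM(2 * rnn_hidden + lm_dim, rnn_hidden,
                            bidirectional=True, batch_first=True)

    def forward(self, x, lm_embeddings):
        h1, _ = self.rnn1(x)
        h1 = torch.cat([h1, lm_embeddings], dim=-1)  # concatenate first-layer output with LM embedding
        h2, _ = self.rnn2(h1)
        return h2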
There are alternate possibilities for adding the LM embeddings to the sequence model. One possibility adds a non-linear mapping after the concatenation and before the second RNN (e.g. replacing ( EQREF9 ) with INLINEFORM0 where INLINEFORM1 is a non-linear function). Another possibility introduces an attention-like mechanism that weights all the LM embeddings in a sentence before including them in the sequence model. Our initial results with the simple concatenation were encouraging, so we did not explore these alternatives in this study, preferring to leave them for future work.
Experiments
We evaluate our approach on two well benchmarked sequence tagging tasks, the CoNLL 2003 NER task BIBREF13 and the CoNLL 2000 Chunking task BIBREF14 . We report the official evaluation metric (micro-averaged INLINEFORM0 ). In both cases, we use the BIOES labeling scheme for the output tags, following previous work which showed it outperforms other options BIBREF15 . Following BIBREF8 , we use the Senna word embeddings BIBREF2 and pre-processed the text by lowercasing all tokens and replacing all digits with 0.
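For reference, the token preprocessing described here is simple enough to show directly; the function below (an assumed helper name, following the cited setup) lowercases each token and replaces every digit with 0 before embedding lookup.

import re

def preprocess(tokens):
    return [re.sub(r"\d", "0", tok.lower()) for tok in tokens]

# e.g. preprocess(["Central", "Bank", "1995"]) -> ["central", "bank", "0000"]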
Overall system results
Tables TABREF15 and TABREF16 compare results from TagLM with previously published state of the art results without additional labeled data or task specific gazetteers. Tables TABREF17 and TABREF18 compare results of TagLM to other systems that include additional labeled data or gazetteers. In both tasks, TagLM establishes a new state of the art using bidirectional LMs (the forward CNN-BIG-LSTM and the backward LSTM-2048-512).
In the CoNLL 2003 NER task, our model scores 91.93 mean INLINEFORM0 , which is a statistically significant increase over the previous best result of 91.62 INLINEFORM1 from BIBREF8 that used gazetteers (at 95%, two-sided Welch t-test, INLINEFORM2 ).
In the CoNLL 2000 Chunking task, TagLM achieves 96.37 mean INLINEFORM0 , exceeding all previously published results without additional labeled data by more than 1% absolute INLINEFORM1 . The improvement over the previous best result of 95.77 in BIBREF6 that jointly trains with Penn Treebank (PTB) POS tags is statistically significant at 95% ( INLINEFORM2 assuming standard deviation of INLINEFORM3 ).
Importantly, the LM embeddings amount to an average absolute improvement of 1.06 and 1.37 INLINEFORM0 in the NER and Chunking tasks, respectively.
Although we do not use external labeled data or gazetteers, we found that TagLM outperforms previous state of the art results in both tasks when external resources (labeled data or task specific gazetteers) are available. Furthermore, Tables TABREF17 and TABREF18 show that, in most cases, the improvements we obtain by adding LM embeddings are larger than the improvements previously obtained by adding other forms of transfer or joint learning. For example, BIBREF3 noted an improvement of only 0.06 INLINEFORM0 in the NER task when transfer learning from both CoNLL 2000 chunks and PTB POS tags, and BIBREF8 reported an increase of 0.71 INLINEFORM1 when adding gazetteers to their baseline. In the Chunking task, previous work has reported from 0.28 to 0.75 improvement in INLINEFORM2 when including supervised labels from the PTB POS tags or CoNLL 2003 entities BIBREF3 , BIBREF7 , BIBREF6 .
Analysis
To elucidate the characteristics of our LM augmented sequence tagger, we ran a number of additional experiments on the CoNLL 2003 NER task.
In this experiment, we concatenate the LM embeddings at different locations in the baseline sequence tagger. In particular, we used the LM embeddings INLINEFORM0 to:
augment the input of the first RNN layer; i.e., INLINEFORM0 ,
augment the output of the first RNN layer; i.e., INLINEFORM0 , and
augment the output of the second RNN layer; i.e., INLINEFORM0 .
Table TABREF20 shows that the second alternative performs best. We speculate that the second RNN layer in the sequence tagging model is able to capture interactions between task specific context as expressed in the first RNN layer and general context as expressed in the LM embeddings in a way that improves overall system performance. These results are consistent with BIBREF7 who found that chunking performance was sensitive to the level at which additional POS supervision was added.
In this experiment, we compare six different configurations of the forward and backward language models (including the baseline model which does not use any language models). The results are reported in Table TABREF21 .
We find that adding backward LM embeddings consistently outperforms forward-only LM embeddings, with INLINEFORM0 improvements between 0.22 and 0.27%, even with the relatively small backward LSTM-2048-512 LM.
LM size is important, and replacing the forward LSTM-2048-512 with CNN-BIG-LSTM (test perplexities of 47.7 to 30.0 on 1B Word Benchmark) improves INLINEFORM0 by 0.26 - 0.31%, about as much as adding backward LM. Accordingly, we hypothesize (but have not tested) that replacing the backward LSTM-2048-512 with a backward LM analogous to the CNN-BIG-LSTM would further improve performance.
To highlight the importance of including language models trained on large-scale data, we also experimented with training a language model on just the CoNLL 2003 training and development data. Due to the much smaller size of this data set, we decreased the model size to 512 hidden units with a 256 dimension projection and normalized tokens in the same manner as input to the sequence tagging model (lower-cased, with all digits replaced with 0). The test set perplexities for the forward and backward models (measured on the CoNLL 2003 test data) were 106.9 and 104.2, respectively. Including embeddings from these language models decreased performance slightly compared to the baseline system without any LM. This result supports the hypothesis that adding language models helps because they learn composition functions (i.e., the RNN parameters in the language model) from much larger data compared to the composition functions in the baseline tagger, which are only learned from labeled data.
To understand the importance of including a task specific sequence RNN we ran an experiment that removed the task specific sequence RNN and used only the LM embeddings with a dense layer and CRF to predict output tags. In this setup, performance was very low, 88.17 INLINEFORM0 , well below our baseline. This result confirms that the RNNs in the baseline tagger encode essential information which is not encoded in the LM embeddings. This is unsurprising since the RNNs in the baseline tagger are trained on labeled examples, unlike the RNN in the language model which is only trained on unlabeled examples. Note that the LM weights are fixed in this experiment.
A priori, we expect the addition of LM embeddings to be most beneficial in cases where the task specific annotated datasets are small. To test this hypothesis, we replicated the setup from BIBREF3 that samples 1% of the CoNLL 2003 training set and compared the performance of TagLM to our baseline without LM. In this scenario, test INLINEFORM0 increased 3.35% (from 67.66 to 71.01%) compared to an increase of 1.06% INLINEFORM1 for a similar comparison with the full training dataset. The analogous increases in BIBREF3 are 3.97% for cross-lingual transfer from CoNLL 2002 Spanish NER and 6.28% INLINEFORM2 for transfer from PTB POS tags. However, they found only a 0.06% INLINEFORM3 increase when using the full training data and transferring from both CoNLL 2000 chunks and PTB POS tags. Taken together, this suggests that for very small labeled training sets, transferring from other tasks yields a large improvement, but this improvement almost disappears when the training data is large. On the other hand, our approach is less dependent on the training set size and significantly improves performance even with larger training sets.
Our TagLM formulation increases the number of parameters in the second RNN layer INLINEFORM0 due to the increase in the input dimension INLINEFORM1 if all other hyperparameters are held constant. To confirm that this did not have a material impact on the results, we ran two additional experiments. In the first, we trained a system without a LM but increased the second RNN layer hidden dimension so that the number of parameters was the same as in TagLM. In this case, performance decreased slightly (by 0.15% INLINEFORM2 ) compared to the baseline model, indicating that solely increasing parameters does not improve performance. In the second experiment, we decreased the hidden dimension of the second RNN layer in TagLM to give it the same number of parameters as the baseline no-LM model. In this case, test INLINEFORM3 increased slightly to INLINEFORM4 , indicating that the additional parameters in TagLM are slightly hurting performance.
One artifact of our evaluation framework is that both the labeled data in the chunking and NER tasks and the unlabeled text in the 1 Billion Word Benchmark used to train the bidirectional LMs are derived from news articles. To test the sensitivity to the LM training domain, we also applied TagLM with a LM trained on news articles to the SemEval 2017 Shared Task 10, ScienceIE. ScienceIE requires end-to-end joint entity and relationship extraction from scientific publications across three diverse fields (computer science, material sciences, and physics) and defines three broad entity types (Task, Material, and Process). For this task, TagLM increased INLINEFORM0 on the development set by 4.12% (from 49.93% to 54.05%) for entity extraction over our baseline without LM embeddings, and it was a major component in our winning submission to ScienceIE, Scenario 1 BIBREF20 . We conclude that LM embeddings can improve the performance of a sequence tagger even when the data comes from a different domain.
Conclusion
In this paper, we proposed a simple and general semi-supervised method using pre-trained neural language models to augment token representations in sequence tagging models. Our method significantly outperforms current state of the art models in two popular datasets for NER and Chunking. Our analysis shows that adding a backward LM in addition to traditional forward LMs consistently improves performance. The proposed method is robust even when the LM is trained on unlabeled data from a different domain, or when the baseline model is trained on a large number of labeled examples.
Acknowledgments
We thank Chris Dyer, Julia Hockenmaier, Jayant Krishnamurthy, Matt Gardner and Oren Etzioni for comments on earlier drafts that led to substantial improvements in the final version. | micro-averaged F1 |
12cfbaace49f9363fcc10989cf92a50dfe0a55ea | 12cfbaace49f9363fcc10989cf92a50dfe0a55ea_0 | Q: what results do they achieve?
Text: Introduction
Due to their simplicity and efficacy, pre-trained word embeddings have become ubiquitous in NLP systems. Many prior studies have shown that they capture useful semantic and syntactic information BIBREF0 , BIBREF1 , and including them in NLP systems has been shown to be enormously helpful for a variety of downstream tasks BIBREF2 .
However, in many NLP tasks it is essential to represent not just the meaning of a word, but also the word in context. For example, in the two phrases “A Central Bank spokesman” and “The Central African Republic”, the word `Central' is used as part of both an Organization and Location. Accordingly, current state of the art sequence tagging models typically include a bidirectional recurrent neural network (RNN) that encodes token sequences into a context sensitive representation before making token specific predictions BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 .
Although the token representation is initialized with pre-trained embeddings, the parameters of the bidirectional RNN are typically learned only on labeled data. Previous work has explored methods for jointly learning the bidirectional RNN with supplemental labeled data from other tasks BIBREF7 , BIBREF3 .
In this paper, we explore an alternate semi-supervised approach which does not require additional labeled data. We use a neural language model (LM), pre-trained on a large, unlabeled corpus to compute an encoding of the context at each position in the sequence (hereafter an LM embedding) and use it in the supervised sequence tagging model. Since the LM embeddings are used to compute the probability of future words in a neural LM, they are likely to encode both the semantic and syntactic roles of words in context.
Our main contribution is to show that the context sensitive representation captured in the LM embeddings is useful in the supervised sequence tagging setting. When we include the LM embeddings in our system, overall performance increases from 90.87% to 91.93% INLINEFORM0 for the CoNLL 2003 NER task, a more than 1% absolute F1 increase, and a substantial improvement over the previous state of the art. We also establish a new state of the art result (96.37% INLINEFORM1 ) for the CoNLL 2000 Chunking task.
As a secondary contribution, we show that using both forward and backward LM embeddings boosts performance over a forward only LM. We also demonstrate that domain specific pre-training is not necessary by applying a LM trained in the news domain to scientific papers.
Overview
The main components in our language-model-augmented sequence tagger (TagLM) are illustrated in Fig. FIGREF4 . After pre-training word embeddings and a neural LM on large, unlabeled corpora (Step 1), we extract the word and LM embeddings for every token in a given input sequence (Step 2) and use them in the supervised sequence tagging model (Step 3).
Baseline sequence tagging model
Our baseline sequence tagging model is a hierarchical neural tagging model, closely following a number of recent studies BIBREF4 , BIBREF5 , BIBREF3 , BIBREF8 (left side of Figure FIGREF5 ).
Given a sentence of tokens INLINEFORM0 it first forms a representation, INLINEFORM1 , for each token by concatenating a character based representation INLINEFORM2 with a token embedding INLINEFORM3 : DISPLAYFORM0
The character representation INLINEFORM0 captures morphological information and is either a convolutional neural network (CNN) BIBREF4 , BIBREF8 or RNN BIBREF3 , BIBREF5 . It is parameterized by INLINEFORM1 with parameters INLINEFORM2 . The token embeddings, INLINEFORM3 , are obtained as a lookup INLINEFORM4 , initialized using pre-trained word embeddings, and fine tuned during training BIBREF2 .
To learn a context sensitive representation, we employ multiple layers of bidirectional RNNs. For each token position, INLINEFORM0 , the hidden state INLINEFORM1 of RNN layer INLINEFORM2 is formed by concatenating the hidden states from the forward ( INLINEFORM3 ) and backward ( INLINEFORM4 ) RNNs. As a result, the bidirectional RNN is able to use both past and future information to make a prediction at token INLINEFORM5 . More formally, for the first RNN layer that operates on INLINEFORM6 to output INLINEFORM7 : DISPLAYFORM0
The second RNN layer is similar and uses INLINEFORM0 to output INLINEFORM1 . In this paper, we use INLINEFORM2 layers of RNNs in all experiments and parameterize INLINEFORM3 as either Gated Recurrent Units (GRU) BIBREF9 or Long Short-Term Memory units (LSTM) BIBREF10 depending on the task.
Finally, the output of the final RNN layer INLINEFORM0 is used to predict a score for each possible tag using a single dense layer. Due to the dependencies between successive tags in our sequence labeling tasks (e.g. using the BIOES labeling scheme, it is not possible for I-PER to follow B-LOC), it is beneficial to model and decode each sentence jointly instead of independently predicting the label for each token. Accordingly, we add another layer with parameters for each label bigram, computing the sentence conditional random field (CRF) loss BIBREF11 using the forward-backward algorithm at training time, and using the Viterbi algorithm to find the most likely tag sequence at test time, similar to BIBREF2 .
Bidirectional LM
A language model computes the probability of a token sequence INLINEFORM0 INLINEFORM1
Recent state of the art neural language models BIBREF12 use a similar architecture to our baseline sequence tagger where they pass a token representation (either from a CNN over characters or as token embeddings) through multiple layers of LSTMs to embed the history INLINEFORM0 into a fixed dimensional vector INLINEFORM1 . This is the forward LM embedding of the token at position INLINEFORM2 and is the output of the top LSTM layer in the language model. Finally, the language model predicts the probability of token INLINEFORM3 using a softmax layer over words in the vocabulary.
The need to capture future context in the LM embeddings suggests it is beneficial to also consider a backward LM in addition to the traditional forward LM. A backward LM predicts the previous token given the future context. Given a sentence with INLINEFORM0 tokens, it computes INLINEFORM1
A backward LM can be implemented in an analogous way to a forward LM and produces the backward LM embedding INLINEFORM0 , for the sequence INLINEFORM1 , the output embeddings of the top layer LSTM.
In our final system, after pre-training the forward and backward LMs separately, we remove the top layer softmax and concatenate the forward and backward LM embeddings to form bidirectional LM embeddings, i.e., INLINEFORM0 . Note that in our formulation, the forward and backward LMs are independent, without any shared parameters.
Combining LM with sequence model
Our combined system, TagLM, uses the LM embeddings as additional inputs to the sequence tagging model. In particular, we concatenate the LM embeddings INLINEFORM0 with the output from one of the bidirectional RNN layers in the sequence model. In our experiments, we found that introducing the LM embeddings at the output of the first layer performed the best. More formally, we simply replace ( EQREF6 ) with DISPLAYFORM0
There are alternate possibilities for adding the LM embeddings to the sequence model. One possibility adds a non-linear mapping after the concatenation and before the second RNN (e.g. replacing ( EQREF9 ) with INLINEFORM0 where INLINEFORM1 is a non-linear function). Another possibility introduces an attention-like mechanism that weights all the LM embeddings in a sentence before including them in the sequence model. Our initial results with the simple concatenation were encouraging, so we did not explore these alternatives in this study, preferring to leave them for future work.
Experiments
We evaluate our approach on two well benchmarked sequence tagging tasks, the CoNLL 2003 NER task BIBREF13 and the CoNLL 2000 Chunking task BIBREF14 . We report the official evaluation metric (micro-averaged INLINEFORM0 ). In both cases, we use the BIOES labeling scheme for the output tags, following previous work which showed it outperforms other options BIBREF15 . Following BIBREF8 , we use the Senna word embeddings BIBREF2 and pre-processed the text by lowercasing all tokens and replacing all digits with 0.
Overall system results
Tables TABREF15 and TABREF16 compare results from TagLM with previously published state of the art results without additional labeled data or task specific gazetteers. Tables TABREF17 and TABREF18 compare results of TagLM to other systems that include additional labeled data or gazetteers. In both tasks, TagLM establishes a new state of the art using bidirectional LMs (the forward CNN-BIG-LSTM and the backward LSTM-2048-512).
In the CoNLL 2003 NER task, our model scores 91.93 mean INLINEFORM0 , which is a statistically significant increase over the previous best result of 91.62 INLINEFORM1 from BIBREF8 that used gazetteers (at 95%, two-sided Welch t-test, INLINEFORM2 ).
In the CoNLL 2000 Chunking task, TagLM achieves 96.37 mean INLINEFORM0 , exceeding all previously published results without additional labeled data by more than 1% absolute INLINEFORM1 . The improvement over the previous best result of 95.77 in BIBREF6 that jointly trains with Penn Treebank (PTB) POS tags is statistically significant at 95% ( INLINEFORM2 assuming standard deviation of INLINEFORM3 ).
Importantly, the LM embeddings amount to an average absolute improvement of 1.06 and 1.37 INLINEFORM0 in the NER and Chunking tasks, respectively.
Although we do not use external labeled data or gazetteers, we found that TagLM outperforms previous state of the art results in both tasks when external resources (labeled data or task specific gazetteers) are available. Furthermore, Tables TABREF17 and TABREF18 show that, in most cases, the improvements we obtain by adding LM embeddings are larger than the improvements previously obtained by adding other forms of transfer or joint learning. For example, BIBREF3 noted an improvement of only 0.06 INLINEFORM0 in the NER task when transfer learning from both CoNLL 2000 chunks and PTB POS tags, and BIBREF8 reported an increase of 0.71 INLINEFORM1 when adding gazetteers to their baseline. In the Chunking task, previous work has reported from 0.28 to 0.75 improvement in INLINEFORM2 when including supervised labels from the PTB POS tags or CoNLL 2003 entities BIBREF3 , BIBREF7 , BIBREF6 .
Analysis
To elucidate the characteristics of our LM augmented sequence tagger, we ran a number of additional experiments on the CoNLL 2003 NER task.
In this experiment, we concatenate the LM embeddings at different locations in the baseline sequence tagger. In particular, we used the LM embeddings INLINEFORM0 to:
augment the input of the first RNN layer; i.e., INLINEFORM0 ,
augment the output of the first RNN layer; i.e., INLINEFORM0 , and
augment the output of the second RNN layer; i.e., INLINEFORM0 .
Table TABREF20 shows that the second alternative performs best. We speculate that the second RNN layer in the sequence tagging model is able to capture interactions between task specific context as expressed in the first RNN layer and general context as expressed in the LM embeddings in a way that improves overall system performance. These results are consistent with BIBREF7 who found that chunking performance was sensitive to the level at which additional POS supervision was added.
In this experiment, we compare six different configurations of the forward and backward language models (including the baseline model which does not use any language models). The results are reported in Table TABREF21 .
We find that adding backward LM embeddings consistently outperforms forward-only LM embeddings, with INLINEFORM0 improvements between 0.22 and 0.27%, even with the relatively small backward LSTM-2048-512 LM.
LM size is important, and replacing the forward LSTM-2048-512 with CNN-BIG-LSTM (test perplexities of 47.7 to 30.0 on 1B Word Benchmark) improves INLINEFORM0 by 0.26 - 0.31%, about as much as adding backward LM. Accordingly, we hypothesize (but have not tested) that replacing the backward LSTM-2048-512 with a backward LM analogous to the CNN-BIG-LSTM would further improve performance.
To highlight the importance of including language models trained on large-scale data, we also experimented with training a language model on just the CoNLL 2003 training and development data. Due to the much smaller size of this data set, we decreased the model size to 512 hidden units with a 256 dimension projection and normalized tokens in the same manner as input to the sequence tagging model (lower-cased, with all digits replaced with 0). The test set perplexities for the forward and backward models (measured on the CoNLL 2003 test data) were 106.9 and 104.2, respectively. Including embeddings from these language models decreased performance slightly compared to the baseline system without any LM. This result supports the hypothesis that adding language models helps because they learn composition functions (i.e., the RNN parameters in the language model) from much larger data compared to the composition functions in the baseline tagger, which are only learned from labeled data.
To understand the importance of including a task specific sequence RNN we ran an experiment that removed the task specific sequence RNN and used only the LM embeddings with a dense layer and CRF to predict output tags. In this setup, performance was very low, 88.17 INLINEFORM0 , well below our baseline. This result confirms that the RNNs in the baseline tagger encode essential information which is not encoded in the LM embeddings. This is unsurprising since the RNNs in the baseline tagger are trained on labeled examples, unlike the RNN in the language model which is only trained on unlabeled examples. Note that the LM weights are fixed in this experiment.
A priori, we expect the addition of LM embeddings to be most beneficial in cases where the task specific annotated datasets are small. To test this hypothesis, we replicated the setup from BIBREF3 that samples 1% of the CoNLL 2003 training set and compared the performance of TagLM to our baseline without LM. In this scenario, test INLINEFORM0 increased 3.35% (from 67.66 to 71.01%) compared to an increase of 1.06% INLINEFORM1 for a similar comparison with the full training dataset. The analogous increases in BIBREF3 are 3.97% for cross-lingual transfer from CoNLL 2002 Spanish NER and 6.28% INLINEFORM2 for transfer from PTB POS tags. However, they found only a 0.06% INLINEFORM3 increase when using the full training data and transferring from both CoNLL 2000 chunks and PTB POS tags. Taken together, this suggests that for very small labeled training sets, transferring from other tasks yields a large improvement, but this improvement almost disappears when the training data is large. On the other hand, our approach is less dependent on the training set size and significantly improves performance even with larger training sets.
Our TagLM formulation increases the number of parameters in the second RNN layer INLINEFORM0 due to the increase in the input dimension INLINEFORM1 if all other hyperparameters are held constant. To confirm that this did not have a material impact on the results, we ran two additional experiments. In the first, we trained a system without a LM but increased the second RNN layer hidden dimension so that the number of parameters was the same as in TagLM. In this case, performance decreased slightly (by 0.15% INLINEFORM2 ) compared to the baseline model, indicating that solely increasing parameters does not improve performance. In the second experiment, we decreased the hidden dimension of the second RNN layer in TagLM to give it the same number of parameters as the baseline no-LM model. In this case, test INLINEFORM3 increased slightly to INLINEFORM4 , indicating that the additional parameters in TagLM are slightly hurting performance.
One artifact of our evaluation framework is that both the labeled data in the chunking and NER tasks and the unlabeled text in the 1 Billion Word Benchmark used to train the bidirectional LMs are derived from news articles. To test the sensitivity to the LM training domain, we also applied TagLM with a LM trained on news articles to the SemEval 2017 Shared Task 10, ScienceIE. ScienceIE requires end-to-end joint entity and relationship extraction from scientific publications across three diverse fields (computer science, material sciences, and physics) and defines three broad entity types (Task, Material, and Process). For this task, TagLM increased INLINEFORM0 on the development set by 4.12% (from 49.93% to 54.05%) for entity extraction over our baseline without LM embeddings, and it was a major component in our winning submission to ScienceIE, Scenario 1 BIBREF20 . We conclude that LM embeddings can improve the performance of a sequence tagger even when the data comes from a different domain.
Conclusion
In this paper, we proposed a simple and general semi-supervised method using pre-trained neural language models to augment token representations in sequence tagging models. Our method significantly outperforms current state of the art models in two popular datasets for NER and Chunking. Our analysis shows that adding a backward LM in addition to traditional forward LMs consistently improves performance. The proposed method is robust even when the LM is trained on unlabeled data from a different domain, or when the baseline model is trained on a large number of labeled examples.
Acknowledgments
We thank Chris Dyer, Julia Hockenmaier, Jayant Krishnamurthy, Matt Gardner and Oren Etzioni for comments on earlier drafts that led to substantial improvements in the final version. | 91.93% F1 score on CoNLL 2003 NER task and 96.37% F1 score on CoNLL 2000 Chunking task |
4640793d82aa7db30ad7b88c0bf0a1030e636558 | 4640793d82aa7db30ad7b88c0bf0a1030e636558_0 | Q: what previous systems were compared to?
Text: Introduction
Due to their simplicity and efficacy, pre-trained word embeddings have become ubiquitous in NLP systems. Many prior studies have shown that they capture useful semantic and syntactic information BIBREF0 , BIBREF1 , and including them in NLP systems has been shown to be enormously helpful for a variety of downstream tasks BIBREF2 .
However, in many NLP tasks it is essential to represent not just the meaning of a word, but also the word in context. For example, in the two phrases “A Central Bank spokesman” and “The Central African Republic”, the word `Central' is used as part of both an Organization and Location. Accordingly, current state of the art sequence tagging models typically include a bidirectional recurrent neural network (RNN) that encodes token sequences into a context sensitive representation before making token specific predictions BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 .
Although the token representation is initialized with pre-trained embeddings, the parameters of the bidirectional RNN are typically learned only on labeled data. Previous work has explored methods for jointly learning the bidirectional RNN with supplemental labeled data from other tasks BIBREF7 , BIBREF3 .
In this paper, we explore an alternate semi-supervised approach which does not require additional labeled data. We use a neural language model (LM), pre-trained on a large, unlabeled corpus to compute an encoding of the context at each position in the sequence (hereafter an LM embedding) and use it in the supervised sequence tagging model. Since the LM embeddings are used to compute the probability of future words in a neural LM, they are likely to encode both the semantic and syntactic roles of words in context.
Our main contribution is to show that the context sensitive representation captured in the LM embeddings is useful in the supervised sequence tagging setting. When we include the LM embeddings in our system, overall performance increases from 90.87% to 91.93% INLINEFORM0 for the CoNLL 2003 NER task, a more than 1% absolute F1 increase, and a substantial improvement over the previous state of the art. We also establish a new state of the art result (96.37% INLINEFORM1 ) for the CoNLL 2000 Chunking task.
As a secondary contribution, we show that using both forward and backward LM embeddings boosts performance over a forward only LM. We also demonstrate that domain specific pre-training is not necessary by applying a LM trained in the news domain to scientific papers.
Overview
The main components in our language-model-augmented sequence tagger (TagLM) are illustrated in Fig. FIGREF4 . After pre-training word embeddings and a neural LM on large, unlabeled corpora (Step 1), we extract the word and LM embeddings for every token in a given input sequence (Step 2) and use them in the supervised sequence tagging model (Step 3).
Baseline sequence tagging model
Our baseline sequence tagging model is a hierarchical neural tagging model, closely following a number of recent studies BIBREF4 , BIBREF5 , BIBREF3 , BIBREF8 (left side of Figure FIGREF5 ).
Given a sentence of tokens INLINEFORM0 it first forms a representation, INLINEFORM1 , for each token by concatenating a character based representation INLINEFORM2 with a token embedding INLINEFORM3 : DISPLAYFORM0
The character representation INLINEFORM0 captures morphological information and is either a convolutional neural network (CNN) BIBREF4 , BIBREF8 or RNN BIBREF3 , BIBREF5 . It is parameterized by INLINEFORM1 with parameters INLINEFORM2 . The token embeddings, INLINEFORM3 , are obtained as a lookup INLINEFORM4 , initialized using pre-trained word embeddings, and fine tuned during training BIBREF2 .
To learn a context sensitive representation, we employ multiple layers of bidirectional RNNs. For each token position, INLINEFORM0 , the hidden state INLINEFORM1 of RNN layer INLINEFORM2 is formed by concatenating the hidden states from the forward ( INLINEFORM3 ) and backward ( INLINEFORM4 ) RNNs. As a result, the bidirectional RNN is able to use both past and future information to make a prediction at token INLINEFORM5 . More formally, for the first RNN layer that operates on INLINEFORM6 to output INLINEFORM7 : DISPLAYFORM0
The second RNN layer is similar and uses INLINEFORM0 to output INLINEFORM1 . In this paper, we use INLINEFORM2 layers of RNNs in all experiments and parameterize INLINEFORM3 as either Gated Recurrent Units (GRU) BIBREF9 or Long Short-Term Memory units (LSTM) BIBREF10 depending on the task.
Finally, the output of the final RNN layer INLINEFORM0 is used to predict a score for each possible tag using a single dense layer. Due to the dependencies between successive tags in our sequence labeling tasks (e.g. using the BIOES labeling scheme, it is not possible for I-PER to follow B-LOC), it is beneficial to model and decode each sentence jointly instead of independently predicting the label for each token. Accordingly, we add another layer with parameters for each label bigram, computing the sentence conditional random field (CRF) loss BIBREF11 using the forward-backward algorithm at training time, and using the Viterbi algorithm to find the most likely tag sequence at test time, similar to BIBREF2 .
Bidirectional LM
A language model computes the probability of a token sequence INLINEFORM0 INLINEFORM1
Recent state of the art neural language models BIBREF12 use a similar architecture to our baseline sequence tagger where they pass a token representation (either from a CNN over characters or as token embeddings) through multiple layers of LSTMs to embed the history INLINEFORM0 into a fixed dimensional vector INLINEFORM1 . This is the forward LM embedding of the token at position INLINEFORM2 and is the output of the top LSTM layer in the language model. Finally, the language model predicts the probability of token INLINEFORM3 using a softmax layer over words in the vocabulary.
The need to capture future context in the LM embeddings suggests it is beneficial to also consider a backward LM in addition to the traditional forward LM. A backward LM predicts the previous token given the future context. Given a sentence with INLINEFORM0 tokens, it computes INLINEFORM1
A backward LM can be implemented in an analogous way to a forward LM and produces the backward LM embedding INLINEFORM0 , for the sequence INLINEFORM1 , the output embeddings of the top layer LSTM.
In our final system, after pre-training the forward and backward LMs separately, we remove the top layer softmax and concatenate the forward and backward LM embeddings to form bidirectional LM embeddings, i.e., INLINEFORM0 . Note that in our formulation, the forward and backward LMs are independent, without any shared parameters.
Combining LM with sequence model
Our combined system, TagLM, uses the LM embeddings as additional inputs to the sequence tagging model. In particular, we concatenate the LM embeddings INLINEFORM0 with the output from one of the bidirectional RNN layers in the sequence model. In our experiments, we found that introducing the LM embeddings at the output of the first layer performed the best. More formally, we simply replace ( EQREF6 ) with DISPLAYFORM0
There are alternate possibilities for adding the LM embeddings to the sequence model. One possibility adds a non-linear mapping after the concatenation and before the second RNN (e.g. replacing ( EQREF9 ) with INLINEFORM0 where INLINEFORM1 is a non-linear function). Another possibility introduces an attention-like mechanism that weights all the LM embeddings in a sentence before including them in the sequence model. Our initial results with the simple concatenation were encouraging, so we did not explore these alternatives in this study, preferring to leave them for future work.
Experiments
We evaluate our approach on two well benchmarked sequence tagging tasks, the CoNLL 2003 NER task BIBREF13 and the CoNLL 2000 Chunking task BIBREF14 . We report the official evaluation metric (micro-averaged INLINEFORM0 ). In both cases, we use the BIOES labeling scheme for the output tags, following previous work which showed it outperforms other options BIBREF15 . Following BIBREF8 , we use the Senna word embeddings BIBREF2 and pre-processed the text by lowercasing all tokens and replacing all digits with 0.
Overall system results
Tables TABREF15 and TABREF16 compare results from TagLM with previously published state of the art results without additional labeled data or task specific gazetteers. Tables TABREF17 and TABREF18 compare results of TagLM to other systems that include additional labeled data or gazetteers. In both tasks, TagLM establishes a new state of the art using bidirectional LMs (the forward CNN-BIG-LSTM and the backward LSTM-2048-512).
In the CoNLL 2003 NER task, our model scores 91.93 mean INLINEFORM0 , which is a statistically significant increase over the previous best result of 91.62 INLINEFORM1 from BIBREF8 that used gazetteers (at 95%, two-sided Welch t-test, INLINEFORM2 ).
In the CoNLL 2000 Chunking task, TagLM achieves 96.37 mean INLINEFORM0 , exceeding all previously published results without additional labeled data by more than 1% absolute INLINEFORM1 . The improvement over the previous best result of 95.77 in BIBREF6 that jointly trains with Penn Treebank (PTB) POS tags is statistically significant at 95% ( INLINEFORM2 assuming standard deviation of INLINEFORM3 ).
Importantly, the LM embeddings amount to an average absolute improvement of 1.06 and 1.37 INLINEFORM0 in the NER and Chunking tasks, respectively.
Although we do not use external labeled data or gazetteers, we found that TagLM outperforms previous state of the art results in both tasks when external resources (labeled data or task specific gazetteers) are available. Furthermore, Tables TABREF17 and TABREF18 show that, in most cases, the improvements we obtain by adding LM embeddings are larger than the improvements previously obtained by adding other forms of transfer or joint learning. For example, BIBREF3 noted an improvement of only 0.06 INLINEFORM0 in the NER task when transfer learning from both CoNLL 2000 chunks and PTB POS tags, and BIBREF8 reported an increase of 0.71 INLINEFORM1 when adding gazetteers to their baseline. In the Chunking task, previous work has reported from 0.28 to 0.75 improvement in INLINEFORM2 when including supervised labels from the PTB POS tags or CoNLL 2003 entities BIBREF3 , BIBREF7 , BIBREF6 .
Analysis
To elucidate the characteristics of our LM augmented sequence tagger, we ran a number of additional experiments on the CoNLL 2003 NER task.
In this experiment, we concatenate the LM embeddings at different locations in the baseline sequence tagger. In particular, we used the LM embeddings INLINEFORM0 to:
augment the input of the first RNN layer; i.e., INLINEFORM0 ,
augment the output of the first RNN layer; i.e., INLINEFORM0 , and
augment the output of the second RNN layer; i.e., INLINEFORM0 .
Table TABREF20 shows that the second alternative performs best. We speculate that the second RNN layer in the sequence tagging model is able to capture interactions between task specific context as expressed in the first RNN layer and general context as expressed in the LM embeddings in a way that improves overall system performance. These results are consistent with BIBREF7 who found that chunking performance was sensitive to the level at which additional POS supervision was added.
In this experiment, we compare six different configurations of the forward and backward language models (including the baseline model which does not use any language models). The results are reported in Table TABREF21 .
We find that adding backward LM embeddings consistently outperforms forward-only LM embeddings, with INLINEFORM0 improvements between 0.22 and 0.27%, even with the relatively small backward LSTM-2048-512 LM.
LM size is important, and replacing the forward LSTM-2048-512 with CNN-BIG-LSTM (test perplexities of 47.7 to 30.0 on 1B Word Benchmark) improves INLINEFORM0 by 0.26 - 0.31%, about as much as adding backward LM. Accordingly, we hypothesize (but have not tested) that replacing the backward LSTM-2048-512 with a backward LM analogous to the CNN-BIG-LSTM would further improve performance.
To highlight the importance of including language models trained on large-scale data, we also experimented with training a language model on just the CoNLL 2003 training and development data. Due to the much smaller size of this data set, we decreased the model size to 512 hidden units with a 256 dimension projection and normalized tokens in the same manner as input to the sequence tagging model (lower-cased, with all digits replaced with 0). The test set perplexities for the forward and backward models (measured on the CoNLL 2003 test data) were 106.9 and 104.2, respectively. Including embeddings from these language models decreased performance slightly compared to the baseline system without any LM. This result supports the hypothesis that adding language models helps because they learn composition functions (i.e., the RNN parameters in the language model) from much larger data compared to the composition functions in the baseline tagger, which are only learned from labeled data.
To understand the importance of including a task specific sequence RNN we ran an experiment that removed the task specific sequence RNN and used only the LM embeddings with a dense layer and CRF to predict output tags. In this setup, performance was very low, 88.17 INLINEFORM0 , well below our baseline. This result confirms that the RNNs in the baseline tagger encode essential information which is not encoded in the LM embeddings. This is unsurprising since the RNNs in the baseline tagger are trained on labeled examples, unlike the RNN in the language model which is only trained on unlabeled examples. Note that the LM weights are fixed in this experiment.
A priori, we expect the addition of LM embeddings to be most beneficial in cases where the task specific annotated datasets are small. To test this hypothesis, we replicated the setup from BIBREF3 that samples 1% of the CoNLL 2003 training set and compared the performance of TagLM to our baseline without LM. In this scenario, test INLINEFORM0 increased 3.35% (from 67.66 to 71.01%) compared to an increase of 1.06% INLINEFORM1 for a similar comparison with the full training dataset. The analogous increases in BIBREF3 are 3.97% for cross-lingual transfer from CoNLL 2002 Spanish NER and 6.28% INLINEFORM2 for transfer from PTB POS tags. However, they found only a 0.06% INLINEFORM3 increase when using the full training data and transferring from both CoNLL 2000 chunks and PTB POS tags. Taken together, this suggests that for very small labeled training sets, transferring from other tasks yields a large improvement, but this improvement almost disappears when the training data is large. On the other hand, our approach is less dependent on the training set size and significantly improves performance even with larger training sets.
Our TagLM formulation increases the number of parameters in the second RNN layer INLINEFORM0 due to the increase in the input dimension INLINEFORM1 if all other hyperparameters are held constant. To confirm that this did not have a material impact on the results, we ran two additional experiments. In the first, we trained a system without a LM but increased the second RNN layer hidden dimension so that the number of parameters was the same as in TagLM. In this case, performance decreased slightly (by 0.15% INLINEFORM2 ) compared to the baseline model, indicating that solely increasing parameters does not improve performance. In the second experiment, we decreased the hidden dimension of the second RNN layer in TagLM to give it the same number of parameters as the baseline no-LM model. In this case, test INLINEFORM3 increased slightly to INLINEFORM4 , indicating that the additional parameters in TagLM are slightly hurting performance.
One artifact of our evaluation framework is that both the labeled data in the chunking and NER tasks and the unlabeled text in the 1 Billion Word Benchmark used to train the bidirectional LMs are derived from news articles. To test the sensitivity to the LM training domain, we also applied TagLM with a LM trained on news articles to the SemEval 2017 Shared Task 10, ScienceIE. ScienceIE requires end-to-end joint entity and relationship extraction from scientific publications across three diverse fields (computer science, material sciences, and physics) and defines three broad entity types (Task, Material, and Process). For this task, TagLM increased INLINEFORM0 on the development set by 4.12% (from 49.93% to 54.05%) for entity extraction over our baseline without LM embeddings, and it was a major component in our winning submission to ScienceIE, Scenario 1 BIBREF20 . We conclude that LM embeddings can improve the performance of a sequence tagger even when the data comes from a different domain.
Conclusion
In this paper, we proposed a simple and general semi-supervised method using pre-trained neural language models to augment token representations in sequence tagging models. Our method significantly outperforms current state of the art models in two popular datasets for NER and Chunking. Our analysis shows that adding a backward LM in addition to traditional forward LMs consistently improves performance. The proposed method is robust even when the LM is trained on unlabeled data from a different domain, or when the baseline model is trained on a large number of labeled examples.
Acknowledgments
We thank Chris Dyer, Julia Hockenmaier, Jayant Krishnamurthy, Matt Gardner and Oren Etzioni for comments on earlier drafts that led to substantial improvements in the final version. | Chiu and Nichols (2016), Lample et al. (2016), Ma and Hovy (2016), Yang et al. (2017), Hashimoto et al. (2016), Søgaard and Goldberg (2016) |
a9c5252173d3df1c06c770c180a77520de68531b | a9c5252173d3df1c06c770c180a77520de68531b_0 | Q: what are the evaluation datasets?
Text: Introduction
Due to their simplicity and efficacy, pre-trained word embeddings have become ubiquitous in NLP systems. Many prior studies have shown that they capture useful semantic and syntactic information BIBREF0 , BIBREF1 , and including them in NLP systems has been shown to be enormously helpful for a variety of downstream tasks BIBREF2 .
However, in many NLP tasks it is essential to represent not just the meaning of a word, but also the word in context. For example, in the two phrases “A Central Bank spokesman” and “The Central African Republic”, the word `Central' is used as part of both an Organization and Location. Accordingly, current state of the art sequence tagging models typically include a bidirectional recurrent neural network (RNN) that encodes token sequences into a context sensitive representation before making token specific predictions BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 .
Although the token representation is initialized with pre-trained embeddings, the parameters of the bidirectional RNN are typically learned only on labeled data. Previous work has explored methods for jointly learning the bidirectional RNN with supplemental labeled data from other tasks BIBREF7 , BIBREF3 .
In this paper, we explore an alternate semi-supervised approach which does not require additional labeled data. We use a neural language model (LM), pre-trained on a large, unlabeled corpus to compute an encoding of the context at each position in the sequence (hereafter an LM embedding) and use it in the supervised sequence tagging model. Since the LM embeddings are used to compute the probability of future words in a neural LM, they are likely to encode both the semantic and syntactic roles of words in context.
Our main contribution is to show that the context sensitive representation captured in the LM embeddings is useful in the supervised sequence tagging setting. When we include the LM embeddings in our system, overall performance increases from 90.87% to 91.93% INLINEFORM0 for the CoNLL 2003 NER task, a more than 1% absolute F1 increase, and a substantial improvement over the previous state of the art. We also establish a new state of the art result (96.37% INLINEFORM1 ) for the CoNLL 2000 Chunking task.
As a secondary contribution, we show that using both forward and backward LM embeddings boosts performance over a forward only LM. We also demonstrate that domain specific pre-training is not necessary by applying a LM trained in the news domain to scientific papers.
Overview
The main components in our language-model-augmented sequence tagger (TagLM) are illustrated in Fig. FIGREF4 . After pre-training word embeddings and a neural LM on large, unlabeled corpora (Step 1), we extract the word and LM embeddings for every token in a given input sequence (Step 2) and use them in the supervised sequence tagging model (Step 3).
Baseline sequence tagging model
Our baseline sequence tagging model is a hierarchical neural tagging model, closely following a number of recent studies BIBREF4 , BIBREF5 , BIBREF3 , BIBREF8 (left side of Figure FIGREF5 ).
Given a sentence of tokens $(t_1, t_2, \ldots, t_N)$ it first forms a representation, $x_k$, for each token by concatenating a character based representation $c_k$ with a token embedding $w_k$: $x_k = [c_k; w_k]$.
The character representation $c_k$ captures morphological information and is either a convolutional neural network (CNN) BIBREF4 , BIBREF8 or RNN BIBREF3 , BIBREF5 . It is parameterized by $C(\cdot; \theta_c)$ with parameters $\theta_c$. The token embeddings, $w_k$, are obtained as a lookup $E(\cdot)$, initialized using pre-trained word embeddings, and fine tuned during training BIBREF2 .
To learn a context sensitive representation, we employ multiple layers of bidirectional RNNs. For each token position, $k$, the hidden state $h_{k,i}$ of RNN layer $i$ is formed by concatenating the hidden states from the forward ($\overrightarrow{h}_{k,i}$) and backward ($\overleftarrow{h}_{k,i}$) RNNs. As a result, the bidirectional RNN is able to use both past and future information to make a prediction at token $k$. More formally, for the first RNN layer that operates on $x_k$ to output $h_{k,1}$: $\overrightarrow{h}_{k,1} = \overrightarrow{R}_1(x_k, \overrightarrow{h}_{k-1,1}; \overrightarrow{\theta}_1)$, $\overleftarrow{h}_{k,1} = \overleftarrow{R}_1(x_k, \overleftarrow{h}_{k+1,1}; \overleftarrow{\theta}_1)$, and $h_{k,1} = [\overrightarrow{h}_{k,1}; \overleftarrow{h}_{k,1}]$.
The second RNN layer is similar and uses $h_{k,1}$ to output $h_{k,2}$. In this paper, we use $L=2$ layers of RNNs in all experiments and parameterize $R_i$ as either Gated Recurrent Units (GRU) BIBREF9 or Long Short-Term Memory units (LSTM) BIBREF10 depending on the task.
Finally, the output of the final RNN layer $h_{k,L}$ is used to predict a score for each possible tag using a single dense layer. Due to the dependencies between successive tags in our sequence labeling tasks (e.g. using the BIOES labeling scheme, it is not possible for I-PER to follow B-LOC), it is beneficial to model and decode each sentence jointly instead of independently predicting the label for each token. Accordingly, we add another layer with parameters for each label bigram, computing the sentence conditional random field (CRF) loss BIBREF11 using the forward-backward algorithm at training time, and using the Viterbi algorithm to find the most likely tag sequence at test time, similar to BIBREF2 .
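For illustration only, a minimal PyTorch sketch of such a baseline tagger is given below; it is not the original implementation. The character-level encoder is replaced by a simple learned embedding and the CRF layer is omitted, so all module names and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BaselineTagger(nn.Module):
    """Two-layer bidirectional LSTM tagger over [char rep; token embedding] inputs.
    A CRF layer (omitted here for brevity) would sit on top of the emission scores."""
    def __init__(self, pretrained_emb, char_dim=50, hidden_dim=200, num_tags=17):
        super().__init__()
        vocab_size, emb_dim = pretrained_emb.shape
        self.tok_emb = nn.Embedding.from_pretrained(
            torch.as_tensor(pretrained_emb, dtype=torch.float), freeze=False)
        # Stand-in for the character CNN/RNN: a learned per-token vector.
        self.char_emb = nn.Embedding(vocab_size, char_dim)
        self.rnn1 = nn.LSTM(emb_dim + char_dim, hidden_dim,
                            bidirectional=True, batch_first=True)
        self.rnn2 = nn.LSTM(2 * hidden_dim, hidden_dim,
                            bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):
        # x_k = [c_k; w_k]
        x = torch.cat([self.char_emb(token_ids), self.tok_emb(token_ids)], dim=-1)
        h1, _ = self.rnn1(x)   # h_{k,1} = [forward; backward] states of layer 1
        h2, _ = self.rnn2(h1)  # h_{k,2}
        return self.proj(h2)   # per-token tag scores (decode with CRF/Viterbi in practice)

emb = torch.randn(10000, 100)                            # stand-in for pre-trained embeddings
tagger = BaselineTagger(emb)
print(tagger(torch.randint(0, 10000, (2, 12))).shape)    # torch.Size([2, 12, 17])
```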
Bidirectional LM
A language model computes the probability of a token sequence $(t_1, t_2, \ldots, t_N)$: $p(t_1, t_2, \ldots, t_N) = \prod_{k=1}^{N} p(t_k \mid t_1, t_2, \ldots, t_{k-1})$.
Recent state of the art neural language models BIBREF12 use a similar architecture to our baseline sequence tagger where they pass a token representation (either from a CNN over characters or as token embeddings) through multiple layers of LSTMs to embed the history $(t_1, t_2, \ldots, t_k)$ into a fixed dimensional vector $\overrightarrow{h}_k^{LM}$. This is the forward LM embedding of the token at position $k$ and is the output of the top LSTM layer in the language model. Finally, the language model predicts the probability of the next token using a softmax layer over words in the vocabulary.
The need to capture future context in the LM embeddings suggests it is beneficial to also consider a backward LM in addition to the traditional forward LM. A backward LM predicts the previous token given the future context. Given a sentence with $N$ tokens, it computes $p(t_1, t_2, \ldots, t_N) = \prod_{k=1}^{N} p(t_k \mid t_{k+1}, t_{k+2}, \ldots, t_N)$.
A backward LM can be implemented in an analogous way to a forward LM and produces the backward LM embedding $\overleftarrow{h}_k^{LM}$ for each position $k$, taken from the output embeddings of the top layer LSTM.
In our final system, after pre-training the forward and backward LMs separately, we remove the top layer softmax and concatenate the forward and backward LM embeddings to form bidirectional LM embeddings, i.e., $h_k^{LM} = [\overrightarrow{h}_k^{LM}; \overleftarrow{h}_k^{LM}]$. Note that in our formulation, the forward and backward LMs are independent, without any shared parameters.
Combining LM with sequence model
Our combined system, TagLM, uses the LM embeddings as additional inputs to the sequence tagging model. In particular, we concatenate the LM embeddings $h_k^{LM}$ with the output from one of the bidirectional RNN layers in the sequence model. In our experiments, we found that introducing the LM embeddings at the output of the first layer performed the best. More formally, we simply replace the first-layer concatenation above with $h_{k,1} = [\overrightarrow{h}_{k,1}; \overleftarrow{h}_{k,1}; h_k^{LM}]$.
There are alternate possibilities for adding the LM embeddings to the sequence model. One possibility adds a non-linear mapping after the concatenation and before the second RNN (e.g. passing the concatenated vector through a non-linear function before the second RNN layer). Another possibility introduces an attention-like mechanism that weights all the LM embeddings in a sentence before including them in the sequence model. Our initial results with the simple concatenation were encouraging so we did not explore these alternatives in this study, preferring to leave them for future work.
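A minimal sketch of this combination step follows, assuming the LM embeddings $h_k^{LM}$ have already been computed by frozen forward and backward LMs and are passed in as a tensor; this is an illustrative re-implementation, not the original TagLM code, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class TagLMCombiner(nn.Module):
    """Concatenates pre-computed bidirectional LM embeddings h_k^{LM} with the output
    of the first bidirectional RNN layer, before the second layer (the best-performing
    variant reported in the text). The language models themselves stay frozen."""
    def __init__(self, in_dim, lm_dim, hidden_dim=200, num_tags=17):
        super().__init__()
        self.rnn1 = nn.LSTM(in_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.rnn2 = nn.LSTM(2 * hidden_dim + lm_dim, hidden_dim,
                            bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, x, lm_emb):
        h1, _ = self.rnn1(x)                     # h_{k,1}
        h1 = torch.cat([h1, lm_emb], dim=-1)     # [h_{k,1}; h_k^{LM}]
        h2, _ = self.rnn2(h1)
        return self.proj(h2)

x = torch.randn(2, 12, 150)        # token representations x_k = [c_k; w_k]
lm_emb = torch.randn(2, 12, 512)   # pre-computed bidirectional LM embeddings
model = TagLMCombiner(in_dim=150, lm_dim=512)
print(model(x, lm_emb).shape)      # torch.Size([2, 12, 17])
```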
Experiments
We evaluate our approach on two well benchmarked sequence tagging tasks, the CoNLL 2003 NER task BIBREF13 and the CoNLL 2000 Chunking task BIBREF14 . We report the official evaluation metric (micro-averaged $F_1$). In both cases, we use the BIOES labeling scheme for the output tags, following previous work which showed it outperforms other options BIBREF15 . Following BIBREF8 , we use the Senna word embeddings BIBREF2 and pre-processed the text by lowercasing all tokens and replacing all digits with 0.
Overall system results
Tables TABREF15 and TABREF16 compare results from TagLM with previously published state of the art results without additional labeled data or task specific gazetteers. Tables TABREF17 and TABREF18 compare results of TagLM to other systems that include additional labeled data or gazetteers. In both tasks, TagLM establishes a new state of the art using bidirectional LMs (the forward CNN-BIG-LSTM and the backward LSTM-2048-512).
In the CoNLL 2003 NER task, our model scores 91.93 mean $F_1$, which is a statistically significant increase over the previous best result of 91.62 $F_1$ from BIBREF8 that used gazetteers (at 95%, two-sided Welch t-test).
In the CoNLL 2000 Chunking task, TagLM achieves 96.37 mean $F_1$, exceeding all previously published results without additional labeled data by more than 1% absolute $F_1$. The improvement over the previous best result of 95.77 in BIBREF6 that jointly trains with Penn Treebank (PTB) POS tags is statistically significant at 95%.
Importantly, adding the LM embeddings amounts to an average absolute improvement of 1.06 and 1.37 $F_1$ in the NER and Chunking tasks, respectively.
Although we do not use external labeled data or gazetteers, we found that TagLM outperforms previous state of the art results in both tasks when external resources (labeled data or task specific gazetteers) are available. Furthermore, Tables TABREF17 and TABREF18 show that, in most cases, the improvements we obtain by adding LM embeddings are larger than the improvements previously obtained by adding other forms of transfer or joint learning. For example, BIBREF3 noted an improvement of only 0.06 $F_1$ in the NER task when transfer learning from both CoNLL 2000 chunks and PTB POS tags and BIBREF8 reported an increase of 0.71 $F_1$ when adding gazetteers to their baseline. In the Chunking task, previous work has reported from 0.28 to 0.75 improvement in $F_1$ when including supervised labels from the PTB POS tags or CoNLL 2003 entities BIBREF3 , BIBREF7 , BIBREF6 .
Analysis
To elucidate the characteristics of our LM augmented sequence tagger, we ran a number of additional experiments on the CoNLL 2003 NER task.
In this experiment, we concatenate the LM embeddings at different locations in the baseline sequence tagger. In particular, we used the LM embeddings $h_k^{LM}$ to:
augment the input of the first RNN layer; i.e., $x_k = [c_k; w_k; h_k^{LM}]$,
augment the output of the first RNN layer; i.e., $h_{k,1} = [\overrightarrow{h}_{k,1}; \overleftarrow{h}_{k,1}; h_k^{LM}]$, and
augment the output of the second RNN layer; i.e., $h_{k,2} = [\overrightarrow{h}_{k,2}; \overleftarrow{h}_{k,2}; h_k^{LM}]$.
Table TABREF20 shows that the second alternative performs best. We speculate that the second RNN layer in the sequence tagging model is able to capture interactions between task specific context as expressed in the first RNN layer and general context as expressed in the LM embeddings in a way that improves overall system performance. These results are consistent with BIBREF7 who found that chunking performance was sensitive to the level at which additional POS supervision was added.
In this experiment, we compare six different configurations of the forward and backward language models (including the baseline model which does not use any language models). The results are reported in Table TABREF21 .
We find that adding backward LM embeddings consistently outperforms forward-only LM embeddings, with $F_1$ improvements between 0.22 and 0.27%, even with the relatively small backward LSTM-2048-512 LM.
LM size is important, and replacing the forward LSTM-2048-512 with CNN-BIG-LSTM (test perplexities of 47.7 to 30.0 on 1B Word Benchmark) improves $F_1$ by 0.26 - 0.31%, about as much as adding backward LM. Accordingly, we hypothesize (but have not tested) that replacing the backward LSTM-2048-512 with a backward LM analogous to the CNN-BIG-LSTM would further improve performance.
To highlight the importance of including language models trained on a large scale data, we also experimented with training a language model on just the CoNLL 2003 training and development data. Due to the much smaller size of this data set, we decreased the model size to 512 hidden units with a 256 dimension projection and normalized tokens in the same manner as input to the sequence tagging model (lower-cased, with all digits replaced with 0). The test set perplexities for the forward and backward models (measured on the CoNLL 2003 test data) were 106.9 and 104.2, respectively. Including embeddings from these language models decreased performance slightly compared to the baseline system without any LM. This result supports the hypothesis that adding language models help because they learn composition functions (i.e., the RNN parameters in the language model) from much larger data compared to the composition functions in the baseline tagger, which are only learned from labeled data.
To understand the importance of including a task specific sequence RNN we ran an experiment that removed the task specific sequence RNN and used only the LM embeddings with a dense layer and CRF to predict output tags. In this setup, performance was very low, 88.17 $F_1$, well below our baseline. This result confirms that the RNNs in the baseline tagger encode essential information which is not encoded in the LM embeddings. This is unsurprising since the RNNs in the baseline tagger are trained on labeled examples, unlike the RNN in the language model which is only trained on unlabeled examples. Note that the LM weights are fixed in this experiment.
A priori, we expect the addition of LM embeddings to be most beneficial in cases where the task specific annotated datasets are small. To test this hypothesis, we replicated the setup from BIBREF3 that samples 1% of the CoNLL 2003 training set and compared the performance of TagLM to our baseline without LM. In this scenario, test $F_1$ increased 3.35% (from 67.66 to 71.01%) compared to an increase of 1.06% $F_1$ for a similar comparison with the full training dataset. The analogous increases in BIBREF3 are 3.97% for cross-lingual transfer from CoNLL 2002 Spanish NER and 6.28% $F_1$ for transfer from PTB POS tags. However, they found only a 0.06% $F_1$ increase when using the full training data and transferring from both CoNLL 2000 chunks and PTB POS tags. Taken together, this suggests that for very small labeled training sets, transferring from other tasks yields a large improvement, but this improvement almost disappears when the training data is large. On the other hand, our approach is less dependent on the training set size and significantly improves performance even with larger training sets.
Our TagLM formulation increases the number of parameters in the second RNN layer due to the increase in its input dimension if all other hyperparameters are held constant. To confirm that this did not have a material impact on the results, we ran two additional experiments. In the first, we trained a system without a LM but increased the second RNN layer hidden dimension so that the number of parameters was the same as in TagLM. In this case, performance decreased slightly (by 0.15% $F_1$) compared to the baseline model, indicating that solely increasing parameters does not improve performance. In the second experiment, we decreased the hidden dimension of the second RNN layer in TagLM to give it the same number of parameters as the baseline no LM model. In this case, test $F_1$ increased slightly, indicating that the additional parameters in TagLM are slightly hurting performance.
One artifact of our evaluation framework is that both the labeled data in the chunking and NER tasks and the unlabeled text in the 1 Billion Word Benchmark used to train the bidirectional LMs are derived from news articles. To test the sensitivity to the LM training domain, we also applied TagLM with a LM trained on news articles to the SemEval 2017 Shared Task 10, ScienceIE. ScienceIE requires end-to-end joint entity and relationship extraction from scientific publications across three diverse fields (computer science, material sciences, and physics) and defines three broad entity types (Task, Material and Process). For this task, TagLM increased $F_1$ on the development set by 4.12% (from 49.93% to 54.05%) for entity extraction over our baseline without LM embeddings and it was a major component in our winning submission to ScienceIE, Scenario 1 BIBREF20 . We conclude that LM embeddings can improve the performance of a sequence tagger even when the data comes from a different domain.
Conclusion
In this paper, we proposed a simple and general semi-supervised method using pre-trained neural language models to augment token representations in sequence tagging models. Our method significantly outperforms current state of the art models in two popular datasets for NER and Chunking. Our analysis shows that adding a backward LM in addition to traditional forward LMs consistently improves performance. The proposed method is robust even when the LM is trained on unlabeled data from a different domain, or when the baseline model is trained on a large number of labeled examples.
Acknowledgments
We thank Chris Dyer, Julia Hockenmaier, Jayant Krishnamurthy, Matt Gardner and Oren Etzioni for comments on earlier drafts that led to substantial improvements in the final version.
Introduction
The design and discovery of novel drugs for protein targets is powered by an understanding of the underlying principles of protein-compound interaction. Biochemical methods that measure affinity and biophysical methods that describe the interaction in atomistic level detail have provided valuable information toward a mechanistic explanation for bimolecular recognition BIBREF0. However, more often than not, compounds with drug potential are discovered serendipitously or by phenotypic drug discovery BIBREF1 since this highly specific interaction is still difficult to predict BIBREF2. Protein structure based computational strategies such as docking BIBREF3, ultra-large library docking for discovering new chemotypes BIBREF4, and molecular dynamics simulations BIBREF3 or ligand based strategies such as quantitative structure-activity relationship (QSAR) BIBREF5, BIBREF6, and molecular similarity BIBREF7 have been powerful at narrowing down the list of compounds to be tested experimentally. With the increase in available data, machine learning and deep learning architectures are also starting to play a significant role in cheminformatics and drug discovery BIBREF8. These approaches often require extensive computational resources or they are limited by the availability of 3D information. On the other hand, text based representations of biochemical entities are more readily available as evidenced by the 19,588 biomolecular complexes (3D structures) in PDB-Bind BIBREF9 (accessed on Nov 13, 2019) compared with 561,356 (manually annotated and reviewed) protein sequences in Uniprot BIBREF10 (accessed on Nov 13, 2019) or 97 million compounds in Pubchem BIBREF11 (accessed on Nov 13, 2019). The advances in natural language processing (NLP) methodologies make processing of text based representations of biomolecules an area of intense research interest.
The discipline of natural language processing (NLP) comprises a variety of methods that explore a large amount of textual data in order to bring unstructured, latent (or hidden) knowledge to the fore BIBREF12. Advances in this field are beneficial for tasks that use language (textual data) to build insight. The languages in the domains of bioinformatics and cheminformatics can be investigated under three categories: (i) natural language (mostly English) that is used in documents such as scientific publications, patents, and web pages, (ii) domain specific language, codified by a systematic set of rules extracted from empirical data and describing the human understanding of that domain (e.g. proteins, chemicals, etc), and (iii) structured forms such as tables, ontologies, knowledge graphs or databases BIBREF13. Processing and extracting information from textual data written in natural languages is one of the major application areas of NLP methodologies in the biomedical domain (also known as BioNLP). Information extracted with BioNLP methods is most often shared in structured databases or knowledge graphs BIBREF14. We refer the reader to the comprehensive review on BioNLP by BIBREF15. Here, we will be focusing on the application of NLP to domain specific, unstructured biochemical textual representations toward exploration of chemical space in drug discovery efforts.
We can view the textual representation of biomedical/biochemical entities as a domain-specific language. For instance, a genome sequence is an extensive script of four characters (A, T, G, C) constituting a genomic language. In proteins, the composition of 20 different natural amino acids in varying lengths builds the protein sequences. Post-translational modifications expand this 20 letter alphabet and confer different properties to proteins BIBREF16. For chemicals there are several text based alternatives such as chemical formula, IUPAC International Chemical Identifier (InChI) BIBREF17 and Simplified Molecular Input Line Entry Specification (SMILES) BIBREF18.
Today, the era of “big data" boosts the “learning" aspect of computational approaches substantially, with the ever-growing amounts of information provided by publicly available databases such as PubChem BIBREF11, ChEMBL BIBREF19, UniProt BIBREF10. These databases are rich in biochemical domain knowledge that is in textual form, thus building an efficient environment in which NLP-based techniques can thrive. Furthermore, advances in computational power allow the design of more complex methodologies, which in turn drive the fields of machine learning (ML) and NLP. However, biological and chemical interpretability and explainability remain among the major challenges of AI-based approaches. Data management in terms of access, interoperability and reusability are also critical for the development of NLP models that can be shared across disciplines.
With this review, we aim to provide an outline of how the field of NLP has influenced the studies in bioinformatics and cheminformatics and the impact it has had over the last decade. Not only are NLP methodologies facilitating processing and exploitation of biochemical text, they also promise an “understanding" of biochemical language to elucidate the underlying principles of bimolecular recognition. NLP technologies are enhancing the biological and chemical knowledge with the final goal of accelerating drug discovery for improving human health. We highlight the significance of an interdisciplinary approach that integrates computer science and natural sciences.
Introduction ::: NLP Basics
BIBREF20 describes NLP on three levels: (i) the word level in which the smallest meaningful unit is extracted to define the morphological structure, (ii) the sentence level where grammar and syntactic validity are determined, and (iii) the domain or context level in which the sentences have global meaning. Similarly, our review is organized in three parts in which bio-chemical data is investigated at: (i) word level, (ii) sentence (text) level, and (iii) understanding text and generating meaningful sequences. Table TABREF37 summarizes important NLP concepts related to the processing of biochemical data. We refer to these concepts and explain their applications in the following sections.
All NLP technology relates to specific AI architectures. In Table TABREF38 we summarize the main ML and deep learning (DL) architectures that will be mentioned throughout the review.
Biochemical Language Processing
The language-like properties of text-based representations of chemicals were recognized more than 50 years ago by Garfield BIBREF21. He proposed a “chemico-linguistic" approach to representing chemical nomenclature with the aim of instructing the computer to draw chemical diagrams. Protein sequence has been an important source of information about protein structure and function since Anfinsen's experiment BIBREF22. Alignment algorithms, such as Needleman-Wunsh BIBREF23 and Smith-Waterman BIBREF24, rely on sequence information to identify functionally or structurally critical elements of proteins (or genes).
To make predictions about the structure and function of compounds or proteins, the understanding of these sequences is critical for bioinformatics tasks with the final goal of accelerating drug discovery. Much like a linguist who uses the tools of language to bring out hidden knowledge, biochemical sequences can be processed to propose novel solutions, such as predicting interactions between chemicals and proteins or generating new compounds based on the level of understanding. In this section, we will review the applications of some of the NLP-concepts to biochemical data in order to solve bio/cheminformatics problems.
Biochemical Language Processing ::: Textual Chemical Data
Information about chemicals can be found in repositories such as PubChem BIBREF11, which includes information on around 100 million compounds, or Drugbank BIBREF25, which includes information on around 10,000 drugs. The main textual sources used in drug discovery are textual representations of chemicals and proteins. Table TABREF39 lists some sources that store different types of biochemical information.
Chemical structures can be represented in different forms that can be one-dimensional (1D), 2D, and 3D. Table TABREF40 depicts different identifiers/representations of the drug ampicillin. While the 2D and 3D representations are also used in ML based approaches BIBREF8, here we focus on the 1D form, which is the representation commonly used in NLP.
Biochemical Language Processing ::: Textual Chemical Data ::: IUPAC name
The International Union of Pure and Applied Chemistry (IUPAC) scheme (i.e. nomenclature) is used to name compounds following pre-defined rules such that the names of the compounds are unique and consistent with each other (iupac.org/).
Biochemical Language Processing ::: Textual Chemical Data ::: Chemical Formula
The chemical formula is one of the simplest and most widely-known ways of describing chemicals using letters (i.e. element symbols), numbers, parentheses, and (-/+) signs. This representation gives information about which elements and how many of them are present in the compound.
Biochemical Language Processing ::: Textual Chemical Data ::: SMILES
The Simplified Molecular Input Entry Specification (SMILES) is a text-based form of describing molecular structures and reactions BIBREF18. SMILES strings can be obtained by traversing the 2D graph representation of the compound and therefore SMILES provides more complex information than the chemical formula. Moreover, due to its textual form, SMILES takes 50% to 70% less space than other representation methods such as an identical connection table (daylight.com/dayhtml/doc/theory/theory.smiles.html).
SMILES notation is similar to a language with its own set of rules. Just like it is possible to express the same concept with different words in natural languages, the SMILES notation allows molecules to be represented with more than one unique SMILES. Although this may sound like a significant ambiguity, the possibility of using different SMILES to represent the same molecule was successfully adopted as a data augmentation strategy by various groups (BIBREF26, BIBREF27, BIBREF28).
Canonical SMILES can provide a unique SMILES representation. However, different databases such as PubChem and ChEMBL might use different canonicalization algorithms to generate different unique SMILES. OpenSMILES (opensmiles.org/opensmiles.html) is a new platform that aims to universalize the SMILES notation. In isomeric SMILES, isotopism and stereochemistry information of a molecule is encoded using a variety of symbols (“/", “\", “@", “@@").
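As an illustration of canonicalization and SMILES enumeration (the augmentation strategy mentioned above), the following sketch uses RDKit; the choice of RDKit is an assumption of this example rather than a tool prescribed here, and the doRandom option assumes a reasonably recent RDKit release.

```python
from rdkit import Chem

smiles = "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C"  # ampicillin, from the text
mol = Chem.MolFromSmiles(smiles)

canonical = Chem.MolToSmiles(mol)                   # one unique canonical SMILES
randomized = [Chem.MolToSmiles(mol, doRandom=True)  # alternative valid SMILES of the same
              for _ in range(5)]                    # molecule, usable for data augmentation

print(canonical)
print(randomized)
```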
Biochemical Language Processing ::: Textual Chemical Data ::: DeepSMILES
DeepSMILES is a novel SMILES-like notation that was proposed to address two challenges of the SMILES syntax: (i) unbalanced parentheses and (ii) ring closure pairs BIBREF29. It was initially designed to enhance machine/deep-learning based approaches that utilize SMILES data as input (github.com/nextmovesoftware/deepsmiles). DeepSMILES was adopted in a drug-target binding affinity prediction task in which the findings highlighted the efficacy of DeepSMILES over SMILES in terms of identifying undetectable patterns BIBREF30. DeepSMILES was also utilized in a molecule generation task in which it was compared to canonical and randomized SMILES text BIBREF31. Here, the results suggested that DeepSMILES might limit the learning ability of the SMILES-based molecule generation models because its syntax is more grammar sensitive with the ring closure alteration and the use of a single symbol for branching (i.e. “)") introducing longer sequences.
Biochemical Language Processing ::: Textual Chemical Data ::: SELFIES
SELF-referencIng Embedding Strings (SELFIES) is an alternative sequence-based representation that is built upon “semantically constrained graphs" BIBREF32. Each symbol in a SELFIES sequence indicates a recursive Chomsky-2 type grammar, and can thus be used to convert the sequence representation to a unique graph. SELFIES utilize SMILES syntax to extract words that will correspond to semantically valid graphs (github.com/aspuru-guzik-group/selfies). BIBREF32 compared SELFIES, DeepSMILES and SMILES representations in terms of validity in cases where random character mutations are introduced. The evaluations on the QM9 dataset yielded results in the favor of SELFIES.
Biochemical Language Processing ::: Textual Chemical Data ::: InChI
InChI is the IUPAC International Chemical Identifier, which is a non-proprietary and open-source structural representation (inchi-trust.org) BIBREF33. The InChIKey is a character-based representation that is generated by hashing the InChI strings in order to shorten them. InChi representation has several layers (each) separated by the “/" symbol.
The software that generates InChI is publicly available and InChI does not suffer from ambiguity problems. However, the less complex structure of SMILES makes the SMILES representation easier to use, as shown in a molecular generation study BIBREF34 and in building meaningful chemical representations with a translation-based system BIBREF35 . Interestingly, the translation model was able to translate from InChI to canonical SMILES, whereas it failed to translate from canonical SMILES to InChI. BIBREF35 suggested that the complex syntax of InChI made it difficult for the model to generate a correct sequence.
Biochemical Language Processing ::: Textual Chemical Data ::: SMARTS
SMiles ARbitrary Target Specification (SMARTS) is a language that contains specialized symbols and logic operators that enable substructure (pattern) search on SMILES strings BIBREF36. SMARTS can be used in any task that requires pattern matching on a SMILES string such as, querying databases or creating rule dictionaries such as RECAP BIBREF37 and BRICS BIBREF38 to extract fragments from SMILES (daylight.com/dayhtml/doc/theory/theory.smarts.html).
Biochemical Language Processing ::: Textual Chemical Data ::: SMIRKS
SMIRKS notation can be used to describe generic reactions (also known as transforms) that comprise one or more changes in atoms and bonds (https://daylight.com/daycgi_tutorials/smirks_examples.html). These transforms are based on “reactant to product" notation, and thus make use of SMILES and SMARTS languages. SMIRKS is utilized in tasks such as constructing an online transform database BIBREF39 and predicting metabolic transformations BIBREF40. A recent study achieves a similar performance to rule-based systems in classifying chemical reactions by learning directly from SMILES text with transforms via neural networks BIBREF41.
Biochemical Language Processing ::: Identification of Words/Tokens
Similar to words in natural languages, we can assume that the “words" of biochemical sequences convey significant information (e.g. folding, function etc) about the entities. In this regard, each compound/protein is analogous to a sentence, and each compound/protein unit is analogous to a word. Therefore, if we can decipher the grammar of biochemical languages, it would be easier to model bio/cheminformatics problems. However, protein and chemical words are not explicitly known and different approaches are needed to extract syntactically and semantically meaningful biochemical word units from these textual information sources (i.e. sequences). Here, we review some of the most common tokenization approaches used to determine the words of biochemical languages.
Biochemical Language Processing ::: Identification of Words/Tokens ::: $k$-mers ($n$-grams)
One of the simplest approaches in NLP to extract a small language unit is to use $k$-mers, also known as $n$-grams. $k$-mers indicate $k$ consecutive overlapping characters that are extracted from the sequence with a sliding window approach. “LINGO", which is one of the earliest applications of $k$-mers in cheminformatics, is the name of the overlapping 4-mers that are extracted from SMILES strings BIBREF42 . 4-mers of the SMILES of ampicillin, “CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C", can be listed as { `CC1(', `C1(C', `1(C(', ..., `O)O)', `)O)C' }. From a sequence of length $l$, a total of $(l-k)+1$ $k$-mers can be extracted. Extracting LINGOs from SMILES is a simple yet powerful idea that has been successfully used to compute molecular similarities, to differentiate between bioisosteric and random molecular pairs BIBREF42 and in a drug-target interaction prediction task BIBREF43 , without requiring 2D or 3D information. The results suggested that a SMILES-based approach to compute the similarity of chemicals is not only as good as a 2D-based similarity measurement, but also faster BIBREF43 .
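A minimal sketch of sliding-window $k$-mer (LINGO) extraction, reproducing the ampicillin example above; the protein sequence used is an arbitrary illustrative string.

```python
def kmers(sequence, k=4):
    """Sliding-window k-mers; a sequence of length l yields (l - k) + 1 of them."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

smiles = "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C"
print(kmers(smiles, k=4)[:3])    # ['CC1(', 'C1(C', '1(C('] as listed above
print(kmers("MKTAYIAKQR", k=3))  # 3-mers, commonly used as protein "words"
```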
$k$-mers were successfully utilized as protein BIBREF44 and chemical words BIBREF45 in protein family classification tasks. 3-mers to 5-mers were often considered as the words of the protein sequence. BIBREF46 reported that some 5-mers could be matched to motifs and protein words are most likely a mixture of different $k$-mers. For the protein function prediction task, BIBREF47 decided to choose among the 1000 most frequent words to build the protein vocabulary, whereas BIBREF48 utilized each $k$-mer type separately and showed that 4-mers provided the best performance. In the latter work, instead of using the whole protein sequence, the words were extracted from different length protein segments, which are also long $k$-mers (i.e. 100-mer, 120-mer) with 30 amino-acid gaps. The use of segmented protein sequences yielded better results than using the whole protein sequence, and important and conserved subsequences were highlighted. $k$-mers were also used as features, along with position specific score matrix features, in the protein fold prediction problem BIBREF49.
Biochemical Language Processing ::: Identification of Words/Tokens ::: Longest Common Subsequences
The identification of the longest common subsequence (LCS) of two sequences is critical for detecting their similarity. When there are multiple sequences, LCSs can point to informative patterns. LCSs extracted from SMILES sequences performed similarly well to 4-mers in chemical similarity calculation BIBREF43.
Biochemical Language Processing ::: Identification of Words/Tokens ::: Maximum Common Substructure
BIBREF50 investigated organic chemistry as a language in an interesting study that extracts maximum common substructures (MCS) from the 2D structures of pairs of compounds to build a vocabulary of the molecule corpus. Contrary to the common idea of functional groups (e.g. methyl, ethyl etc.) being “words" of the chemical language, the authors argued that MCSs (i.e. fragments) can be described as the words of the chemical language BIBREF50. A recent work investigated the distribution of these words in different molecule subsets BIBREF51. The “words" followed Zipf's Law, which indicates the relationship between the frequency of a word and its rank (based on the frequency) BIBREF52, similar to most natural languages. Their results also showed that drug “words" are shorter compared to natural product “words".
Biochemical Language Processing ::: Identification of Words/Tokens ::: Minimum Description Length
Minimum Description Length (MDL) is an unsupervised compression-based word segmentation technique in which words of an unknown language are detected by compressing the text corpus. In a protein classification task, each protein was assigned to the family in which its sequence is compressed the most, according to the MDL-based representation BIBREF53 . BIBREF53 investigated whether the MDL-based words of the proteins show similarities to PROSITE patterns BIBREF54 and showed that less conserved residues were compressed less by the algorithm. BIBREF53 also emphasized that the integration of domain knowledge, such as the consideration of the hydrophilic and hydrophobic amino acids in the words (i.e. grammar building), might prove effective.
Biochemical Language Processing ::: Identification of Words/Tokens ::: Byte-Pair Encoding
Byte-Pair Encoding (BPE) generates words based on high frequency subsequences starting from frequent characters BIBREF55. A recent study adopted a linguistic-inspired approach to predict protein-protein interactions (PPIs) BIBREF56. Their model was built upon “words" (i.e. bio-words) of the protein language, in which BPE was utilized to build the bio-word vocabulary. BIBREF56 suggested that BPE-segmented words indicate a language-like behavior for the protein sequences and reported improved accuracy results compared to using 3-mers as words.
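The toy sketch below illustrates the core BPE merge loop on a few made-up protein fragments; it is not the implementation used by BIBREF56, and real applications would rely on an established BPE library.

```python
from collections import Counter

def merge_pair(tokens, a, b):
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
            out.append(a + b)
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

def bpe(sequences, num_merges=5):
    """Toy BPE: repeatedly merge the most frequent adjacent symbol pair into a new symbol."""
    corpus = [list(seq) for seq in sequences]   # start from single characters (amino acids)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for toks in corpus:
            pairs.update(zip(toks, toks[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append(a + b)
        corpus = [merge_pair(toks, a, b) for toks in corpus]
    return merges, corpus

merges, segmented = bpe(["MKTAYIAKQR", "MKTAYIAKQL", "GMKTAYIA"])
print(merges)      # learned subword "bio-words"
print(segmented)   # each sequence segmented into the learned units
```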
Biochemical Language Processing ::: Identification of Words/Tokens ::: Pattern-based words
Subsequences that are conserved throughout evolution are usually associated with protein structure and function. These conserved sequences can be detected as patterns via multiple sequence alignment (MSA) techniques and Hidden Markov Models (HMM). PROSITE BIBREF54, a public database that provides information on domains and motifs of proteins, uses regular expressions (i.e. RE or regex) to match these subsequences.
Protein domains have been investigated for their potential of being the words of the protein language. One earlier study suggested that folded domains could be considered as “phrases/clauses" rather than “words" because of the higher semantic complexity between them BIBREF57. Later, domains were described as the words, and domain architectures as sentences of the language BIBREF58, BIBREF59. Protein domains were treated as the words of multi-domain proteins in order to evaluate the semantic meaning behind the domains BIBREF60. The study supported prior work by BIBREF59 suggesting that domains displayed syntactic and semantic features, but there are only a few multi-domain proteins with more than six domains limiting the use of domains as words to build sentences. Protein domains and motifs have also been utilized as words in different drug discovery tasks such as the prediction of drug-target interaction affinity BIBREF61, BIBREF62. These studies showed that motifs and domains together contribute to the prediction as much as the use of the full protein sequence.
SMARTS is a well-known regex-based querying language that is used to identify patterns in a SMILES string. SMARTS has been utilized to build specific rules for small-molecule protonation BIBREF63, to design novel ligands based on the fragments connected to the active site of a target BIBREF64, and to help generate products in reaction prediction BIBREF65. MolBlocks, a molecular fragmentation tool, also adopted SMARTS dictionaries to partition a SMILES string into overlapping fragments BIBREF36. Furthermore, MACCS BIBREF66 and PubChem BIBREF11 Fingerprints (FP) are molecular descriptors that are described as binary vectors based on the absence/presence of substructures that are predefined with SMARTS language. A recent study on protein family clustering uses a ligand-centric representation to describe proteins in which ligands were represented with SMILES-based (i.e. 8-mers) representation, MACCS and Extended Connectivity Fingerprint (ECFP6) BIBREF45. The results indicate that three of the ligand representation approaches provide similar performances for protein family clustering.
To the best of our knowledge, there is no comprehensive evaluation of the different word extraction techniques except a comparison by BIBREF56 of the performance of BPE-based words against $k$-mers in a PPI prediction task. Such comparison would provide important insights to the bio/cheminformatics community.
Biochemical Language Processing ::: Text representation
The representation of a text (e.g. molecule or protein sequence) aims to capture syntactic, semantic or relational meaning. In the widely used Vector Space Model (VSM), a text is represented by a feature vector of either weighted or un-weighted terms BIBREF67. The terms of this vector may correspond to words, phrases, k-grams, characters, or dimensions in a semantic space such as in the distributed word embedding representation models. The similarity between two texts represented in the vector space model is usually computed using the cosine similarity metric BIBREF68, which corresponds to the cosine of the angle between the two vectors.
Similarly to the one-hot encoding scheme BIBREF69, in the traditional bag-of-words BIBREF70 and term frequency-inverse document frequency (TF-IDF) BIBREF71 text representation models, each word corresponds to a different dimension in the vector space. Therefore, the similarity between two words in the vector space is zero, even if they are synonymous or related to each other. In the distributed representation models BIBREF72 on the other hand, words are represented as dense vectors based on their context. Words that occur in similar contexts have similar vector representations. In this subsection, we review these commonly used text representation models with their applications in cheminformatics.
Biochemical Language Processing ::: Text representation ::: Bag-of-words representation
In this representation model, a text is represented as a vector of bag-of-words, where the multiplicity of the words is taken into account, but the order of the words in the text is lost BIBREF70 . For instance, the SMILES of ampicillin “CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C" can be represented as a bag-of 8-mers as follows: {“CC1(C(N2", “C1(C(N2C", “1(C(N2C(", “(C(N2C(S",...,“N)C(=O)O" ,“)C(=O)O)" ,“C(=O)O)C" }. We can vectorize it as $S = [1, 1, 1, 1, ...,1, 1, 1]$ in which each number refers to the frequency of the corresponding 8-mer.
Bag-of-words representation was used in molecular similarity computation, in which the SMILES string and the LINGOs extracted from it were treated as the sentence and words, respectively BIBREF42. The unique LINGOs were considered for each pair and a Tanimoto coefficient was used to measure the similarity BIBREF42. Another approach called SMILES Fingerprint (SMIfp) also adopted bag-of-words to create representations of molecules for a ligand-based virtual screening task BIBREF73. SMIfp considered 34 unique symbols in SMILES strings to create a frequency-based vector representation, which was utilized to compute molecular similarity. SMIfp provided comparable results to a chemical representation technique that also incorporated polar group and topological information, as well as atom and bond information, in recovering active compounds amongst decoys BIBREF73.
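A simplified sketch of LINGO-based similarity in the spirit described above, using a Tanimoto coefficient over sets of unique 4-mers; the second SMILES is a structurally related penicillin scaffold included only for illustration.

```python
def lingos(smiles, k=4):
    """Unique k-mers (LINGOs) of a SMILES string."""
    return {smiles[i:i + k] for i in range(len(smiles) - k + 1)}

def tanimoto(a, b):
    """Tanimoto (Jaccard) coefficient over two sets of LINGOs."""
    return len(a & b) / len(a | b)

ampicillin = "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C"
related_penicillin = "CC1(C(N2C(S1)C(C2=O)NC(=O)CC3=CC=CC=C3)C(=O)O)C"  # illustrative analogue
print(tanimoto(lingos(ampicillin), lingos(related_penicillin)))
```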
Biochemical Language Processing ::: Text representation ::: TF-IDF
The bag-of-words model, which is based on counting the terms of the sentence/document, might prioritize insignificant but frequent words. To overcome this issue, a weighting scheme can be integrated into the vector representation in order to give more importance to the rare terms that might play a key role in detecting similarity between two documents. One popular weighting approach is to use term frequency-inverse document frequency (TF-IDF) BIBREF71. TF refers to the frequency of a term in the document, and IDF denotes the logarithm of the total number of documents over the number of documents in which the term appears. IDF is therefore an indicator of uniqueness. For instance, the IDF of “C3=CC=CC" is lower than that of “(C(N2C(S", which appears in fewer compounds. Therefore, the existence of “(C(N2C(S" in a compound may be more informative.
TF-IDF weighting was utilized to assign weights to LINGOs that were extracted from SMILES in order to compute molecule similarity using cosine similarity BIBREF43 . Molecular similarities were then used as input for drug-target interaction prediction. A similar performance between TF-IDF weighted LINGO and a graph-based chemical similarity measurement was obtained. BIBREF50 used TF-IDF weighting on chemical bonds to show that bonds with higher TF-IDF scores have a higher probability of breaking.
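A hedged sketch of TF-IDF weighted LINGO similarity, using scikit-learn's character n-gram analyzer as a stand-in for LINGO extraction; this is not the exact pipeline of BIBREF43, and the small SMILES corpus is illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

smiles_corpus = [
    "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C",  # ampicillin (from the text)
    "CC1(C(N2C(S1)C(C2=O)NC(=O)CC3=CC=CC=C3)C(=O)O)C",     # related penicillin scaffold
    "CC(=O)OC1=CC=CC=C1C(=O)O",                            # aspirin
]

# Character 4-grams play the role of LINGOs; lowercasing is disabled to preserve SMILES case.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(4, 4), lowercase=False)
tfidf = vectorizer.fit_transform(smiles_corpus)
print(cosine_similarity(tfidf))   # pairwise TF-IDF weighted LINGO similarities
```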
Biochemical Language Processing ::: Text representation ::: One-hot representation
In one-hot representation, for a given vocabulary of a text, each unique word/character is represented with a binary vector that has a 1 in the corresponding position, while the vector positions for the remaining words/characters are filled with 0s BIBREF69. One-hot encoding is fast to build, but might lead to sparse vectors with large dimensions based on the size of the vocabulary (e.g. one million unique words in the vocabulary means one million dimensional binary vectors filled with zeros except one). It is a popular choice, especially in machine learning-based bio/cheminformatic studies to encode different types of information such as SMILES characters BIBREF74, BIBREF75, atom/bond types BIBREF76, BIBREF77 and molecular properties BIBREF78.
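A minimal sketch of character-level one-hot encoding of a SMILES string; note that a realistic tokenizer would also handle multi-character atoms such as Cl or Br, which this toy alphabet ignores.

```python
import numpy as np

def one_hot_smiles(smiles, alphabet):
    """One row per character, one column per alphabet symbol."""
    index = {ch: i for i, ch in enumerate(alphabet)}
    mat = np.zeros((len(smiles), len(alphabet)), dtype=np.int8)
    for row, ch in enumerate(smiles):
        mat[row, index[ch]] = 1
    return mat

aspirin = "CC(=O)OC1=CC=CC=C1C(=O)O"
alphabet = sorted(set(aspirin))                  # toy alphabet built from the example itself
print(one_hot_smiles(aspirin, alphabet).shape)   # (24, 6)
```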
Biochemical Language Processing ::: Text representation ::: Distributed representations
The one-hot encoding builds discrete representations, and thus does not consider the relationships between words. For instance, the cosine similarity of two different words is 0 even if they are semantically similar. However, if the word (i.e. 8-mer) “(C(N2C(S" frequently appears together with the word “C(C2=O)N" in SMILES strings, this might suggest that they have related “meanings". Furthermore, two words might have similar semantic meanings even though they are syntactically apart. This is where distributed vector representations come into play.
The distributed word embeddings models gained popularity with the introduction of Word2Vec BIBREF72 and GloVe BIBREF79. The main motivation behind the Word2Vec model is to build real-valued high-dimensional vectors for each word in the vocabulary based on the context in which they appear. There are two main approaches in Word2Vec: (i) Skip-Gram and (ii) Continuous Bag of Words (CBOW). The aim of the Skip-Gram model is to predict context words given the center word, whereas in CBOW the objective is to predict the target word given the context words. Figure FIGREF32 depicts the Skip-gram architecture in Word2Vec BIBREF72. For the vocabulary of size $V$, given the target word “2C(S", the model learns to predict two context words. Both target word and context words are represented as one-hot encoded binary vectors of size $V$. The number of neurons in the hidden layer determines the size of the embedding vectors. The weight matrix between the input layer and the hidden layer stores the embeddings of the vocabulary words. The $i^{th}$ row of the embedding matrix corresponds to the embedding of the $i^{th}$ word.
The Word2Vec architecture has inspired a great deal of research in the bio/cheminformatics domains. The Word2Vec algorithm has been successfully applied for determining protein classes BIBREF44 and protein-protein interactions (PPI) BIBREF56. BIBREF44 treated 3-mers as the words of the protein sequence and observed that 3-mers with similar biophysical and biochemical properties clustered together when their embeddings were mapped onto the 2D space. BIBREF56, on the other hand, utilized BPE-based word segmentation (i.e. bio-words) to determine the words. The authors argued that the improved performance for bio-words in the PPI prediction task might be due to the segmentation-based model providing more distinct words than $k$-mers, which include repetitive segments. Another recent study treated multi-domain proteins as sentences in which each domain was recognized as a word BIBREF60. The Word2Vec algorithm was trained on the domains (i.e. PFAM domain identifiers) of eukaryotic protein sequences to learn semantically interpretable representations of them. The domain representations were then investigated in terms of the Gene Ontology (GO) annotations that they inherit. The results indicated that semantically similar domains share similar GO terms.
The Word2Vec algorithm was also utilized for representation of chemicals. SMILESVec, a text-based ligand representation technique, utilized Word2Vec to learn embeddings for 8-mers (i.e. chemical words) that are extracted from SMILES strings BIBREF45. SMILESVec was utilized in protein representation such that proteins were represented as the average of the SMILESVec vectors of their interacting ligands. The results indicated comparable performances for ligand-based and sequence based protein representations in protein family/superfamily clustering. Mol2Vec BIBREF80, on the other hand, was based on the identifiers of the substructures (i.e. words of the chemical) that were extracted via Extended Connectivity Fingerprint (ECFP) BIBREF81. The results showed a better performance with Mol2Vec than with the simple Morgan Fingerprint in a solubility prediction task, and a comparable performance to graph-based chemical representation BIBREF82. BIBREF83 also employed the Word2vec model that was trained on the fragments that are extracted from SMILES strings using a graph traversing algorithm. The results favored the distributed fragment-based ligand representation over fragment-based binary vector representation in a ring system clustering task and showed a comparable performance in the prediction of toxicity against Tetrahymena BIBREF83. Figure FIGREF33 illustrates the pipeline of a text-based molecule representation based on $k$-mers.
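A minimal sketch in the spirit of such chemical-word embeddings: 8-mers are extracted from SMILES and fed to gensim's Word2Vec, and a compound vector is taken as the average of its word vectors. The corpus, hyperparameters, and the assumption of gensim 4.x (the vector_size argument) are illustrative, not the settings used in the cited studies.

```python
from gensim.models import Word2Vec

def kmers(seq, k=8):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

smiles_corpus = [
    "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C",  # ampicillin
    "CC(=O)OC1=CC=CC=C1C(=O)O",                            # aspirin
    "CN1C=NC2=C1C(=O)N(C(=O)N2C)C",                        # caffeine
]
sentences = [kmers(s) for s in smiles_corpus]   # each SMILES becomes a "sentence" of 8-mer "words"

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1, epochs=50)
ligand_vec = sum(model.wv[w] for w in sentences[0]) / len(sentences[0])  # mean of word vectors
print(ligand_vec.shape)   # (100,)
```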
FP2Vec is another method that utilizes embedding representation for molecules, however instead of the Word2Vec algorithm, it depends on a Convolutional Neural Network (CNN) to build molecule representations to be used in toxicity prediction tasks BIBREF84. CNN architectures have also been utilized for drug-target binding affinity prediction BIBREF85 and drug-drug interaction prediction BIBREF75 to build representations for chemicals from raw SMILES strings, as well as for protein fold prediction BIBREF86 to learn representations for proteins from amino-acid sequences. SMILES2Vec adopted different DL architectures (GRU, LSTM, CNN+GRU, and CNN+LSTM) to learn molecule embeddings, which were then used to predict toxicity, affinity and solubility BIBREF87. A CNN+GRU combination was better at the prediction of chemical properties. A recent study compared several DL approaches to investigate the effect of different chemical representations, which were learned through these architectures, on a chemical property prediction problem BIBREF88. The authors also combined DL architectures that were trained on SMILES strings with the MACCS fingerprint, proposing a combined representation for molecules (i.e. CheMixNet). The CheMixNet representation outperformed the other representations that were trained on a single data type such as SMILES2Vec (i.e. SMILES) and Chemception (i.e. 2D graph) BIBREF89.
Biochemical Language Processing ::: Text generation
Text generation is a primary NLP task, where the aim is to generate grammatically and semantically correct text, with many applications ranging from question answering to machine translation BIBREF90. It is generally formulated as a language modeling task, where a statistical model is trained using a large corpus to predict the distribution of the next word in a given context. In machine translation, the generated text is the translation of an input text in another language.
Medicinal chemistry campaigns use methods such as scaffold hopping BIBREF91 or fragment-based drug design BIBREF3 to build and test novel molecules but the chemotype diversity and novelty may be limited. It is possible to explore uncharted chemical space with text generation models, which learn a distribution from the available data (i.e. SMILES language) and generate novel molecules that share similar physicochemical properties with the existing molecules BIBREF74. Molecule generation can then be followed by assessing physicochemical properties of the generated compound or its binding potential to a target protein BIBREF74. For a comprehensive review of molecule generation methodologies, including graph-based models, we refer the reader to the review of BIBREF92. Machine translation models have also been recently adapted to text-based molecule generation, which start with one “language" such as that of reactants and generate a novel text in another “language" such as that of products BIBREF28. Below, we present recent studies on text based molecule generation.
RNN models, which learn a probability distribution from a training set of molecules, are commonly used in molecule generation to propose novel molecules similar to the ones in the training data set. For instance, given the SMILES sequence “C(=O", the model would predict the next character to be “)" with a higher probability than “(". The production of valid SMILES strings, however, is a challenge because of the complicated SMILES syntax that utilizes parentheses to indicate branches and ring numbers. The sequential nature of RNNs, which may miss long range dependencies, is a disadvantage of these models BIBREF74. RNN descendants LSTM and GRU, which model long-term dependencies, are better suited for remembering matching rings and branch closures. Motivated by such a hypothesis, BIBREF74 and BIBREF93 successfully pioneered de novo molecule generation using LSTM architecture to generate valid novel SMILES. BIBREF74 further modified their model to generate target-specific molecules by integrating a target bioactivity prediction step to filter out inactive molecules and then retraining the LSTM network. In another study, transfer learning was adopted to fine-tune an LSTM-based SMILES generation model so that structurally similar leads were generated for targets with few known ligands BIBREF94. BIBREF95 and BIBREF96 used reinforcement learning (RL) to bias their model toward compounds with desired properties. Merk et al. BIBREF97, BIBREF98 fine-tuned their LSTM model on a target-focused library of active molecules and synthesized some novel compounds. BIBREF99 explored how much of the GDB-13 database BIBREF100 they could rediscover by using an RNN-based generative model.
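A compact sketch of a character-level SMILES language model with sampling, in the spirit of the RNN-based generators discussed above; the vocabulary, start/end tokens, and dimensions are illustrative assumptions, and without training on a SMILES corpus the samples will not be valid molecules.

```python
import torch
import torch.nn as nn

class SmilesLM(nn.Module):
    """Character-level LSTM language model over SMILES, trained with next-character prediction."""
    def __init__(self, vocab, emb_dim=64, hidden_dim=256):
        super().__init__()
        self.vocab = vocab
        self.emb = nn.Embedding(len(vocab), emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden_dim, len(vocab))

    def forward(self, x, state=None):
        h, state = self.lstm(self.emb(x), state)
        return self.out(h), state

    @torch.no_grad()
    def sample(self, max_len=100):
        """Draw characters from '^' (start) until '$' (end) or max_len is reached."""
        idx = {ch: i for i, ch in enumerate(self.vocab)}
        x, state, out = torch.tensor([[idx["^"]]]), None, []
        for _ in range(max_len):
            logits, state = self(x, state)
            ch_id = torch.multinomial(torch.softmax(logits[0, -1], dim=-1), 1).item()
            if self.vocab[ch_id] == "$":
                break
            out.append(self.vocab[ch_id])
            x = torch.tensor([[ch_id]])
        return "".join(out)

vocab = list("^$CNOcnos()=#123")   # illustrative start/end tokens plus a toy SMILES alphabet
model = SmilesLM(vocab)
print(model.sample())              # meaningless until the model is trained on a SMILES corpus
```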
The variational Auto-encoder (VAE) is another widely adopted text generation architecture BIBREF101. BIBREF34 adopted this architecture for molecule generation. A traditional auto-encoder encodes the input into the latent space, which is then decoded to reconstruct the input. VAE differs from AE by explicitly defining a probability distribution on the latent space to generate new samples. BIBREF34 hypothesized that the variational part of the system integrates noise to the encoder, so that the decoder can be more robust to the large diversity of molecules. However, the authors also reported that the non-context free property of SMILES caused by matching ring numbers and parentheses might often lead the decoder to generate invalid SMILES strings. A grammar variational auto-encoder (GVAE), where the grammar for SMILES is explicitly defined instead of the auto-encoder learning the grammar itself, was proposed to address this issue BIBREF102. This way, the generation is based on the pre-defined grammar rules and the decoding process generates grammar production rules that should also be grammatically valid. Although syntactic validity would be ensured, the molecules may not have semantic validity (chemical validity). BIBREF103 built upon the VAE BIBREF34 and GVAE BIBREF102 architectures and introduced a syntax-directed variational autoencoder (SD-VAE) model for the molecular generation task. The syntax-direct generative mechanism in the decoder contributed to creating both syntactically and semantically valid SMILES sequences. BIBREF103 compared the latent representations of molecules generated by VAE, GVAE, and SD-VAE, and showed that SD-VAE provided better discriminative features for druglikeness. BIBREF104 proposed an adversarial AE for the same task. Conditional VAEs BIBREF105, BIBREF106 were trained to generate molecules conditioned on a desired property. The challenges that SMILES syntax presents inspired the introduction of new syntax such as DeepSMILES BIBREF29 and SELFIES BIBREF32 (details in Section SECREF3).
Generative Adversarial Network (GAN) models generate novel molecules by using two components: the generator network generates novel molecules, and the discriminator network aims to distinguish between the generated molecules and real molecules BIBREF107. In text generation models, the novel molecules are drawn from a distribution, which are then fine-tuned to obtain specific features, whereas adversarial learning utilizes generator and discriminator networks to produce novel molecules BIBREF107, BIBREF108. ORGAN BIBREF108, a molecular generation methodology, was built upon a sequence generative adversarial network (SeqGAN) from NLP BIBREF109. ORGAN integrated RL in order to generate molecules with desirable properties such as solubility, druglikeness, and synthetizability through using domain-specific rewards BIBREF108.
Biochemical Language Processing ::: Text generation ::: Machine Translation
Machine translation finds use in cheminformatics in “translation" from one language (e.g. reactants) to another (e.g. products). Machine translation is a challenging task because the syntactic and semantic dependencies of each language differ from one another and this may give rise to ambiguities. Neural Machine Translation (NMT) models benefit from the potential of deep learning architectures to build a statistical model that aims to find the most probable target sequence for an input sequence by learning from a corpus of examples BIBREF110, BIBREF111. The main advantage of NMT models is that they provide an end-to-end system that utilizes a single neural network to convert the source sequence into the target sequence. BIBREF110 refer to their model as a sequence-to-sequence (seq2seq) system that addresses a major limitation of DNNs that can only work with fixed-dimensionality information as input and output. However, in the machine translation task, the length of the input sequences is not fixed, and the length of the output sequences is not known in advance.
The NMT models are based on an encoder-decoder architecture that aims to maximize the probability of generating the target sequence (i.e. most likely correct translation) for the given source sequence. The first encoder-decoder architectures in NMT performed poorly as the sequence length increased mainly because the encoder mapped the source sequence into a single fixed-length vector. However, fixed-size representation may be too small to encode all the information required to translate long sequences BIBREF112. To overcome the issue of the fixed context vector (Figure FIGREF35a), a new method was developed, in which every source token was encoded into a memory bank independently (Figure FIGREF35b). The decoder could then selectively focus on parts of this memory bank during translation BIBREF112, BIBREF113. This technique is known as “attention mechanism" BIBREF114.
Inspired by the successes in NMT, the first application of seq2seq models in cheminformatics was for reaction prediction by BIBREF115, who proposed to translate the SMILES strings of reactants and separated reagents to the corresponding product SMILES. The authors hypothesized that the reaction prediction problem can be re-modelled as a translation system in which both inputs and output are sequences. Their model used GRUs for the encoder-decoder and a Bahdanau BIBREF112 attention layer in between. BIBREF116 in contrast, performed the opposite task, the single-step retrosynthesis prediction, using a similar encoder-decoder model. When given a product and a reaction class, their model predicted the reactants that would react together to form that product. One major challenge in the retrosynthesis prediction task is the possibility of multiple correct targets, because more than one reactant combination could lead to the same product. Similarly to BIBREF115, BIBREF117 also adopted a seq2seq model to translate precursors into products, utilizing the SMILES representation for the reaction prediction problem. Their model used a different attention mechanism by BIBREF113 and LSTMs in the encoder and decoder. By visualizing the attention weights, an atom-wise mapping between the product and the reactants could be obtained and used to understand the predictions better. BIBREF117 showed that seq2seq models could compete with graph neural network-based models in the reaction prediction task BIBREF118.
A translation model was also employed to learn a data-driven representation of molecules BIBREF35. BIBREF35 translated between two textual representations of a chemical, InChi and SMILES, to extract latent representations that can integrate the semantic “meaning" of the molecule. The results indicated a statistically significant improvement with the latent representations in a ligand-based virtual screening task against fingerprint methods such as ECFP (i.e. Morgan algorithm). NMT architectures were also adopted in a protein function prediction task for the first time, in which “words" that were extracted from protein sequences are translated into GO identifiers using RNNs as encoder and decoder BIBREF47. Although exhibiting a comparable performance to the state-of-the-art protein function prediction methods, the authors argued that the performance of the model could be improved by determining more meaningful “words" such as biologically interpretable fragments.
Transformer is an attention-based encoder-decoder architecture that was introduced in NMT by BIBREF119. Although similar to previous studies BIBREF110, BIBREF111, BIBREF112 in terms of adopting an encoder-decoder architecture, Transformer differs from the others because it only consists of attention and feed-forward layers in the encoder and decoder. As transformers do not contain an RNN, positional embeddings are needed to capture order relationships in the sequences. BIBREF28 were the first to adopt the Transformer architecture in cheminformatics and designed a Molecular Transformer for the chemical reaction prediction task. The Molecular Transformer, which was atom-mapping independent, outperformed the other algorithms (e.g. based on a two-step convolutional graph neural network BIBREF120) on commonly used benchmark data sets. Transformer architecture was also adopted to learn representations for chemicals in prediction of drug-target interactions BIBREF121 and molecular properties BIBREF122 in which the proposed systems either outperformed the state-of-the-art systems or obtained comparable results.
Future Perspectives
The increase in the biochemical data available in public databases combined with the advances in computational power and NLP methodologies have given rise to a rapid growth in the publication rate in bio/cheminformatics, especially through pre-print servers. As this interdisciplinary field grows, novel opportunities come hand in hand with novel challenges.
Future Perspectives ::: Challenges
The major challenges that can be observed from investigating these studies can be summarized as follows: (i) the need for universalized benchmarks and metrics, (ii) reproducibility of the published methodologies, (iii) bias in available data, and (iv) biological and chemical interpretability/explainability of the solutions.
Future Perspectives ::: Challenges ::: Benchmarking
There are several steps in the drug discovery pipeline, from affinity prediction to the prediction of other chemical properties such as toxicity, and solubility. The use of different datasets and different evaluation metrics makes the assessment of model performance challenging. Comprehensive benchmarking platforms that can assess the success of different tools are still lacking. A benchmarking environment rigorously brings together the suitable data sets and evaluation methodologies in order to provide a fair comparison between the available tools. Such environments are available for molecule generation task from MOSES BIBREF123 and GuacaMol BIBREF124. MoleculeNet is also a similar attempt to build a benchmarking platform for tasks such as prediction of binding affinity and toxicity BIBREF82.
Future Perspectives ::: Challenges ::: Reproducibility
Despite the focus on sharing datasets and source codes on popular software development platforms such as GitHub (github.com) or Zenodo (zenodo.org), it is still a challenge to use data or code from other groups. The use of FAIR (Findable, Accessible, Interoperable and Reusable) (meta)data principles can guide the management of scientific data BIBREF125. Automated workflows that are easy to use and do not require programming knowledge encourage the flow of information from one discipline to the other. Platform-free solutions such as Docker (docker.com) in which an image of the source code is saved and can be opened without requiring further installation could accelerate the reproduction process. A recent initiative to provide a unified-framework for predictive models in genomics can quickly be adopted by the medicinal chemistry community BIBREF126.
Future Perspectives ::: Challenges ::: Bias in data
The available data has two significant sources of bias, one related to the limited sampling of chemical space and the other related to the quality and reproducibility of the data. The lack of information about some regions of the protein/chemical landscape limits the current methodologies to the exploitation of data rather than full exploration. The data on protein-compound interactions is biased toward some privileged molecules or proteins because the protein targets are related to common diseases or the molecules are similar to known actives. Hence, not all of chemical space is sampled, and chemical space is expanded based on the similarity of an active compound to others, which is also referred to as inductive bias BIBREF127. Data about proteins or molecules related to rare diseases is limited and inactive molecules are frequently not reported. Moreover, some experimental measurements that are not reproducible across different labs or conditions limit their reliability BIBREF128. BIBREF129 and BIBREF130 have recently discussed the bias factors in dataset composition. Zhang and Lee have also addressed the sources of bias in the data and proposed to use Bayesian deep learning to quantify uncertainty.
Future Perspectives ::: Challenges ::: Interpretability
The black box nature of ML/DL methodologies makes assigning meaning to the results difficult. Explainability of an ML model is especially critical in drug discovery to facilitate the use of these findings by medicinal chemists, who can contribute to the knowledge loop. explainable-AI (XAI) is a current challenge that calls for increased interpretability of AI solutions for a given context and includes several factors such as trust, safety, privacy, security, fairness and confidence BIBREF131. Explainability is also critical for the domain experts to assess the reliability of new methodolodogies. Interpretability is usually classified into two categories: post-hoc (i.e. after) and ante-hoc (i.e. before). Post-hoc approaches explain the predictions of the model, whereas ante-hoc approaches integrate explainability into the model. Recent studies have already aimed to map the semantic meaning behind the models onto the biochemical description. An attentive pooling network, a two-way attention system that extends the attention mechanism by allowing input nodes to be aware of one another, is one approach that has been employed in drug-target interaction prediction BIBREF132. BIBREF76 showed that mapping activations of hidden neurons in feed-forward neural networks to pharmacophores, or linking atom representations computed by convolutional filters to substructures in a graph-convolution model, are possible ways of integrating explainability into AI-based drug discovery systems. BIBREF133 also demonstrated a novel approach that combines molecule generation and retrosynthesis prediction to generate synthesizable molecules. Integration of such solutions to drug discovery problems will not only be useful for computational researchers but also for the medicinal chemistry community.
Future Perspectives ::: Opportunities
The NLP field has seen tremendous advances in the past five years, starting with the introduction of distributed word embedding algorithms such as Word2Vec BIBREF72 and Glove BIBREF79. The concept of contextualized word embeddings (i.e. ELMo) was introduced soon after BIBREF134. Here, the embedding of the word is not fixed, but changes according to the context (i.e. sentence) in which it appears. These advances continued with more complicated architectures such as Transformer (i.e. Generative Pre-Training or GPT) BIBREF135 and BERT BIBREF136, RoBERTa BIBREF137, GPT2 BIBREF138, Transformer-XL BIBREF139, and XLNet BIBREF140 models. Such models with a focus on context might have significant impact not only on drug discovery, but also on the protein folding problem, which is critical for predicting structural properties of the protein partner. Secondary structure BIBREF141, BIBREF142, BIBREF143, domain boundary BIBREF144 and fold BIBREF49 prediction studies often use sequence information in combination with similarity to available structures. The recent success of AlphaFold BIBREF145 in Critical Assessment of Protein Structure Prediction (CASP) competitions (http://predictioncenter.org/) showed that the enhanced definitions of context, brought about by the advances in machine/deep learning systems, might be useful for capturing the global dependencies in protein sequences to detect interactions between residues separated in sequence space but close together in 3D space BIBREF141.
Unsupervised learning can be used on “big" textual data through using language models with attention BIBREF119 and using pre-trained checkpoints from language models BIBREF146. Encoder-decoder architectures have also had significant impact on solving text generation and machine translation problems and were successfully applied to molecule generation problem. As NLP moves forward, the most recent approaches such as Topic-Guided VAE BIBREF90 and knowledge graphs with graph transformers BIBREF147 will easily find application in bio/cheminformatics.
Recent NLP models are not domain-specific, and they can help with the generalization of models BIBREF138. Current studies emphasize multi-task learning, which requires the use of DNNs that share parameters to learn more information from related but individual tasks BIBREF148, BIBREF138. Combined with the transferability of contextual word representation models, multi-task learning can also provide solutions to drug discovery which has many interwoven tasks, such as chemical property prediction and molecule generation.
Language has an important power, not only for daily communication but also for the communication of codified domain knowledge. Deciphering the meaning behind text is the primary purpose of NLP, which inevitably has found its way to bio/cheminformatics. The complicated nature of biochemical text makes understanding the semantic construction of the hidden words all the more challenging and interesting. The applications we discussed in this review provide a broad perspective of how NLP is already integrated with the processing of biochemical text. A common theme in all of these applications is the use of AI-based methodologies that drive and benefit from the NLP field. Novel advances in NLP and ML are providing auspicious results to solving long-standing bio/cheminformatics problems.
With this review, we have summarized the impact of NLP on bio/cheminformatics to encourage this already interdisciplinary field to take advantage of recent advances. The communication between researchers from different backgrounds and domains can be enhanced through establishing a common vocabulary toward common goals. This review has been an attempt to facilitate this conversation.
Acknowledgement
This work is partially supported by TUBITAK (The Scientific and Technological Research Council of Turkey) under grant number 119E133. HO acknowledges TUBITAK-BIDEB 2211 scholarship program and thanks Gökçe Uludoğan for her comments on figures. EO thanks Prof. Amedeo Caflisch for hosting her at the University of Zurich during her sabbatical. | Yes |
8c8a32592184c88f61fac1eef12c7d233dbec9dc | 8c8a32592184c88f61fac1eef12c7d233dbec9dc_0 | Q: Are this models usually semi/supervised or unsupervised?
Text: Introduction
The design and discovery of novel drugs for protein targets is powered by an understanding of the underlying principles of protein-compound interaction. Biochemical methods that measure affinity and biophysical methods that describe the interaction in atomistic level detail have provided valuable information toward a mechanistic explanation for bimolecular recognition BIBREF0. However, more often than not, compounds with drug potential are discovered serendipitously or by phenotypic drug discovery BIBREF1 since this highly specific interaction is still difficult to predict BIBREF2. Protein structure based computational strategies such as docking BIBREF3, ultra-large library docking for discovering new chemotypes BIBREF4, and molecular dynamics simulations BIBREF3 or ligand based strategies such as quantitative structure-activity relationship (QSAR) BIBREF5, BIBREF6, and molecular similarity BIBREF7 have been powerful at narrowing down the list of compounds to be tested experimentally. With the increase in available data, machine learning and deep learning architectures are also starting to play a significant role in cheminformatics and drug discovery BIBREF8. These approaches often require extensive computational resources or they are limited by the availability of 3D information. On the other hand, text based representations of biochemical entities are more readily available as evidenced by the 19,588 biomolecular complexes (3D structures) in PDB-Bind BIBREF9 (accessed on Nov 13, 2019) compared with 561,356 (manually annotated and reviewed) protein sequences in Uniprot BIBREF10 (accessed on Nov 13, 2019) or 97 million compounds in Pubchem BIBREF11 (accessed on Nov 13, 2019). The advances in natural language processing (NLP) methodologies make processing of text based representations of biomolecules an area of intense research interest.
The discipline of natural language processing (NLP) comprises a variety of methods that explore a large amount of textual data in order to bring unstructured, latent (or hidden) knowledge to the fore BIBREF12. Advances in this field are beneficial for tasks that use language (textual data) to build insight. The languages in the domains of bioinformatics and cheminformatics can be investigated under three categories: (i) natural language (mostly English) that is used in documents such as scientific publications, patents, and web pages, (ii) domain specific language, codified by a systematic set of rules extracted from empirical data and describing the human understanding of that domain (e.g. proteins, chemicals, etc), and (iii) structured forms such as tables, ontologies, knowledge graphs or databases BIBREF13. Processing and extracting information from textual data written in natural languages is one of the major application areas of NLP methodologies in the biomedical domain (also known as BioNLP). Information extracted with BioNLP methods is most often shared in structured databases or knowledge graphs BIBREF14. We refer the reader to the comprehensive review on BioNLP by BIBREF15. Here, we will be focusing on the application of NLP to domain specific, unstructured biochemical textual representations toward exploration of chemical space in drug discovery efforts.
We can view the textual representation of biomedical/biochemical entities as a domain-specific language. For instance, a genome sequence is an extensive script of four characters (A, T, G, C) constituting a genomic language. In proteins, the composition of 20 different natural amino acids in varying lengths builds the protein sequences. Post-translational modifications expand this 20 letter alphabet and confer different properties to proteins BIBREF16. For chemicals there are several text based alternatives such as chemical formula, IUPAC International Chemical Identifier (InChI) BIBREF17 and Simplified Molecular Input Line Entry Specification (SMILES) BIBREF18.
Today, the era of “big data" boosts the “learning" aspect of computational approaches substantially, with the ever-growing amounts of information provided by publicly available databases such as PubChem BIBREF11, ChEMBL BIBREF19, UniProt BIBREF10. These databases are rich in biochemical domain knowledge that is in textual form, thus building an efficient environment in which NLP-based techniques can thrive. Furthermore, advances in computational power allow the design of more complex methodologies, which in turn drive the fields of machine learning (ML) and NLP. However, biological and chemical interpretability and explainability remain among the major challenges of AI-based approaches. Data management in terms of access, interoperability and reusability are also critical for the development of NLP models that can be shared across disciplines.
With this review, we aim to provide an outline of how the field of NLP has influenced the studies in bioinformatics and cheminformatics and the impact it has had over the last decade. Not only are NLP methodologies facilitating processing and exploitation of biochemical text, they also promise an “understanding" of biochemical language to elucidate the underlying principles of bimolecular recognition. NLP technologies are enhancing the biological and chemical knowledge with the final goal of accelerating drug discovery for improving human health. We highlight the significance of an interdisciplinary approach that integrates computer science and natural sciences.
Introduction ::: NLP Basics
BIBREF20 describes NLP on three levels: (i) the word level in which the smallest meaningful unit is extracted to define the morphological structure, (ii) the sentence level where grammar and syntactic validity are determined, and (iii) the domain or context level in which the sentences have global meaning. Similarly, our review is organized in three parts in which bio-chemical data is investigated at: (i) word level, (ii) sentence (text) level, and (iii) understanding text and generating meaningful sequences. Table TABREF37 summarizes important NLP concepts related to the processing of biochemical data. We refer to these concepts and explain their applications in the following sections.
All NLP technology relates to specific AI architectures. In Table TABREF38 W-we summarize the main ML and deep learning (DL) architectures that will be mentioned throughout the review.
Biochemical Language Processing
The language-like properties of text-based representations of chemicals were recognized more than 50 years ago by Garfield BIBREF21. He proposed a “chemico-linguistic" approach to representing chemical nomenclature with the aim of instructing the computer to draw chemical diagrams. Protein sequence has been an important source of information about protein structure and function since Anfinsen's experiment BIBREF22. Alignment algorithms, such as Needleman-Wunsh BIBREF23 and Smith-Waterman BIBREF24, rely on sequence information to identify functionally or structurally critical elements of proteins (or genes).
To make predictions about the structure and function of compounds or proteins, the understanding of these sequences is critical for bioinformatics tasks with the final goal of accelerating drug discovery. Much like a linguist who uses the tools of language to bring out hidden knowledge, biochemical sequences can be processed to propose novel solutions, such as predicting interactions between chemicals and proteins or generating new compounds based on the level of understanding. In this section, we will review the applications of some of the NLP-concepts to biochemical data in order to solve bio/cheminformatics problems.
Biochemical Language Processing ::: Textual Chemical Data
Information about chemicals can be found in repositories such as PubChem BIBREF11, which includes information on around 100 million compounds, or Drugbank BIBREF25, which includes information on around 10,000 drugs. The main textual sources used in drug discovery are textual representations of chemicals and proteins. Table TABREF39 lists some sources that store different types of biochemical information.
Chemical structures can be represented in different forms that can be one-dimensional (1D), 2D, and 3D. Table TABREF40 depicts different identifiers/representations of the drug ampicillin. While the 2D and 3D representations are also used in ML based approaches BIBREF8, here we focus on the 1D form, which is the representation commonly used in NLP.
Biochemical Language Processing ::: Textual Chemical Data ::: IUPAC name
The International Union of Pure and Applied Chemistry (IUPAC) scheme (i.e. nomenclature) is used to name compounds following pre-defined rules such that the names of the compounds are unique and consistent with each other (iupac.org/).
Biochemical Language Processing ::: Textual Chemical Data ::: Chemical Formula
The chemical formula is one of the simplest and most widely-known ways of describing chemicals using letters (i.e. element symbols), numbers, parentheses, and (-/+) signs. This representation gives information about which elements and how many of them are present in the compound.
Biochemical Language Processing ::: Textual Chemical Data ::: SMILES
The Simplified Molecular Input Entry Specification (SMILES) is a text-based form of describing molecular structures and reactions BIBREF18. SMILES strings can be obtained by traversing the 2D graph representation of the compound and therefore SMILES provides more complex information than the chemical formula. Moreover, due to its textual form, SMILES takes 50% to 70% less space than other representation methods such as an identical connection table (daylight.com/dayhtml/doc/theory/theory.smiles.html).
SMILES notation is similar to a language with its own set of rules. Just like it is possible to express the same concept with different words in natural languages, the SMILES notation allows molecules to be represented with more than one unique SMILES. Although this may sound like a significant ambiguity, the possibility of using different SMILES to represent the same molecule was successfully adopted as a data augmentation strategy by various groups (BIBREF26, BIBREF27, BIBREF28).
Canonical SMILES can provide a unique SMILES representation. However, different databases such as PubChem and ChEMBL might use different canonicalization algorithms to generate different unique SMILES. OpenSMILES (opensmiles.org/opensmiles.html) is a new platform that aims to universalize the SMILES notation. In isomeric SMILES, isotopism and stereochemistry information of a molecule is encoded using a variety of symbols (“/", “\", “@", “@@").
Biochemical Language Processing ::: Textual Chemical Data ::: DeepSMILES
DeepSMILES is a novel SMILES-like notation that was proposed to address two challenges of the SMILES syntax: (i) unbalanced parentheses and (ii) ring closure pairs BIBREF29. It was initially designed to enhance machine/deep-learning based approaches that utilize SMILES data as input (github.com/nextmovesoftware/deepsmiles). DeepSMILES was adopted in a drug-target binding affinity prediction task in which the findings highlighted the efficacy of DeepSMILES over SMILES in terms of identifying undetectable patterns BIBREF30. DeepSMILES was also utilized in a molecule generation task in which it was compared to canonical and randomized SMILES text BIBREF31. Here, the results suggested that DeepSMILES might limit the learning ability of the SMILES-based molecule generation models because its syntax is more grammar sensitive with the ring closure alteration and the use of a single symbol for branching (i.e. “)") introducing longer sequences.
Biochemical Language Processing ::: Textual Chemical Data ::: SELFIES
SELF-referencIng Embedding Strings (SELFIES) is an alternative sequence-based representation that is built upon “semantically constrained graphs" BIBREF32. Each symbol in a SELFIES sequence indicates a recursive Chomsky-2 type grammar, and can thus be used to convert the sequence representation to a unique graph. SELFIES utilize SMILES syntax to extract words that will correspond to semantically valid graphs (github.com/aspuru-guzik-group/selfies). BIBREF32 compared SELFIES, DeepSMILES and SMILES representations in terms of validity in cases where random character mutations are introduced. The evaluations on the QM9 dataset yielded results in the favor of SELFIES.
Biochemical Language Processing ::: Textual Chemical Data ::: InChI
InChI is the IUPAC International Chemical Identifier, which is a non-proprietary and open-source structural representation (inchi-trust.org) BIBREF33. The InChIKey is a character-based representation that is generated by hashing the InChI strings in order to shorten them. InChi representation has several layers (each) separated by the “/" symbol.
The software that generates InChi is publicly available and InChi does not suffer from ambiguity problems. However, its less complex structure makes the SMILES representation easier to use as shown in a molecular generation study BIBREF34 and in building meaningful chemical representations with a translation-based system BIBREF35. Interestingly, the translation model was able to translate from InChi to canonical SMILES, whereas it failed to translate from canonical SMILES to InChi. BIBREF35 suggested that the complex syntax of InChi made it difficult for the model to generate a correct sequence.
Biochemical Language Processing ::: Textual Chemical Data ::: SMARTS
SMiles ARbitrary Target Specification (SMARTS) is a language that contains specialized symbols and logic operators that enable substructure (pattern) search on SMILES strings BIBREF36. SMARTS can be used in any task that requires pattern matching on a SMILES string such as, querying databases or creating rule dictionaries such as RECAP BIBREF37 and BRICS BIBREF38 to extract fragments from SMILES (daylight.com/dayhtml/doc/theory/theory.smarts.html).
Biochemical Language Processing ::: Textual Chemical Data ::: SMIRKS
SMIRKS notation can be used to describe generic reactions (also known as transforms) that comprise one or more changes in atoms and bonds (https://daylight.com/daycgi_tutorials/smirks_examples.html). These transforms are based on “reactant to product" notation, and thus make use of SMILES and SMARTS languages. SMIRKS is utilized in tasks such as constructing an online transform database BIBREF39 and predicting metabolic transformations BIBREF40. A recent study achieves a similar performance to rule-based systems in classifying chemical reactions by learning directly from SMILES text with transforms via neural networks BIBREF41.
Biochemical Language Processing ::: Identification of Words/Tokens
Similar to words in natural languages, we can assume that the “words" of biochemical sequences convey significant information (e.g. folding, function etc) about the entities. In this regard, each compound/protein is analogous to a sentence, and each compound/protein unit is analogous to a word. Therefore, if we can decipher the grammar of biochemical languages, it would be easier to model bio/cheminformatics problems. However, protein and chemical words are not explicitly known and different approaches are needed to extract syntactically and semantically meaningful biochemical word units from these textual information sources (i.e. sequences). Here, we review some of the most common tokenization approaches used to determine the words of biochemical languages.
Biochemical Language Processing ::: Identification of Words/Tokens ::: @!START@$k$@!END@-mers (@!START@$n$@!END@-grams)
One of the simplest approaches in NLP to extract a small language unit is to use $k$-mers, also known as $n$-grams. $k$-mers indicate $k$ consecutive overlapping characters that are extracted from the sequence with a sliding window approach. “LINGO", which is one of the earliest applications of $k$-mers in cheminformatics, is the name of the overlapping 4-mers that are extracted from SMILES strings BIBREF42. 4-mers of the SMILES of ampicillin, “CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C", can be listed as { `CC1(', `C1(C', `1(C(', ..., `O)O)', `)O)C' }. From a sequence of length $l$, a total of $(l-n)+1$ $k$-mers can be extracted. Extracting LINGOs from SMILES is a simple yet powerful idea that has been successfully used to compute molecular similarities, to differentiate between bioisosteric and random molecular pairs BIBREF42 and in a drug-target interaction prediction task BIBREF43, without requiring 2D or 3D information. The results suggested that a SMILES-based approach to compute the similarity of chemicals is not only as good as a 2D-based similarity measurement, but also faster BIBREF43.
$k$-mers were successfully utilized as protein BIBREF44 and chemical words BIBREF45 in protein family classification tasks. 3-mers to 5-mers were often considered as the words of the protein sequence. BIBREF46 reported that some 5-mers could be matched to motifs and protein words are most likely a mixture of different $k$-mers. For the protein function prediction task, BIBREF47 decided to choose among the 1000 most frequent words to build the protein vocabulary, whereas BIBREF48 utilized each $k$-mer type separately and showed that 4-mers provided the best performance. In the latter work, instead of using the whole protein sequence, the words were extracted from different length protein segments, which are also long $k$-mers (i.e. 100-mer, 120-mer) with 30 amino-acid gaps. The use of segmented protein sequences yielded better results than using the whole protein sequence, and important and conserved subsequences were highlighted. $k$-mers were also used as features, along with position specific score matrix features, in the protein fold prediction problem BIBREF49.
Biochemical Language Processing ::: Identification of Words/Tokens ::: Longest Common Subsequences
The identification of the longest common subsequence (LCS) of two sequences is critical for detecting their similarity. When there are multiple sequences, LCSs can point to informative patterns. LCSs extracted from SMILES sequences performed similarly well to 4-mers in chemical similarity calculation BIBREF43.
Biochemical Language Processing ::: Identification of Words/Tokens ::: Maximum Common Substructure
BIBREF50 investigated organic chemistry as a language in an interesting study that extracts maximum common substructures (MCS) from the 2D structures of pairs of compounds to build a vocabulary of the molecule corpus. Contrary to the common idea of functional groups (e.g. methyl, ethyl etc.) being “words" of the chemical language, the authors argued that MCSs (i.e. fragments) can be described as the words of the chemical language BIBREF50. A recent work investigated the distribution of these words in different molecule subsets BIBREF51. The “words" followed Zipf's Law, which indicates the relationship between the frequency of a word and its rank (based on the frequency) BIBREF52, similar to most natural languages. Their results also showed that drug “words" are shorter compared to natural product “words".
Biochemical Language Processing ::: Identification of Words/Tokens ::: Minimum Description Length
Minimum Description Length (MDL) is an unsupervised compression-based word segmentation technique in which words of an unknown language are detected by compressing the text corpus. In a protein classification task, each protein was assigned to the family in which its sequence is compressed the most, according to the MDL-based representation BIBREF53. BIBREF53 investigated whether the MDL-based words of the proteins show similarities to PROSITE patterns BIBREF54 and showed that less conserved residues were compressed less by the algorithm. BIBREF53 also emphasized that the integration of domain knowledge, such as the consideration of the hydrophilic and hydrophobic aminoacids in the words (i.e. grammar building), might prove effective.
Biochemical Language Processing ::: Identification of Words/Tokens ::: Byte-Pair Encoding
Byte-Pair Encoding (BPE) generates words based on high frequency subsequences starting from frequent characters BIBREF55. A recent study adopted a linguistic-inspired approach to predict protein-protein interactions (PPIs) BIBREF56. Their model was built upon “words" (i.e. bio-words) of the protein language, in which BPE was utilized to build the bio-word vocabulary. BIBREF56 suggested that BPE-segmented words indicate a language-like behavior for the protein sequences and reported improved accuracy results compared to using 3-mers as words.
Biochemical Language Processing ::: Identification of Words/Tokens ::: Pattern-based words
Subsequences that are conserved throughout evolution are usually associated with protein structure and function. These conserved sequences can be detected as patterns via multiple sequence alignment (MSA) techniques and Hidden Markov Models (HMM). PROSITE BIBREF54, a public database that provides information on domains and motifs of proteins, uses regular expressions (i.e. RE or regex) to match these subsequences.
Protein domains have been investigated for their potential of being the words of the protein language. One earlier study suggested that folded domains could be considered as “phrases/clauses" rather than “words" because of the higher semantic complexity between them BIBREF57. Later, domains were described as the words, and domain architectures as sentences of the language BIBREF58, BIBREF59. Protein domains were treated as the words of multi-domain proteins in order to evaluate the semantic meaning behind the domains BIBREF60. The study supported prior work by BIBREF59 suggesting that domains displayed syntactic and semantic features, but there are only a few multi-domain proteins with more than six domains limiting the use of domains as words to build sentences. Protein domains and motifs have also been utilized as words in different drug discovery tasks such as the prediction of drug-target interaction affinity BIBREF61, BIBREF62. These studies showed that motifs and domains together contribute to the prediction as much as the use of the full protein sequence.
SMARTS is a well-known regex-based querying language that is used to identify patterns in a SMILES string. SMARTS has been utilized to build specific rules for small-molecule protonation BIBREF63, to design novel ligands based on the fragments connected to the active site of a target BIBREF64, and to help generate products in reaction prediction BIBREF65. MolBlocks, a molecular fragmentation tool, also adopted SMARTS dictionaries to partition a SMILES string into overlapping fragments BIBREF36. Furthermore, MACCS BIBREF66 and PubChem BIBREF11 Fingerprints (FP) are molecular descriptors that are described as binary vectors based on the absence/presence of substructures that are predefined with SMARTS language. A recent study on protein family clustering uses a ligand-centric representation to describe proteins in which ligands were represented with SMILES-based (i.e. 8-mers) representation, MACCS and Extended Connectivity Fingerprint (ECFP6) BIBREF45. The results indicate that three of the ligand representation approaches provide similar performances for protein family clustering.
To the best of our knowledge, there is no comprehensive evaluation of the different word extraction techniques except a comparison by BIBREF56 of the performance of BPE-based words against $k$-mers in a PPI prediction task. Such comparison would provide important insights to the bio/cheminformatics community.
Biochemical Language Processing ::: Text representation
The representation of a text (e.g. molecule or protein sequence) aims to capture syntactic, semantic or relational meaning. In the widely used Vector Space Model (VSM), a text is represented by a feature vector of either weighted or un-weighted terms BIBREF67. The terms of this vector may correspond to words, phrases, k-grams, characters, or dimensions in a semantic space such as in the distributed word embedding representation models. The similarity between two texts represented in the vector space model is usually computed using the cosine similarity metric BIBREF68, which corresponds to the cosine of the angle between the two vectors.
Similarly to the one-hot encoding scheme BIBREF69, in the traditional bag-of-words BIBREF70 and term frequency-inverse document frequency (TF-IDF) BIBREF71 text representation models, each word corresponds to a different dimension in the vector space. Therefore, the similarity between two words in the vector space is zero, even if they are synonymous or related to each other. In the distributed representation models BIBREF72 on the other hand, words are represented as dense vectors based on their context. Words that occur in similar contexts have similar vector representations. In this subsection, we review these commonly used text representation models with their applications in cheminformatics.
Biochemical Language Processing ::: Text representation ::: Bag-of-words representation
In this representation model, a text is represented as a vector of bag-of-words, where the multiplicity of the words is taken into account, but the order of the words in the text is lost BIBREF70. For instance, the SMILES of ampicillin “CC1(C(N2C(S1)C(C2=O)NC(=O)C(
C3=CC=CC=C3)N)C(=O)O)C" can be represented as a bag-of 8-mers as follows: {“CC1(C(N2", “C1(C(N2C", “1(C(N2C(", “(C(N2C(S",...,“N)C(=O)O" ,“)C(=O)O)" ,“C(=O)O)C" }. We can vectorize it as $S = [1, 1, 1, 1, ...,1, 1, 1]$ in which each number refers to the frequency of the corresponding 8-mer.
Bag-of-words representation was used in molecular similarity computation, in which the SMILES string and the LINGOs extracted from it were treated as the sentence and words, respectively BIBREF42. The unique LINGOs were considered for each pair and a Tanimoto coefficient was used to measure the similarity BIBREF42. Another approach called SMILES Fingerprint (SMIfp) also adopted bag-of-words to create representations of molecules for a ligand-based virtual screening task BIBREF73. SMIfp considered 34 unique symbols in SMILES strings to create a frequency-based vector representation, which was utilized to compute molecular similarity. SMIfp provided comparable results to a chemical representation technique that also incorporated polar group and topological information, as well as atom and bond information, in recovering active compounds amongst decoys BIBREF73.
Biochemical Language Processing ::: Text representation ::: TF-IDF
The bag-of-words model, which is based on counting the terms of the sentence/document, might prioritize insignificant but frequent words. To overcome this issue, a weighting scheme can be integrated into the vector representation in order to give more importance to the rare terms that might play a key role in detecting similarity between two documents. One popular weighting approach is to use term frequency-inverse document frequency (TF-IDF) BIBREF71. TF refers to the frequency of a term in the document, and IDF denotes the logarithm of the total number of documents over the number of documents in which the term appears. IDF is therefore an indicator of uniqueness. For instance, the IDF of “C3=CC=CC" is lower than that of “(C(N2C(S", which appears in fewer compounds. Therefore, the existence of “(C(N2C(S" in a compound may be more informative.
TF-IDF weigthing was utilized to assign weights to LINGOs that were extracted from SMILES in order to compute molecule similarity using cosine similarity BIBREF43. Molecular similarities were then used as input for drug-target interaction prediction. A similar performance between TF-IDF weighted LINGO and a graph-based chemical similarity measurement was obtained. BIBREF50 used TF-IDF weighting on chemical bonds to show that bonds with higher TF-IDF scores have a higher probability of breaking.
Biochemical Language Processing ::: Text representation ::: One-hot representation
In one-hot representation, for a given vocabulary of a text, each unique word/character is represented with a binary vector that has a 1 in the corresponding position, while the vector positions for the remaining words/characters are filled with 0s BIBREF69. One-hot encoding is fast to build, but might lead to sparse vectors with large dimensions based on the size of the vocabulary (e.g. one million unique words in the vocabulary means one million dimensional binary vectors filled with zeros except one). It is a popular choice, especially in machine learning-based bio/cheminformatic studies to encode different types of information such as SMILES characters BIBREF74, BIBREF75, atom/bond types BIBREF76, BIBREF77 and molecular properties BIBREF78.
Biochemical Language Processing ::: Text representation ::: Distributed representations
The one-hot encoding builds discrete representations, and thus does not consider the relationships between words. For instance, the cosine similarity of two different words is 0 even if they are semantically similar. However, if the word (i.e. 8-mer) “(C(N2C(S" frequently appears together with the word “C(C2=O)N" in SMILES strings, this might suggest that they have related “meanings". Furthermore, two words might have similar semantic meanings even though they are syntactically apart. This is where distributed vector representations come into play.
The distributed word embeddings models gained popularity with the introduction of Word2Vec BIBREF72 and GloVe BIBREF79. The main motivation behind the Word2Vec model is to build real-valued high-dimensional vectors for each word in the vocabulary based on the context in which they appear. There are two main approaches in Word2Vec: (i) Skip-Gram and (ii) Continuous Bag of Words (CBOW). The aim of the Skip-Gram model is to predict context words given the center word, whereas in CBOW the objective is to predict the target word given the context words. Figure FIGREF32 depicts the Skip-gram architecture in Word2Vec BIBREF72. For the vocabulary of size $V$, given the target word “2C(S", the model learns to predict two context words. Both target word and context words are represented as one-hot encoded binary vectors of size $V$. The number of neurons in the hidden layer determines the size of the embedding vectors. The weight matrix between the input layer and the hidden layer stores the embeddings of the vocabulary words. The $i^{th}$ row of the embedding matrix corresponds to the embedding of the $i^{th}$ word.
The Word2Vec architecture has inspired a great deal of research in the bio/cheminformatics domains. The Word2Vec algorithm has been successfully applied for determining protein classes BIBREF44 and protein-protein interactions (PPI) BIBREF56. BIBREF44 treated 3-mers as the words of the protein sequence and observed that 3-mers with similar biophysical and biochemical properties clustered together when their embeddings were mapped onto the 2D space. BIBREF56, on the other hand, utilized BPE-based word segmentation (i.e. bio-words) to determine the words. The authors argued that the improved performance for bio-words in the PPI prediction task might be due to the segmentation-based model providing more distinct words than $k$-mers, which include repetitive segments. Another recent study treated multi-domain proteins as sentences in which each domain was recognized as a word BIBREF60. The Word2Vec algorithm was trained on the domains (i.e. PFAM domain identifiers) of eukaryotic protein sequences to learn semantically interpretable representations of them. The domain representations were then investigated in terms of the Gene Ontology (GO) annotations that they inherit. The results indicated that semantically similar domains share similar GO terms.
The Word2Vec algorithm was also utilized for representation of chemicals. SMILESVec, a text-based ligand representation technique, utilized Word2Vec to learn embeddings for 8-mers (i.e. chemical words) that are extracted from SMILES strings BIBREF45. SMILESVec was utilized in protein representation such that proteins were represented as the average of the SMILESVec vectors of their interacting ligands. The results indicated comparable performances for ligand-based and sequence based protein representations in protein family/superfamily clustering. Mol2Vec BIBREF80, on the other hand, was based on the identifiers of the substructures (i.e. words of the chemical) that were extracted via Extended Connectivity Fingerprint (ECFP) BIBREF81. The results showed a better performance with Mol2Vec than with the simple Morgan Fingerprint in a solubility prediction task, and a comparable performance to graph-based chemical representation BIBREF82. BIBREF83 also employed the Word2vec model that was trained on the fragments that are extracted from SMILES strings using a graph traversing algorithm. The results favored the distributed fragment-based ligand representation over fragment-based binary vector representation in a ring system clustering task and showed a comparable performance in the prediction of toxicity against Tetrahymena BIBREF83. Figure FIGREF33 illustrates the pipeline of a text-based molecule representation based on $k$-mers.
FP2Vec is another method that utilizes embedding representation for molecules, however instead of the Word2Vec algorithm, it depends on a Convolutional Neural Network (CNN) to build molecule representations to be used in toxicity prediction tasks BIBREF84. CNN architectures have also been utilized for drug-target binding affinity prediction BIBREF85 and drug-drug interaction prediction BIBREF75 to build representations for chemicals from raw SMILES strings, as well as for protein fold prediction BIBREF86 to learn representations for proteins from amino-acid sequences. SMILES2Vec adopted different DL architectures (GRU, LSTM, CNN+GRU, and CNN+LSTM) to learn molecule embeddings, which were then used to predict toxicity, affinity and solubility BIBREF87. A CNN+GRU combination was better at the prediction of chemical properties. A recent study compared several DL approaches to investigate the effect of different chemical representations, which were learned through these architectures, on a chemical property prediction problem BIBREF88. The authors also combined DL architectures that were trained on SMILES strings with the MACCS fingerprint, proposing a combined representation for molecules (i.e. CheMixNet). The CheMixNet representation outperformed the other representations that were trained on a single data type such as SMILES2Vec (i.e. SMILES) and Chemception (i.e. 2D graph) BIBREF89.
Biochemical Language Processing ::: Text generation
Text generation is a primary NLP task, where the aim is to generate grammatically and semantically correct text, with many applications ranging from question answering to machine translation BIBREF90. It is generally formulated as a language modeling task, where a statistical model is trained using a large corpus to predict the distribution of the next word in a given context. In machine translation, the generated text is the translation of an input text in another language.
Medicinal chemistry campaigns use methods such as scaffold hopping BIBREF91 or fragment-based drug design BIBREF3 to build and test novel molecules but the chemotype diversity and novelty may be limited. It is possible to explore uncharted chemical space with text generation models, which learn a distribution from the available data (i.e. SMILES language) and generate novel molecules that share similar physicochemical properties with the existing molecules BIBREF74. Molecule generation can then be followed by assessing physicochemical properties of the generated compound or its binding potential to a target protein BIBREF74. For a comprehensive review of molecule generation methodologies, including graph-based models, we refer the reader to the review of BIBREF92. Machine translation models have also been recently adapted to text-based molecule generation, which start with one “language" such as that of reactants and generate a novel text in another “language" such as that of products BIBREF28. Below, we present recent studies on text based molecule generation.
RNN models, which learn a probability distribution from a training set of molecules, are commonly used in molecule generation to propose novel molecules similar to the ones in the training data set. For instance, given the SMILES sequence “C(=O", the model would predict the next character to be “)" with a higher probability than “(". The production of valid SMILES strings, however, is a challenge because of the complicated SMILES syntax that utilizes parentheses to indicate branches and ring numbers. The sequential nature of RNNs, which may miss long range dependencies, is a disadvantage of these models BIBREF74. RNN descendants LSTM and GRU, which model long-term dependencies, are better suited for remembering matching rings and branch closures. Motivated by such a hypothesis, BIBREF74 and BIBREF93 successfully pioneered de novo molecule generation using LSTM architecture to generate valid novel SMILES. BIBREF74 further modified their model to generate target-specific molecules by integrating a target bioactivity prediction step to filter out inactive molecules and then retraining the LSTM network. In another study, transfer learning was adopted to fine-tune an LSTM-based SMILES generation model so that structurally similar leads were generated for targets with few known ligands BIBREF94. BIBREF95 and BIBREF96 used reinforcement learning (RL) to bias their model toward compounds with desired properties. Merk et al. BIBREF97, BIBREF98 fine-tuned their LSTM model on a target-focused library of active molecules and synthesized some novel compounds. BIBREF99 explored how much of the GDB-13 database BIBREF100 they could rediscover by using an RNN-based generative model.
The variational Auto-encoder (VAE) is another widely adopted text generation architecture BIBREF101. BIBREF34 adopted this architecture for molecule generation. A traditional auto-encoder encodes the input into the latent space, which is then decoded to reconstruct the input. VAE differs from AE by explicitly defining a probability distribution on the latent space to generate new samples. BIBREF34 hypothesized that the variational part of the system integrates noise to the encoder, so that the decoder can be more robust to the large diversity of molecules. However, the authors also reported that the non-context free property of SMILES caused by matching ring numbers and parentheses might often lead the decoder to generate invalid SMILES strings. A grammar variational auto-encoder (GVAE), where the grammar for SMILES is explicitly defined instead of the auto-encoder learning the grammar itself, was proposed to address this issue BIBREF102. This way, the generation is based on the pre-defined grammar rules and the decoding process generates grammar production rules that should also be grammatically valid. Although syntactic validity would be ensured, the molecules may not have semantic validity (chemical validity). BIBREF103 built upon the VAE BIBREF34 and GVAE BIBREF102 architectures and introduced a syntax-directed variational autoencoder (SD-VAE) model for the molecular generation task. The syntax-direct generative mechanism in the decoder contributed to creating both syntactically and semantically valid SMILES sequences. BIBREF103 compared the latent representations of molecules generated by VAE, GVAE, and SD-VAE, and showed that SD-VAE provided better discriminative features for druglikeness. BIBREF104 proposed an adversarial AE for the same task. Conditional VAEs BIBREF105, BIBREF106 were trained to generate molecules conditioned on a desired property. The challenges that SMILES syntax presents inspired the introduction of new syntax such as DeepSMILES BIBREF29 and SELFIES BIBREF32 (details in Section SECREF3).
Generative Adversarial Network (GAN) models generate novel molecules by using two components: the generator network generates novel molecules, and the discriminator network aims to distinguish between the generated molecules and real molecules BIBREF107. In text generation models, the novel molecules are drawn from a distribution, which are then fine-tuned to obtain specific features, whereas adversarial learning utilizes generator and discriminator networks to produce novel molecules BIBREF107, BIBREF108. ORGAN BIBREF108, a molecular generation methodology, was built upon a sequence generative adversarial network (SeqGAN) from NLP BIBREF109. ORGAN integrated RL in order to generate molecules with desirable properties such as solubility, druglikeness, and synthetizability through using domain-specific rewards BIBREF108.
Biochemical Language Processing ::: Text generation ::: Machine Translation
Machine translation finds use in cheminformatics in “translation" from one language (e.g. reactants) to another (e.g. products). Machine translation is a challenging task because the syntactic and semantic dependencies of each language differ from one another and this may give rise to ambiguities. Neural Machine Translation (NMT) models benefit from the potential of deep learning architectures to build a statistical model that aims to find the most probable target sequence for an input sequence by learning from a corpus of examples BIBREF110, BIBREF111. The main advantage of NMT models is that they provide an end-to-end system that utilizes a single neural network to convert the source sequence into the target sequence. BIBREF110 refer to their model as a sequence-to-sequence (seq2seq) system that addresses a major limitation of DNNs that can only work with fixed-dimensionality information as input and output. However, in the machine translation task, the length of the input sequences is not fixed, and the length of the output sequences is not known in advance.
The NMT models are based on an encoder-decoder architecture that aims to maximize the probability of generating the target sequence (i.e. most likely correct translation) for the given source sequence. The first encoder-decoder architectures in NMT performed poorly as the sequence length increased mainly because the encoder mapped the source sequence into a single fixed-length vector. However, fixed-size representation may be too small to encode all the information required to translate long sequences BIBREF112. To overcome the issue of the fixed context vector (Figure FIGREF35a), a new method was developed, in which every source token was encoded into a memory bank independently (Figure FIGREF35b). The decoder could then selectively focus on parts of this memory bank during translation BIBREF112, BIBREF113. This technique is known as “attention mechanism" BIBREF114.
Inspired by the successes in NMT, the first application of seq2seq models in cheminformatics was for reaction prediction by BIBREF115, who proposed to translate the SMILES strings of reactants and separated reagents to the corresponding product SMILES. The authors hypothesized that the reaction prediction problem can be re-modelled as a translation system in which both inputs and output are sequences. Their model used GRUs for the encoder-decoder and a Bahdanau BIBREF112 attention layer in between. BIBREF116 in contrast, performed the opposite task, the single-step retrosynthesis prediction, using a similar encoder-decoder model. When given a product and a reaction class, their model predicted the reactants that would react together to form that product. One major challenge in the retrosynthesis prediction task is the possibility of multiple correct targets, because more than one reactant combination could lead to the same product. Similarly to BIBREF115, BIBREF117 also adopted a seq2seq model to translate precursors into products, utilizing the SMILES representation for the reaction prediction problem. Their model used a different attention mechanism by BIBREF113 and LSTMs in the encoder and decoder. By visualizing the attention weights, an atom-wise mapping between the product and the reactants could be obtained and used to understand the predictions better. BIBREF117 showed that seq2seq models could compete with graph neural network-based models in the reaction prediction task BIBREF118.
A translation model was also employed to learn a data-driven representation of molecules BIBREF35. BIBREF35 translated between two textual representations of a chemical, InChI and SMILES, to extract latent representations that can integrate the semantic “meaning" of the molecule. The results indicated a statistically significant improvement with the latent representations in a ligand-based virtual screening task against fingerprint methods such as ECFP (i.e. Morgan algorithm). NMT architectures were also adopted in a protein function prediction task for the first time, in which “words" extracted from protein sequences were translated into GO identifiers using RNNs as encoder and decoder BIBREF47. Although exhibiting a comparable performance to the state-of-the-art protein function prediction methods, the authors argued that the performance of the model could be improved by determining more meaningful “words" such as biologically interpretable fragments.
Transformer is an attention-based encoder-decoder architecture that was introduced to NMT by BIBREF119. Although similar to previous studies BIBREF110, BIBREF111, BIBREF112 in terms of adopting an encoder-decoder architecture, the Transformer differs from the others because it only consists of attention and feed-forward layers in the encoder and decoder. As transformers do not contain an RNN, positional embeddings are needed to capture order relationships in the sequences. BIBREF28 were the first to adopt the Transformer architecture in cheminformatics and designed a Molecular Transformer for the chemical reaction prediction task. The Molecular Transformer, which was atom-mapping independent, outperformed the other algorithms (e.g. based on a two-step convolutional graph neural network BIBREF120) on commonly used benchmark data sets. The Transformer architecture was also adopted to learn representations for chemicals in the prediction of drug-target interactions BIBREF121 and molecular properties BIBREF122, in which the proposed systems either outperformed the state-of-the-art systems or obtained comparable results.
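Because the Transformer has no recurrence, token order must be injected explicitly. The snippet below is a minimal sketch of the standard sinusoidal positional encoding of BIBREF119; the dimensions are illustrative, and learned positional embeddings are an equally common choice.

```python
import numpy as np

def sinusoidal_positional_encoding(max_len, d_model):
    """Standard sine/cosine positional encodings, shape (max_len, d_model)."""
    positions = np.arange(max_len)[:, None]                   # (max_len, 1)
    dims = np.arange(d_model)[None, :]                        # (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    encoding = np.zeros((max_len, d_model))
    encoding[:, 0::2] = np.sin(angles[:, 0::2])               # even dimensions use sine
    encoding[:, 1::2] = np.cos(angles[:, 1::2])               # odd dimensions use cosine
    return encoding

# These encodings are added to the token embeddings of, e.g., a tokenized SMILES string.
pe = sinusoidal_positional_encoding(max_len=128, d_model=64)
print(pe.shape)   # (128, 64)
```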
Future Perspectives
The increase in the biochemical data available in public databases, combined with the advances in computational power and NLP methodologies, has given rise to a rapid growth in the publication rate in bio/cheminformatics, especially through pre-print servers. As this interdisciplinary field grows, novel opportunities come hand in hand with novel challenges.
Future Perspectives ::: Challenges
The major challenges that emerge from these studies can be summarized as follows: (i) the need for universalized benchmarks and metrics, (ii) reproducibility of the published methodologies, (iii) bias in the available data, and (iv) biological and chemical interpretability/explainability of the solutions.
Future Perspectives ::: Challenges ::: Benchmarking
There are several steps in the drug discovery pipeline, from affinity prediction to the prediction of other chemical properties such as toxicity and solubility. The use of different datasets and different evaluation metrics makes the assessment of model performance challenging. Comprehensive benchmarking platforms that can assess the success of different tools are still lacking. A benchmarking environment rigorously brings together suitable data sets and evaluation methodologies in order to provide a fair comparison between the available tools. Such environments are available for the molecule generation task from MOSES BIBREF123 and GuacaMol BIBREF124. MoleculeNet is a similar attempt to build a benchmarking platform for tasks such as the prediction of binding affinity and toxicity BIBREF82.
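As a small illustration of the kind of checks such platforms standardize, the sketch below computes three commonly reported molecule-generation statistics (validity, uniqueness, novelty) for a list of generated SMILES. It assumes RDKit is installed and is not the official MOSES or GuacaMol implementation.

```python
from rdkit import Chem

def generation_metrics(generated_smiles, training_smiles):
    """Rough validity/uniqueness/novelty statistics for generated SMILES."""
    canonical = []
    for smi in generated_smiles:
        mol = Chem.MolFromSmiles(smi)          # None if the SMILES cannot be parsed
        if mol is not None:
            canonical.append(Chem.MolToSmiles(mol))
    validity = len(canonical) / len(generated_smiles)
    unique = set(canonical)
    uniqueness = len(unique) / len(canonical) if canonical else 0.0
    train_set = {Chem.MolToSmiles(Chem.MolFromSmiles(s)) for s in training_smiles}
    novelty = len(unique - train_set) / len(unique) if unique else 0.0
    return {"validity": validity, "uniqueness": uniqueness, "novelty": novelty}

# "C(1" is deliberately invalid to show how validity is affected.
print(generation_metrics(["CCO", "CCO", "c1ccccc1", "C(1"], ["CCO"]))
```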
Future Perspectives ::: Challenges ::: Reproducibility
Despite the focus on sharing datasets and source code on popular software development platforms such as GitHub (github.com) or Zenodo (zenodo.org), it is still a challenge to use data or code from other groups. The use of FAIR (Findable, Accessible, Interoperable and Reusable) (meta)data principles can guide the management of scientific data BIBREF125. Automated workflows that are easy to use and do not require programming knowledge encourage the flow of information from one discipline to the other. Platform-free solutions such as Docker (docker.com), in which an image of the source code is saved and can be run without requiring further installation, could accelerate the reproduction process. A recent initiative to provide a unified framework for predictive models in genomics can quickly be adopted by the medicinal chemistry community BIBREF126.
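A small, library-agnostic habit that complements these platforms is to fix random seeds and record the exact software environment alongside the results. The sketch below is one minimal way to do this in Python; the package list is an example, and a torch.manual_seed call would be added only if PyTorch is part of the pipeline.

```python
import json
import platform
import random
import sys
from importlib import metadata

import numpy as np

def set_seeds(seed=42):
    """Fix the common sources of randomness for a reproducible run."""
    random.seed(seed)
    np.random.seed(seed)

def environment_report(packages=("numpy", "scikit-learn")):
    """Record interpreter, platform, and package versions next to the experiment outputs."""
    return {
        "python": sys.version,
        "platform": platform.platform(),
        "packages": {p: metadata.version(p) for p in packages},
    }

set_seeds(42)
print(json.dumps(environment_report(), indent=2))
```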
Future Perspectives ::: Challenges ::: Bias in data
The available data has two significant sources of bias, one related to the limited sampling of chemical space and the other related to the quality and reproducibility of the data. The lack of information about some regions of the protein/chemical landscape limits the current methodologies to the exploitation of data rather than full exploration. The data on protein-compound interactions is biased toward some privileged molecules or proteins, because the protein targets are related to common diseases or the molecules are similar to known actives. Hence, not all of chemical space is sampled, and chemical space is expanded based on the similarity of an active compound to others, which is also referred to as inductive bias BIBREF127. Data about proteins or molecules related to rare diseases is limited, and inactive molecules are frequently not reported. Moreover, experimental measurements that are not reproducible across different labs or conditions are of limited reliability BIBREF128. BIBREF129 and BIBREF130 have recently discussed the bias factors in dataset composition. Zhang and Lee have also addressed the sources of bias in the data and proposed to use Bayesian deep learning to quantify uncertainty.
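One practical way to probe this sampling bias is to evaluate models on scaffold-based splits, so that structurally similar compounds cannot appear in both the training and test sets. The sketch below groups compounds by their Bemis-Murcko scaffold with RDKit (assumed installed); it is a simplified version of the scaffold splits used in benchmarks such as MoleculeNet BIBREF82.

```python
from collections import defaultdict
from rdkit.Chem.Scaffolds import MurckoScaffold

def scaffold_split(smiles_list, test_fraction=0.2):
    """Group molecules by Bemis-Murcko scaffold and assign whole groups to train or test."""
    groups = defaultdict(list)
    for smi in smiles_list:
        scaffold = MurckoScaffold.MurckoScaffoldSmiles(smiles=smi)   # '' for acyclic molecules
        groups[scaffold].append(smi)
    # Fill the training set with the largest scaffold groups; the remainder forms the test set,
    # which therefore contains the rarer chemotypes.
    ordered = sorted(groups.values(), key=len, reverse=True)
    train, test = [], []
    train_target = int((1.0 - test_fraction) * len(smiles_list))
    for group in ordered:
        (train if len(train) < train_target else test).extend(group)
    return train, test

smiles = ["c1ccccc1O", "c1ccccc1N", "CCO", "CCN", "C1CCCCC1"]
train, test = scaffold_split(smiles, test_fraction=0.4)
print(len(train), len(test))
```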
Future Perspectives ::: Challenges ::: Interpretability
The black box nature of ML/DL methodologies makes assigning meaning to the results difficult. Explainability of an ML model is especially critical in drug discovery to facilitate the use of these findings by medicinal chemists, who can contribute to the knowledge loop. Explainable AI (XAI) is a current challenge that calls for increased interpretability of AI solutions for a given context and includes several factors such as trust, safety, privacy, security, fairness and confidence BIBREF131. Explainability is also critical for domain experts to assess the reliability of new methodologies. Interpretability is usually classified into two categories: post-hoc (i.e. after) and ante-hoc (i.e. before). Post-hoc approaches explain the predictions of the model, whereas ante-hoc approaches integrate explainability into the model. Recent studies have already aimed to map the semantic meaning behind the models onto the biochemical description. An attentive pooling network, a two-way attention system that extends the attention mechanism by allowing input nodes to be aware of one another, is one approach that has been employed in drug-target interaction prediction BIBREF132. BIBREF76 showed that mapping activations of hidden neurons in feed-forward neural networks to pharmacophores, or linking atom representations computed by convolutional filters to substructures in a graph-convolution model, are possible ways of integrating explainability into AI-based drug discovery systems. BIBREF133 also demonstrated a novel approach that combines molecule generation and retrosynthesis prediction to generate synthesizable molecules. Integration of such solutions into drug discovery problems will be useful not only for computational researchers but also for the medicinal chemistry community.
Future Perspectives ::: Opportunities
The NLP field has seen tremendous advances in the past five years, starting with the introduction of distributed word embedding algorithms such as Word2Vec BIBREF72 and GloVe BIBREF79. The concept of contextualized word embeddings (i.e. ELMo) was introduced soon after BIBREF134. Here, the embedding of a word is not fixed, but changes according to the context (i.e. sentence) in which it appears. These advances continued with more complex Transformer-based architectures such as Generative Pre-Training (GPT) BIBREF135, BERT BIBREF136, RoBERTa BIBREF137, GPT-2 BIBREF138, Transformer-XL BIBREF139, and XLNet BIBREF140. Such models, with their focus on context, might have a significant impact not only on drug discovery, but also on the protein folding problem, which is critical for predicting structural properties of the protein partner. Secondary structure BIBREF141, BIBREF142, BIBREF143, domain boundary BIBREF144 and fold BIBREF49 prediction studies often use sequence information in combination with similarity to available structures. The recent success of AlphaFold BIBREF145 in the Critical Assessment of Protein Structure Prediction (CASP) competitions (http://predictioncenter.org/) showed that the enhanced definitions of context, brought about by the advances in machine/deep learning systems, might be useful for capturing the global dependencies in protein sequences to detect interactions between residues separated in sequence space but close together in 3D space BIBREF141.
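The sketch below shows the general pattern for extracting contextual embeddings with the Hugging Face transformers library (assumed installed, together with PyTorch). The checkpoint name is a placeholder for whichever biochemical language model one chooses from the model hub; treat it as a hypothetical identifier rather than a recommendation.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical checkpoint name: substitute a real SMILES- or protein-language model.
CHECKPOINT = "some-org/chemical-language-model"

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModel.from_pretrained(CHECKPOINT)
model.eval()

smiles = "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C"   # ampicillin
encoded = tokenizer(smiles, return_tensors="pt")

with torch.no_grad():
    outputs = model(**encoded)

# One contextual vector per token; mean-pooling gives a single molecule embedding.
token_embeddings = outputs.last_hidden_state          # (1, n_tokens, hidden_size)
molecule_embedding = token_embeddings.mean(dim=1)     # (1, hidden_size)
print(molecule_embedding.shape)
```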
Unsupervised learning can be applied to “big" textual data through attention-based language models BIBREF119 and pre-trained checkpoints from language models BIBREF146. Encoder-decoder architectures have also had a significant impact on solving text generation and machine translation problems and were successfully applied to the molecule generation problem. As NLP moves forward, the most recent approaches such as Topic-Guided VAE BIBREF90 and knowledge graphs with graph transformers BIBREF147 will easily find application in bio/cheminformatics.
Recent NLP models are not domain-specific, and they can help with the generalization of models BIBREF138. Current studies emphasize multi-task learning, which requires the use of DNNs that share parameters to learn more information from related but individual tasks BIBREF148, BIBREF138. Combined with the transferability of contextual word representation models, multi-task learning can also provide solutions to drug discovery which has many interwoven tasks, such as chemical property prediction and molecule generation.
Language has an important power, not only for daily communication but also for the communication of codified domain knowledge. Deciphering the meaning behind text is the primary purpose of NLP, which inevitably has found its way to bio/cheminformatics. The complicated nature of biochemical text makes understanding the semantic construction of the hidden words all the more challenging and interesting. The applications we discussed in this review provide a broad perspective of how NLP is already integrated with the processing of biochemical text. A common theme in all of these applications is the use of AI-based methodologies that drive and benefit from the NLP field. Novel advances in NLP and ML are providing auspicious results to solving long-standing bio/cheminformatics problems.
With this review, we have summarized the impact of NLP on bio/cheminformatics to encourage this already interdisciplinary field to take advantage of recent advances. The communication between researchers from different backgrounds and domains can be enhanced through establishing a common vocabulary toward common goals. This review has been an attempt to facilitate this conversation.
Acknowledgement
This work is partially supported by TUBITAK (The Scientific and Technological Research Council of Turkey) under grant number 119E133. HO acknowledges TUBITAK-BIDEB 2211 scholarship program and thanks Gökçe Uludoğan for her comments on figures. EO thanks Prof. Amedeo Caflisch for hosting her at the University of Zurich during her sabbatical. | Both supervised and unsupervised, depending on the task that needs to be solved. |
16646ee77975fed372b76ce639e2664ae2105dcf | 16646ee77975fed372b76ce639e2664ae2105dcf_0 | Q: Is there any concrete example in the paper that shows that this approach had huge impact on drug discovery?
Text: Introduction
The design and discovery of novel drugs for protein targets is powered by an understanding of the underlying principles of protein-compound interaction. Biochemical methods that measure affinity and biophysical methods that describe the interaction in atomistic level detail have provided valuable information toward a mechanistic explanation for bimolecular recognition BIBREF0. However, more often than not, compounds with drug potential are discovered serendipitously or by phenotypic drug discovery BIBREF1 since this highly specific interaction is still difficult to predict BIBREF2. Protein structure based computational strategies such as docking BIBREF3, ultra-large library docking for discovering new chemotypes BIBREF4, and molecular dynamics simulations BIBREF3 or ligand based strategies such as quantitative structure-activity relationship (QSAR) BIBREF5, BIBREF6, and molecular similarity BIBREF7 have been powerful at narrowing down the list of compounds to be tested experimentally. With the increase in available data, machine learning and deep learning architectures are also starting to play a significant role in cheminformatics and drug discovery BIBREF8. These approaches often require extensive computational resources or they are limited by the availability of 3D information. On the other hand, text based representations of biochemical entities are more readily available as evidenced by the 19,588 biomolecular complexes (3D structures) in PDB-Bind BIBREF9 (accessed on Nov 13, 2019) compared with 561,356 (manually annotated and reviewed) protein sequences in Uniprot BIBREF10 (accessed on Nov 13, 2019) or 97 million compounds in Pubchem BIBREF11 (accessed on Nov 13, 2019). The advances in natural language processing (NLP) methodologies make processing of text based representations of biomolecules an area of intense research interest.
The discipline of natural language processing (NLP) comprises a variety of methods that explore a large amount of textual data in order to bring unstructured, latent (or hidden) knowledge to the fore BIBREF12. Advances in this field are beneficial for tasks that use language (textual data) to build insight. The languages in the domains of bioinformatics and cheminformatics can be investigated under three categories: (i) natural language (mostly English) that is used in documents such as scientific publications, patents, and web pages, (ii) domain specific language, codified by a systematic set of rules extracted from empirical data and describing the human understanding of that domain (e.g. proteins, chemicals, etc), and (iii) structured forms such as tables, ontologies, knowledge graphs or databases BIBREF13. Processing and extracting information from textual data written in natural languages is one of the major application areas of NLP methodologies in the biomedical domain (also known as BioNLP). Information extracted with BioNLP methods is most often shared in structured databases or knowledge graphs BIBREF14. We refer the reader to the comprehensive review on BioNLP by BIBREF15. Here, we will be focusing on the application of NLP to domain specific, unstructured biochemical textual representations toward exploration of chemical space in drug discovery efforts.
We can view the textual representation of biomedical/biochemical entities as a domain-specific language. For instance, a genome sequence is an extensive script of four characters (A, T, G, C) constituting a genomic language. In proteins, the composition of 20 different natural amino acids in varying lengths builds the protein sequences. Post-translational modifications expand this 20 letter alphabet and confer different properties to proteins BIBREF16. For chemicals there are several text based alternatives such as chemical formula, IUPAC International Chemical Identifier (InChI) BIBREF17 and Simplified Molecular Input Line Entry Specification (SMILES) BIBREF18.
Today, the era of “big data" boosts the “learning" aspect of computational approaches substantially, with the ever-growing amounts of information provided by publicly available databases such as PubChem BIBREF11, ChEMBL BIBREF19, UniProt BIBREF10. These databases are rich in biochemical domain knowledge that is in textual form, thus building an efficient environment in which NLP-based techniques can thrive. Furthermore, advances in computational power allow the design of more complex methodologies, which in turn drive the fields of machine learning (ML) and NLP. However, biological and chemical interpretability and explainability remain among the major challenges of AI-based approaches. Data management in terms of access, interoperability and reusability are also critical for the development of NLP models that can be shared across disciplines.
With this review, we aim to provide an outline of how the field of NLP has influenced the studies in bioinformatics and cheminformatics and the impact it has had over the last decade. Not only are NLP methodologies facilitating processing and exploitation of biochemical text, they also promise an “understanding" of biochemical language to elucidate the underlying principles of bimolecular recognition. NLP technologies are enhancing the biological and chemical knowledge with the final goal of accelerating drug discovery for improving human health. We highlight the significance of an interdisciplinary approach that integrates computer science and natural sciences.
Introduction ::: NLP Basics
BIBREF20 describes NLP on three levels: (i) the word level in which the smallest meaningful unit is extracted to define the morphological structure, (ii) the sentence level where grammar and syntactic validity are determined, and (iii) the domain or context level in which the sentences have global meaning. Similarly, our review is organized in three parts in which bio-chemical data is investigated at: (i) word level, (ii) sentence (text) level, and (iii) understanding text and generating meaningful sequences. Table TABREF37 summarizes important NLP concepts related to the processing of biochemical data. We refer to these concepts and explain their applications in the following sections.
All NLP technology relates to specific AI architectures. In Table TABREF38 we summarize the main ML and deep learning (DL) architectures that will be mentioned throughout the review.
Biochemical Language Processing
The language-like properties of text-based representations of chemicals were recognized more than 50 years ago by Garfield BIBREF21. He proposed a “chemico-linguistic" approach to representing chemical nomenclature with the aim of instructing the computer to draw chemical diagrams. Protein sequence has been an important source of information about protein structure and function since Anfinsen's experiment BIBREF22. Alignment algorithms, such as Needleman-Wunsh BIBREF23 and Smith-Waterman BIBREF24, rely on sequence information to identify functionally or structurally critical elements of proteins (or genes).
Understanding these sequences is critical for making predictions about the structure and function of compounds or proteins in bioinformatics tasks, with the final goal of accelerating drug discovery. Much as a linguist uses the tools of language to bring out hidden knowledge, biochemical sequences can be processed to propose novel solutions, such as predicting interactions between chemicals and proteins or generating new compounds based on the level of understanding. In this section, we will review the applications of some of these NLP concepts to biochemical data in order to solve bio/cheminformatics problems.
Biochemical Language Processing ::: Textual Chemical Data
Information about chemicals can be found in repositories such as PubChem BIBREF11, which includes information on around 100 million compounds, or Drugbank BIBREF25, which includes information on around 10,000 drugs. The main textual sources used in drug discovery are textual representations of chemicals and proteins. Table TABREF39 lists some sources that store different types of biochemical information.
Chemical structures can be represented in different forms that can be one-dimensional (1D), 2D, and 3D. Table TABREF40 depicts different identifiers/representations of the drug ampicillin. While the 2D and 3D representations are also used in ML based approaches BIBREF8, here we focus on the 1D form, which is the representation commonly used in NLP.
Biochemical Language Processing ::: Textual Chemical Data ::: IUPAC name
The International Union of Pure and Applied Chemistry (IUPAC) scheme (i.e. nomenclature) is used to name compounds following pre-defined rules such that the names of the compounds are unique and consistent with each other (iupac.org/).
Biochemical Language Processing ::: Textual Chemical Data ::: Chemical Formula
The chemical formula is one of the simplest and most widely-known ways of describing chemicals using letters (i.e. element symbols), numbers, parentheses, and (-/+) signs. This representation gives information about which elements and how many of them are present in the compound.
Biochemical Language Processing ::: Textual Chemical Data ::: SMILES
The Simplified Molecular Input Line Entry Specification (SMILES) is a text-based form of describing molecular structures and reactions BIBREF18. SMILES strings can be obtained by traversing the 2D graph representation of the compound, and therefore SMILES provides more complex information than the chemical formula. Moreover, due to its textual form, SMILES takes 50% to 70% less space than other representation methods such as an identical connection table (daylight.com/dayhtml/doc/theory/theory.smiles.html).
SMILES notation is similar to a language with its own set of rules. Just like it is possible to express the same concept with different words in natural languages, the SMILES notation allows molecules to be represented with more than one unique SMILES. Although this may sound like a significant ambiguity, the possibility of using different SMILES to represent the same molecule was successfully adopted as a data augmentation strategy by various groups (BIBREF26, BIBREF27, BIBREF28).
Canonical SMILES can provide a unique SMILES representation. However, different databases such as PubChem and ChEMBL might use different canonicalization algorithms to generate different unique SMILES. OpenSMILES (opensmiles.org/opensmiles.html) is a new platform that aims to universalize the SMILES notation. In isomeric SMILES, isotopism and stereochemistry information of a molecule is encoded using a variety of symbols (“/", “\", “@", “@@").
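The sketch below illustrates both points with RDKit (assumed installed): two different but valid SMILES for the same molecule canonicalize to a single string, and randomized (non-canonical) SMILES of the kind used for data augmentation can be enumerated from the same molecular graph.

```python
from rdkit import Chem

# Two different, equally valid SMILES for ethanol map to one canonical form.
a = Chem.MolFromSmiles("OCC")
b = Chem.MolFromSmiles("CCO")
print(Chem.MolToSmiles(a) == Chem.MolToSmiles(b))   # True

# Randomized SMILES for ampicillin, as used in augmentation strategies.
mol = Chem.MolFromSmiles("CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C")
randomized = {Chem.MolToSmiles(mol, canonical=False, doRandom=True) for _ in range(5)}
for smi in randomized:
    print(smi)
```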
Biochemical Language Processing ::: Textual Chemical Data ::: DeepSMILES
DeepSMILES is a novel SMILES-like notation that was proposed to address two challenges of the SMILES syntax: (i) unbalanced parentheses and (ii) ring closure pairs BIBREF29. It was initially designed to enhance machine/deep-learning based approaches that utilize SMILES data as input (github.com/nextmovesoftware/deepsmiles). DeepSMILES was adopted in a drug-target binding affinity prediction task in which the findings highlighted the efficacy of DeepSMILES over SMILES in terms of identifying undetectable patterns BIBREF30. DeepSMILES was also utilized in a molecule generation task in which it was compared to canonical and randomized SMILES text BIBREF31. Here, the results suggested that DeepSMILES might limit the learning ability of the SMILES-based molecule generation models because its syntax is more grammar sensitive with the ring closure alteration and the use of a single symbol for branching (i.e. “)") introducing longer sequences.
Biochemical Language Processing ::: Textual Chemical Data ::: SELFIES
SELF-referencIng Embedding Strings (SELFIES) is an alternative sequence-based representation that is built upon “semantically constrained graphs" BIBREF32. Each symbol in a SELFIES sequence indicates a recursive Chomsky type-2 grammar, and can thus be used to convert the sequence representation to a unique graph. SELFIES utilize SMILES syntax to extract words that will correspond to semantically valid graphs (github.com/aspuru-guzik-group/selfies). BIBREF32 compared SELFIES, DeepSMILES and SMILES representations in terms of validity in cases where random character mutations are introduced. The evaluations on the QM9 dataset yielded results in favor of SELFIES.
Biochemical Language Processing ::: Textual Chemical Data ::: InChI
InChI is the IUPAC International Chemical Identifier, which is a non-proprietary and open-source structural representation (inchi-trust.org) BIBREF33. The InChIKey is a character-based representation that is generated by hashing the InChI strings in order to shorten them. The InChI representation has several layers, each separated by the “/" symbol.
The software that generates InChI is publicly available and InChI does not suffer from ambiguity problems. However, the less complex structure of SMILES makes it easier to use, as shown in a molecular generation study BIBREF34 and in building meaningful chemical representations with a translation-based system BIBREF35. Interestingly, the translation model was able to translate from InChI to canonical SMILES, whereas it failed to translate from canonical SMILES to InChI. BIBREF35 suggested that the complex syntax of InChI made it difficult for the model to generate a correct sequence.
Biochemical Language Processing ::: Textual Chemical Data ::: SMARTS
SMiles ARbitrary Target Specification (SMARTS) is a language that contains specialized symbols and logic operators that enable substructure (pattern) search on SMILES strings BIBREF36. SMARTS can be used in any task that requires pattern matching on a SMILES string such as, querying databases or creating rule dictionaries such as RECAP BIBREF37 and BRICS BIBREF38 to extract fragments from SMILES (daylight.com/dayhtml/doc/theory/theory.smarts.html).
Biochemical Language Processing ::: Textual Chemical Data ::: SMIRKS
SMIRKS notation can be used to describe generic reactions (also known as transforms) that comprise one or more changes in atoms and bonds (https://daylight.com/daycgi_tutorials/smirks_examples.html). These transforms are based on “reactant to product" notation, and thus make use of SMILES and SMARTS languages. SMIRKS is utilized in tasks such as constructing an online transform database BIBREF39 and predicting metabolic transformations BIBREF40. A recent study achieves a similar performance to rule-based systems in classifying chemical reactions by learning directly from SMILES text with transforms via neural networks BIBREF41.
Biochemical Language Processing ::: Identification of Words/Tokens
Similar to words in natural languages, we can assume that the “words" of biochemical sequences convey significant information (e.g. folding, function etc) about the entities. In this regard, each compound/protein is analogous to a sentence, and each compound/protein unit is analogous to a word. Therefore, if we can decipher the grammar of biochemical languages, it would be easier to model bio/cheminformatics problems. However, protein and chemical words are not explicitly known and different approaches are needed to extract syntactically and semantically meaningful biochemical word units from these textual information sources (i.e. sequences). Here, we review some of the most common tokenization approaches used to determine the words of biochemical languages.
Biochemical Language Processing ::: Identification of Words/Tokens ::: $k$-mers ($n$-grams)
One of the simplest approaches in NLP to extract a small language unit is to use $k$-mers, also known as $n$-grams. $k$-mers indicate $k$ consecutive overlapping characters that are extracted from the sequence with a sliding window approach. “LINGO", which is one of the earliest applications of $k$-mers in cheminformatics, is the name of the overlapping 4-mers that are extracted from SMILES strings BIBREF42. 4-mers of the SMILES of ampicillin, “CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C", can be listed as { `CC1(', `C1(C', `1(C(', ..., `O)O)', `)O)C' }. From a sequence of length $l$, a total of $(l-k)+1$ $k$-mers can be extracted. Extracting LINGOs from SMILES is a simple yet powerful idea that has been successfully used to compute molecular similarities, to differentiate between bioisosteric and random molecular pairs BIBREF42, and in a drug-target interaction prediction task BIBREF43, without requiring 2D or 3D information. The results suggested that a SMILES-based approach to compute the similarity of chemicals is not only as good as a 2D-based similarity measurement, but also faster BIBREF43.
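Extracting LINGOs of this kind needs nothing beyond a sliding window; the minimal sketch below reproduces the overlapping 4-mer idea on the ampicillin SMILES used as an example above.

```python
def kmers(sequence, k=4):
    """Overlapping k-mers (LINGOs for SMILES when k=4): (len(sequence) - k) + 1 of them."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

ampicillin = "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C"
lingos = kmers(ampicillin, k=4)
print(lingos[:4], "...", lingos[-2:])
print(len(ampicillin), len(lingos))   # (l - k) + 1 k-mers from a sequence of length l
```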
$k$-mers were successfully utilized as protein BIBREF44 and chemical words BIBREF45 in protein family classification tasks. 3-mers to 5-mers were often considered as the words of the protein sequence. BIBREF46 reported that some 5-mers could be matched to motifs and protein words are most likely a mixture of different $k$-mers. For the protein function prediction task, BIBREF47 decided to choose among the 1000 most frequent words to build the protein vocabulary, whereas BIBREF48 utilized each $k$-mer type separately and showed that 4-mers provided the best performance. In the latter work, instead of using the whole protein sequence, the words were extracted from different length protein segments, which are also long $k$-mers (i.e. 100-mer, 120-mer) with 30 amino-acid gaps. The use of segmented protein sequences yielded better results than using the whole protein sequence, and important and conserved subsequences were highlighted. $k$-mers were also used as features, along with position specific score matrix features, in the protein fold prediction problem BIBREF49.
Biochemical Language Processing ::: Identification of Words/Tokens ::: Longest Common Subsequences
The identification of the longest common subsequence (LCS) of two sequences is critical for detecting their similarity. When there are multiple sequences, LCSs can point to informative patterns. LCSs extracted from SMILES sequences performed similarly well to 4-mers in chemical similarity calculation BIBREF43.
Biochemical Language Processing ::: Identification of Words/Tokens ::: Maximum Common Substructure
BIBREF50 investigated organic chemistry as a language in an interesting study that extracts maximum common substructures (MCS) from the 2D structures of pairs of compounds to build a vocabulary of the molecule corpus. Contrary to the common idea of functional groups (e.g. methyl, ethyl etc.) being “words" of the chemical language, the authors argued that MCSs (i.e. fragments) can be described as the words of the chemical language BIBREF50. A recent work investigated the distribution of these words in different molecule subsets BIBREF51. The “words" followed Zipf's Law, which indicates the relationship between the frequency of a word and its rank (based on the frequency) BIBREF52, similar to most natural languages. Their results also showed that drug “words" are shorter compared to natural product “words".
Biochemical Language Processing ::: Identification of Words/Tokens ::: Minimum Description Length
Minimum Description Length (MDL) is an unsupervised compression-based word segmentation technique in which words of an unknown language are detected by compressing the text corpus. In a protein classification task, each protein was assigned to the family in which its sequence is compressed the most, according to the MDL-based representation BIBREF53. BIBREF53 investigated whether the MDL-based words of the proteins show similarities to PROSITE patterns BIBREF54 and showed that less conserved residues were compressed less by the algorithm. BIBREF53 also emphasized that the integration of domain knowledge, such as the consideration of the hydrophilic and hydrophobic aminoacids in the words (i.e. grammar building), might prove effective.
Biochemical Language Processing ::: Identification of Words/Tokens ::: Byte-Pair Encoding
Byte-Pair Encoding (BPE) generates words based on high frequency subsequences starting from frequent characters BIBREF55. A recent study adopted a linguistic-inspired approach to predict protein-protein interactions (PPIs) BIBREF56. Their model was built upon “words" (i.e. bio-words) of the protein language, in which BPE was utilized to build the bio-word vocabulary. BIBREF56 suggested that BPE-segmented words indicate a language-like behavior for the protein sequences and reported improved accuracy results compared to using 3-mers as words.
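The core of BPE is a loop that repeatedly merges the most frequent adjacent symbol pair. The sketch below is a simplified, character-level version run on a toy corpus of protein fragments; production work would use an optimized implementation such as the tokenizers or sentencepiece libraries.

```python
from collections import Counter

def learn_bpe(sequences, num_merges=10):
    """Learn BPE merge rules from raw sequences; returns the ordered list of merges."""
    corpus = [list(seq) for seq in sequences]        # start from single characters
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols in corpus:
            for left, right in zip(symbols, symbols[1:]):
                pairs[(left, right)] += 1
        if not pairs:
            break
        best = max(pairs, key=pairs.get)             # most frequent adjacent pair
        merges.append(best)
        merged_symbol = "".join(best)
        for symbols in corpus:                        # apply the merge in place
            i = 0
            while i < len(symbols) - 1:
                if (symbols[i], symbols[i + 1]) == best:
                    symbols[i:i + 2] = [merged_symbol]
                else:
                    i += 1
    return merges

toy_proteins = ["MKTAYIAKQR", "MKTIIALSYI", "MKTAYIAK"]
print(learn_bpe(toy_proteins, num_merges=5))
```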
Biochemical Language Processing ::: Identification of Words/Tokens ::: Pattern-based words
Subsequences that are conserved throughout evolution are usually associated with protein structure and function. These conserved sequences can be detected as patterns via multiple sequence alignment (MSA) techniques and Hidden Markov Models (HMM). PROSITE BIBREF54, a public database that provides information on domains and motifs of proteins, uses regular expressions (i.e. RE or regex) to match these subsequences.
Protein domains have been investigated for their potential of being the words of the protein language. One earlier study suggested that folded domains could be considered as “phrases/clauses" rather than “words" because of the higher semantic complexity between them BIBREF57. Later, domains were described as the words, and domain architectures as sentences of the language BIBREF58, BIBREF59. Protein domains were treated as the words of multi-domain proteins in order to evaluate the semantic meaning behind the domains BIBREF60. The study supported prior work by BIBREF59 suggesting that domains displayed syntactic and semantic features, but there are only a few multi-domain proteins with more than six domains limiting the use of domains as words to build sentences. Protein domains and motifs have also been utilized as words in different drug discovery tasks such as the prediction of drug-target interaction affinity BIBREF61, BIBREF62. These studies showed that motifs and domains together contribute to the prediction as much as the use of the full protein sequence.
SMARTS is a well-known regex-based querying language that is used to identify patterns in a SMILES string. SMARTS has been utilized to build specific rules for small-molecule protonation BIBREF63, to design novel ligands based on the fragments connected to the active site of a target BIBREF64, and to help generate products in reaction prediction BIBREF65. MolBlocks, a molecular fragmentation tool, also adopted SMARTS dictionaries to partition a SMILES string into overlapping fragments BIBREF36. Furthermore, MACCS BIBREF66 and PubChem BIBREF11 Fingerprints (FP) are molecular descriptors that are described as binary vectors based on the absence/presence of substructures that are predefined with SMARTS language. A recent study on protein family clustering uses a ligand-centric representation to describe proteins in which ligands were represented with SMILES-based (i.e. 8-mers) representation, MACCS and Extended Connectivity Fingerprint (ECFP6) BIBREF45. The results indicate that three of the ligand representation approaches provide similar performances for protein family clustering.
To the best of our knowledge, there is no comprehensive evaluation of the different word extraction techniques except a comparison by BIBREF56 of the performance of BPE-based words against $k$-mers in a PPI prediction task. Such comparison would provide important insights to the bio/cheminformatics community.
Biochemical Language Processing ::: Text representation
The representation of a text (e.g. molecule or protein sequence) aims to capture syntactic, semantic or relational meaning. In the widely used Vector Space Model (VSM), a text is represented by a feature vector of either weighted or un-weighted terms BIBREF67. The terms of this vector may correspond to words, phrases, k-grams, characters, or dimensions in a semantic space such as in the distributed word embedding representation models. The similarity between two texts represented in the vector space model is usually computed using the cosine similarity metric BIBREF68, which corresponds to the cosine of the angle between the two vectors.
Similarly to the one-hot encoding scheme BIBREF69, in the traditional bag-of-words BIBREF70 and term frequency-inverse document frequency (TF-IDF) BIBREF71 text representation models, each word corresponds to a different dimension in the vector space. Therefore, the similarity between two words in the vector space is zero, even if they are synonymous or related to each other. In the distributed representation models BIBREF72 on the other hand, words are represented as dense vectors based on their context. Words that occur in similar contexts have similar vector representations. In this subsection, we review these commonly used text representation models with their applications in cheminformatics.
Biochemical Language Processing ::: Text representation ::: Bag-of-words representation
In this representation model, a text is represented as a vector of bag-of-words, where the multiplicity of the words is taken into account, but the order of the words in the text is lost BIBREF70. For instance, the SMILES of ampicillin, “CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C", can be represented as a bag of 8-mers as follows: {“CC1(C(N2", “C1(C(N2C", “1(C(N2C(", “(C(N2C(S", ..., “N)C(=O)O", “)C(=O)O)", “C(=O)O)C"}. We can vectorize it as $S = [1, 1, 1, 1, \ldots, 1, 1, 1]$, in which each number refers to the frequency of the corresponding 8-mer.
Bag-of-words representation was used in molecular similarity computation, in which the SMILES string and the LINGOs extracted from it were treated as the sentence and words, respectively BIBREF42. The unique LINGOs were considered for each pair and a Tanimoto coefficient was used to measure the similarity BIBREF42. Another approach called SMILES Fingerprint (SMIfp) also adopted bag-of-words to create representations of molecules for a ligand-based virtual screening task BIBREF73. SMIfp considered 34 unique symbols in SMILES strings to create a frequency-based vector representation, which was utilized to compute molecular similarity. SMIfp provided comparable results to a chemical representation technique that also incorporated polar group and topological information, as well as atom and bond information, in recovering active compounds amongst decoys BIBREF73.
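Following the description above, the sketch below computes a set-based Tanimoto coefficient over the unique LINGOs (4-mers) of two SMILES strings; it is a minimal illustration of the idea rather than the exact weighting used in the cited work.

```python
def lingo_set(smiles, k=4):
    """Unique overlapping k-mers of a SMILES string."""
    return {smiles[i:i + k] for i in range(len(smiles) - k + 1)}

def lingo_tanimoto(smiles_a, smiles_b, k=4):
    """Tanimoto (Jaccard) similarity of the unique k-mer sets of two SMILES strings."""
    a, b = lingo_set(smiles_a, k), lingo_set(smiles_b, k)
    return len(a & b) / len(a | b)

print(lingo_tanimoto("CC(=O)Oc1ccccc1C(=O)O",    # aspirin
                     "CC(=O)Nc1ccc(O)cc1"))      # paracetamol
```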
Biochemical Language Processing ::: Text representation ::: TF-IDF
The bag-of-words model, which is based on counting the terms of the sentence/document, might prioritize insignificant but frequent words. To overcome this issue, a weighting scheme can be integrated into the vector representation in order to give more importance to the rare terms that might play a key role in detecting similarity between two documents. One popular weighting approach is to use term frequency-inverse document frequency (TF-IDF) BIBREF71. TF refers to the frequency of a term in the document, and IDF denotes the logarithm of the total number of documents over the number of documents in which the term appears. IDF is therefore an indicator of uniqueness. For instance, the IDF of “C3=CC=CC" is lower than that of “(C(N2C(S", which appears in fewer compounds. Therefore, the existence of “(C(N2C(S" in a compound may be more informative.
TF-IDF weighting was utilized to assign weights to LINGOs that were extracted from SMILES in order to compute molecule similarity using cosine similarity BIBREF43. Molecular similarities were then used as input for drug-target interaction prediction. A similar performance between TF-IDF weighted LINGO and a graph-based chemical similarity measurement was obtained. BIBREF50 used TF-IDF weighting on chemical bonds to show that bonds with higher TF-IDF scores have a higher probability of breaking.
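A compact way to reproduce this kind of pipeline is to plug a k-mer extractor into scikit-learn's TfidfVectorizer and compare the resulting vectors with cosine similarity. The sketch below assumes scikit-learn is installed and uses 4-mers as the analyzer.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def lingos(smiles, k=4):
    return [smiles[i:i + k] for i in range(len(smiles) - k + 1)]

corpus = [
    "CC(=O)Oc1ccccc1C(=O)O",                               # aspirin
    "CC(=O)Nc1ccc(O)cc1",                                  # paracetamol
    "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C",  # ampicillin
]

vectorizer = TfidfVectorizer(analyzer=lingos)              # treat each 4-mer as a term
tfidf = vectorizer.fit_transform(corpus)                   # sparse matrix (3, n_unique_lingos)
print(cosine_similarity(tfidf).round(2))                   # pairwise molecular similarities
```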
Biochemical Language Processing ::: Text representation ::: One-hot representation
In one-hot representation, for a given vocabulary of a text, each unique word/character is represented with a binary vector that has a 1 in the corresponding position, while the vector positions for the remaining words/characters are filled with 0s BIBREF69. One-hot encoding is fast to build, but might lead to sparse vectors with large dimensions based on the size of the vocabulary (e.g. one million unique words in the vocabulary means one million dimensional binary vectors filled with zeros except one). It is a popular choice, especially in machine learning-based bio/cheminformatic studies to encode different types of information such as SMILES characters BIBREF74, BIBREF75, atom/bond types BIBREF76, BIBREF77 and molecular properties BIBREF78.
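The sketch below one-hot encodes the characters of a SMILES string with NumPy; each row is all zeros except for a single 1 at the index of the corresponding symbol, which is the kind of input matrix fed to the SMILES-based deep learning models cited above.

```python
import numpy as np

def one_hot_smiles(smiles, alphabet):
    """Return a (len(smiles), len(alphabet)) binary matrix with one 1 per row."""
    index = {ch: i for i, ch in enumerate(alphabet)}
    matrix = np.zeros((len(smiles), len(alphabet)), dtype=np.int8)
    for row, ch in enumerate(smiles):
        matrix[row, index[ch]] = 1
    return matrix

smiles = "CC(=O)Oc1ccccc1C(=O)O"              # aspirin
alphabet = sorted(set(smiles))                 # toy alphabet; a fixed vocabulary is used in practice
encoded = one_hot_smiles(smiles, alphabet)
print(encoded.shape, encoded.sum(axis=1))      # every row sums to 1
```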
Biochemical Language Processing ::: Text representation ::: Distributed representations
The one-hot encoding builds discrete representations, and thus does not consider the relationships between words. For instance, the cosine similarity of two different words is 0 even if they are semantically similar. However, if the word (i.e. 8-mer) “(C(N2C(S" frequently appears together with the word “C(C2=O)N" in SMILES strings, this might suggest that they have related “meanings". Furthermore, two words might have similar semantic meanings even though they are syntactically apart. This is where distributed vector representations come into play.
The distributed word embeddings models gained popularity with the introduction of Word2Vec BIBREF72 and GloVe BIBREF79. The main motivation behind the Word2Vec model is to build real-valued high-dimensional vectors for each word in the vocabulary based on the context in which they appear. There are two main approaches in Word2Vec: (i) Skip-Gram and (ii) Continuous Bag of Words (CBOW). The aim of the Skip-Gram model is to predict context words given the center word, whereas in CBOW the objective is to predict the target word given the context words. Figure FIGREF32 depicts the Skip-gram architecture in Word2Vec BIBREF72. For the vocabulary of size $V$, given the target word “2C(S", the model learns to predict two context words. Both target word and context words are represented as one-hot encoded binary vectors of size $V$. The number of neurons in the hidden layer determines the size of the embedding vectors. The weight matrix between the input layer and the hidden layer stores the embeddings of the vocabulary words. The $i^{th}$ row of the embedding matrix corresponds to the embedding of the $i^{th}$ word.
The Word2Vec architecture has inspired a great deal of research in the bio/cheminformatics domains. The Word2Vec algorithm has been successfully applied for determining protein classes BIBREF44 and protein-protein interactions (PPI) BIBREF56. BIBREF44 treated 3-mers as the words of the protein sequence and observed that 3-mers with similar biophysical and biochemical properties clustered together when their embeddings were mapped onto the 2D space. BIBREF56, on the other hand, utilized BPE-based word segmentation (i.e. bio-words) to determine the words. The authors argued that the improved performance for bio-words in the PPI prediction task might be due to the segmentation-based model providing more distinct words than $k$-mers, which include repetitive segments. Another recent study treated multi-domain proteins as sentences in which each domain was recognized as a word BIBREF60. The Word2Vec algorithm was trained on the domains (i.e. PFAM domain identifiers) of eukaryotic protein sequences to learn semantically interpretable representations of them. The domain representations were then investigated in terms of the Gene Ontology (GO) annotations that they inherit. The results indicated that semantically similar domains share similar GO terms.
The Word2Vec algorithm was also utilized for representation of chemicals. SMILESVec, a text-based ligand representation technique, utilized Word2Vec to learn embeddings for 8-mers (i.e. chemical words) that are extracted from SMILES strings BIBREF45. SMILESVec was utilized in protein representation such that proteins were represented as the average of the SMILESVec vectors of their interacting ligands. The results indicated comparable performances for ligand-based and sequence based protein representations in protein family/superfamily clustering. Mol2Vec BIBREF80, on the other hand, was based on the identifiers of the substructures (i.e. words of the chemical) that were extracted via Extended Connectivity Fingerprint (ECFP) BIBREF81. The results showed a better performance with Mol2Vec than with the simple Morgan Fingerprint in a solubility prediction task, and a comparable performance to graph-based chemical representation BIBREF82. BIBREF83 also employed the Word2vec model that was trained on the fragments that are extracted from SMILES strings using a graph traversing algorithm. The results favored the distributed fragment-based ligand representation over fragment-based binary vector representation in a ring system clustering task and showed a comparable performance in the prediction of toxicity against Tetrahymena BIBREF83. Figure FIGREF33 illustrates the pipeline of a text-based molecule representation based on $k$-mers.
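A minimal version of this idea can be put together with gensim (assumed installed, version 4.x for the vector_size argument): treat the overlapping 8-mers of each SMILES as its “words", train a skip-gram Word2Vec model on the corpus, and average the word vectors to obtain a molecule vector. This follows the spirit of SMILESVec rather than the authors' exact settings, and the three-molecule corpus is only a placeholder.

```python
import numpy as np
from gensim.models import Word2Vec

def words(smiles, k=8):
    return [smiles[i:i + k] for i in range(len(smiles) - k + 1)]

corpus_smiles = [
    "CC(=O)Oc1ccccc1C(=O)O",
    "CC(=O)Nc1ccc(O)cc1",
    "CC1(C(N2C(S1)C(C2=O)NC(=O)C(C3=CC=CC=C3)N)C(=O)O)C",
]
sentences = [words(s) for s in corpus_smiles]           # each molecule acts as one "sentence"

model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, sg=1, epochs=50)

def molecule_vector(smiles, model, k=8):
    """Average of the embeddings of the molecule's k-mers."""
    vectors = [model.wv[w] for w in words(smiles, k) if w in model.wv]
    return np.mean(vectors, axis=0)

print(molecule_vector(corpus_smiles[0], model).shape)   # (50,)
```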
FP2Vec is another method that utilizes embedding representation for molecules, however instead of the Word2Vec algorithm, it depends on a Convolutional Neural Network (CNN) to build molecule representations to be used in toxicity prediction tasks BIBREF84. CNN architectures have also been utilized for drug-target binding affinity prediction BIBREF85 and drug-drug interaction prediction BIBREF75 to build representations for chemicals from raw SMILES strings, as well as for protein fold prediction BIBREF86 to learn representations for proteins from amino-acid sequences. SMILES2Vec adopted different DL architectures (GRU, LSTM, CNN+GRU, and CNN+LSTM) to learn molecule embeddings, which were then used to predict toxicity, affinity and solubility BIBREF87. A CNN+GRU combination was better at the prediction of chemical properties. A recent study compared several DL approaches to investigate the effect of different chemical representations, which were learned through these architectures, on a chemical property prediction problem BIBREF88. The authors also combined DL architectures that were trained on SMILES strings with the MACCS fingerprint, proposing a combined representation for molecules (i.e. CheMixNet). The CheMixNet representation outperformed the other representations that were trained on a single data type such as SMILES2Vec (i.e. SMILES) and Chemception (i.e. 2D graph) BIBREF89.
Biochemical Language Processing ::: Text generation
Text generation is a primary NLP task, where the aim is to generate grammatically and semantically correct text, with many applications ranging from question answering to machine translation BIBREF90. It is generally formulated as a language modeling task, where a statistical model is trained using a large corpus to predict the distribution of the next word in a given context. In machine translation, the generated text is the translation of an input text in another language.
Medicinal chemistry campaigns use methods such as scaffold hopping BIBREF91 or fragment-based drug design BIBREF3 to build and test novel molecules but the chemotype diversity and novelty may be limited. It is possible to explore uncharted chemical space with text generation models, which learn a distribution from the available data (i.e. SMILES language) and generate novel molecules that share similar physicochemical properties with the existing molecules BIBREF74. Molecule generation can then be followed by assessing physicochemical properties of the generated compound or its binding potential to a target protein BIBREF74. For a comprehensive review of molecule generation methodologies, including graph-based models, we refer the reader to the review of BIBREF92. Machine translation models have also been recently adapted to text-based molecule generation, which start with one “language" such as that of reactants and generate a novel text in another “language" such as that of products BIBREF28. Below, we present recent studies on text based molecule generation.
RNN models, which learn a probability distribution from a training set of molecules, are commonly used in molecule generation to propose novel molecules similar to the ones in the training data set. For instance, given the SMILES sequence “C(=O", the model would predict the next character to be “)" with a higher probability than “(". The production of valid SMILES strings, however, is a challenge because of the complicated SMILES syntax that utilizes parentheses to indicate branches and ring numbers. The sequential nature of RNNs, which may miss long range dependencies, is a disadvantage of these models BIBREF74. RNN descendants LSTM and GRU, which model long-term dependencies, are better suited for remembering matching rings and branch closures. Motivated by such a hypothesis, BIBREF74 and BIBREF93 successfully pioneered de novo molecule generation using LSTM architecture to generate valid novel SMILES. BIBREF74 further modified their model to generate target-specific molecules by integrating a target bioactivity prediction step to filter out inactive molecules and then retraining the LSTM network. In another study, transfer learning was adopted to fine-tune an LSTM-based SMILES generation model so that structurally similar leads were generated for targets with few known ligands BIBREF94. BIBREF95 and BIBREF96 used reinforcement learning (RL) to bias their model toward compounds with desired properties. Merk et al. BIBREF97, BIBREF98 fine-tuned their LSTM model on a target-focused library of active molecules and synthesized some novel compounds. BIBREF99 explored how much of the GDB-13 database BIBREF100 they could rediscover by using an RNN-based generative model.
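The sketch below shows the skeleton of such a character-level model in PyTorch: an embedding layer, an LSTM, and a softmax over the SMILES alphabet, with a sampling loop that draws one character at a time. The vocabulary and the untrained weights are placeholders; a real model would first be trained on a large SMILES corpus and, as discussed above, possibly fine-tuned or biased with RL afterwards.

```python
import torch
import torch.nn as nn

VOCAB = ["^", "$", "C", "c", "N", "O", "(", ")", "1", "2", "=", "S"]  # toy alphabet; ^ start, $ end
STOI = {ch: i for i, ch in enumerate(VOCAB)}

class SmilesLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        emb = self.embed(tokens)                  # (batch, seq, embed_dim)
        out, state = self.lstm(emb, state)        # (batch, seq, hidden_dim)
        return self.head(out), state              # logits over the next character

@torch.no_grad()
def sample(model, max_len=80):
    """Draw one SMILES string character by character (meaningful only after training)."""
    model.eval()
    token = torch.tensor([[STOI["^"]]])           # start-of-sequence token
    state, generated = None, []
    for _ in range(max_len):
        logits, state = model(token, state)
        probs = torch.softmax(logits[:, -1, :], dim=-1)
        token = torch.multinomial(probs, num_samples=1)
        ch = VOCAB[token.item()]
        if ch == "$":                             # end-of-sequence token terminates sampling
            break
        generated.append(ch)
    return "".join(generated)

model = SmilesLSTM(len(VOCAB))
print(sample(model))   # random characters until the model is trained on a SMILES corpus
```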
The variational Auto-encoder (VAE) is another widely adopted text generation architecture BIBREF101. BIBREF34 adopted this architecture for molecule generation. A traditional auto-encoder encodes the input into the latent space, which is then decoded to reconstruct the input. VAE differs from AE by explicitly defining a probability distribution on the latent space to generate new samples. BIBREF34 hypothesized that the variational part of the system integrates noise to the encoder, so that the decoder can be more robust to the large diversity of molecules. However, the authors also reported that the non-context free property of SMILES caused by matching ring numbers and parentheses might often lead the decoder to generate invalid SMILES strings. A grammar variational auto-encoder (GVAE), where the grammar for SMILES is explicitly defined instead of the auto-encoder learning the grammar itself, was proposed to address this issue BIBREF102. This way, the generation is based on the pre-defined grammar rules and the decoding process generates grammar production rules that should also be grammatically valid. Although syntactic validity would be ensured, the molecules may not have semantic validity (chemical validity). BIBREF103 built upon the VAE BIBREF34 and GVAE BIBREF102 architectures and introduced a syntax-directed variational autoencoder (SD-VAE) model for the molecular generation task. The syntax-direct generative mechanism in the decoder contributed to creating both syntactically and semantically valid SMILES sequences. BIBREF103 compared the latent representations of molecules generated by VAE, GVAE, and SD-VAE, and showed that SD-VAE provided better discriminative features for druglikeness. BIBREF104 proposed an adversarial AE for the same task. Conditional VAEs BIBREF105, BIBREF106 were trained to generate molecules conditioned on a desired property. The challenges that SMILES syntax presents inspired the introduction of new syntax such as DeepSMILES BIBREF29 and SELFIES BIBREF32 (details in Section SECREF3).
Generative Adversarial Network (GAN) models generate novel molecules by using two components: the generator network produces novel molecules, and the discriminator network aims to distinguish between the generated molecules and real molecules BIBREF107. In the text generation models above, novel molecules are sampled from a learned distribution and then fine-tuned to obtain specific features, whereas adversarial learning pits the generator and discriminator networks against each other to produce novel molecules BIBREF107, BIBREF108. ORGAN BIBREF108, a molecular generation methodology, was built upon a sequence generative adversarial network (SeqGAN) from NLP BIBREF109. ORGAN integrated RL in order to generate molecules with desirable properties such as solubility, druglikeness, and synthesizability by using domain-specific rewards BIBREF108.
Biochemical Language Processing ::: Text generation ::: Machine Translation
Machine translation finds use in cheminformatics in “translation" from one language (e.g. reactants) to another (e.g. products). Machine translation is a challenging task because the syntactic and semantic dependencies of each language differ from one another and this may give rise to ambiguities. Neural Machine Translation (NMT) models benefit from the potential of deep learning architectures to build a statistical model that aims to find the most probable target sequence for an input sequence by learning from a corpus of examples BIBREF110, BIBREF111. The main advantage of NMT models is that they provide an end-to-end system that utilizes a single neural network to convert the source sequence into the target sequence. BIBREF110 refer to their model as a sequence-to-sequence (seq2seq) system that addresses a major limitation of DNNs that can only work with fixed-dimensionality information as input and output. However, in the machine translation task, the length of the input sequences is not fixed, and the length of the output sequences is not known in advance.
The NMT models are based on an encoder-decoder architecture that aims to maximize the probability of generating the target sequence (i.e. most likely correct translation) for the given source sequence. The first encoder-decoder architectures in NMT performed poorly as the sequence length increased mainly because the encoder mapped the source sequence into a single fixed-length vector. However, fixed-size representation may be too small to encode all the information required to translate long sequences BIBREF112. To overcome the issue of the fixed context vector (Figure FIGREF35a), a new method was developed, in which every source token was encoded into a memory bank independently (Figure FIGREF35b). The decoder could then selectively focus on parts of this memory bank during translation BIBREF112, BIBREF113. This technique is known as “attention mechanism" BIBREF114.
Inspired by the successes in NMT, the first application of seq2seq models in cheminformatics was for reaction prediction by BIBREF115, who proposed to translate the SMILES strings of reactants and separated reagents to the corresponding product SMILES. The authors hypothesized that the reaction prediction problem can be re-modelled as a translation system in which both inputs and output are sequences. Their model used GRUs for the encoder-decoder and a Bahdanau BIBREF112 attention layer in between. BIBREF116 in contrast, performed the opposite task, the single-step retrosynthesis prediction, using a similar encoder-decoder model. When given a product and a reaction class, their model predicted the reactants that would react together to form that product. One major challenge in the retrosynthesis prediction task is the possibility of multiple correct targets, because more than one reactant combination could lead to the same product. Similarly to BIBREF115, BIBREF117 also adopted a seq2seq model to translate precursors into products, utilizing the SMILES representation for the reaction prediction problem. Their model used a different attention mechanism by BIBREF113 and LSTMs in the encoder and decoder. By visualizing the attention weights, an atom-wise mapping between the product and the reactants could be obtained and used to understand the predictions better. BIBREF117 showed that seq2seq models could compete with graph neural network-based models in the reaction prediction task BIBREF118.
A translation model was also employed to learn a data-driven representation of molecules BIBREF35. BIBREF35 translated between two textual representations of a chemical, InChI and SMILES, to extract latent representations that can integrate the semantic “meaning" of the molecule. The results indicated a statistically significant improvement with the latent representations in a ligand-based virtual screening task against fingerprint methods such as ECFP (i.e. Morgan algorithm). NMT architectures were also adopted in a protein function prediction task for the first time, in which “words" extracted from protein sequences were translated into GO identifiers using RNNs as encoder and decoder BIBREF47. Although exhibiting a comparable performance to the state-of-the-art protein function prediction methods, the authors argued that the performance of the model could be improved by determining more meaningful “words" such as biologically interpretable fragments.
The Transformer is an attention-based encoder-decoder architecture that was introduced to NMT by BIBREF119. Although similar to previous studies BIBREF110, BIBREF111, BIBREF112 in adopting an encoder-decoder architecture, the Transformer differs from the others because its encoder and decoder consist only of attention and feed-forward layers. As Transformers do not contain an RNN, positional embeddings are needed to capture order relationships in the sequences. BIBREF28 were the first to adopt the Transformer architecture in cheminformatics and designed a Molecular Transformer for the chemical reaction prediction task. The Molecular Transformer, which was atom-mapping independent, outperformed other algorithms (e.g. one based on a two-step convolutional graph neural network BIBREF120) on commonly used benchmark data sets. The Transformer architecture was also adopted to learn representations of chemicals for the prediction of drug-target interactions BIBREF121 and molecular properties BIBREF122, in which the proposed systems either outperformed the state-of-the-art systems or obtained comparable results.
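A minimal sketch of a Transformer-based seq2seq model built on PyTorch's nn.Transformer (not the Molecular Transformer itself); the batch_first flag assumes a recent PyTorch release, all sizes are illustrative, and the causal target mask needed for real training is omitted for brevity.

```python
import torch
import torch.nn as nn

class TinySeqTransformer(nn.Module):
    """Toy Transformer encoder-decoder with learned positional embeddings."""
    def __init__(self, vocab=60, d_model=128, max_len=256):
        super().__init__()
        self.tok = nn.Embedding(vocab, d_model)
        self.pos = nn.Embedding(max_len, d_model)    # injects order information
        self.transformer = nn.Transformer(d_model=d_model, nhead=4,
                                          num_encoder_layers=2,
                                          num_decoder_layers=2,
                                          batch_first=True)
        self.out = nn.Linear(d_model, vocab)

    def embed(self, x):
        positions = torch.arange(x.size(1), device=x.device).unsqueeze(0)
        return self.tok(x) + self.pos(positions)

    def forward(self, src, tgt):
        h = self.transformer(self.embed(src), self.embed(tgt))
        return self.out(h)                           # (batch, tgt_len, vocab) logits

model = TinySeqTransformer()
logits = model(torch.randint(0, 60, (2, 30)), torch.randint(0, 60, (2, 25)))
print(logits.shape)  # torch.Size([2, 25, 60])
```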
Future Perspectives
The increase in biochemical data available in public databases, combined with advances in computational power and NLP methodologies, has given rise to rapid growth in the publication rate in bio/cheminformatics, especially through pre-print servers. As this interdisciplinary field grows, novel opportunities come hand in hand with novel challenges.
Future Perspectives ::: Challenges
The major challenges that can be observed from investigating these studies can be summarized as follows: (i) the need for universalized benchmarks and metrics, (ii) reproducibility of the published methodologies, (iii) bias in available data, and (iv) biological and chemical interpretability/explainability of the solutions.
Future Perspectives ::: Challenges ::: Benchmarking
There are several steps in the drug discovery pipeline, from affinity prediction to the prediction of other chemical properties such as toxicity and solubility. The use of different datasets and different evaluation metrics makes the assessment of model performance challenging. Comprehensive benchmarking platforms that can assess the success of different tools are still lacking. A benchmarking environment rigorously brings together suitable data sets and evaluation methodologies in order to provide a fair comparison between the available tools. Such environments are available for the molecule generation task from MOSES BIBREF123 and GuacaMol BIBREF124. MoleculeNet is a similar attempt to build a benchmarking platform for tasks such as the prediction of binding affinity and toxicity BIBREF82.
Future Perspectives ::: Challenges ::: Reproducibility
Despite the focus on sharing datasets and source code on popular software development platforms such as GitHub (github.com) or Zenodo (zenodo.org), it is still a challenge to use data or code from other groups. The use of FAIR (Findable, Accessible, Interoperable and Reusable) (meta)data principles can guide the management of scientific data BIBREF125. Automated workflows that are easy to use and do not require programming knowledge encourage the flow of information from one discipline to the other. Platform-independent solutions such as Docker (docker.com), in which an image of the source code is saved and can be run without requiring further installation, could accelerate the reproduction process. A recent initiative to provide a unified framework for predictive models in genomics can quickly be adopted by the medicinal chemistry community BIBREF126.
Future Perspectives ::: Challenges ::: Bias in data
The available data has two significant sources of bias, one related to the limited sampling of chemical space and the other related to the quality and reproducibility of the data. The lack of information about some regions of the protein/chemical landscape limits the current methodologies to the exploitation of data rather than full exploration. The data on protein-compound interactions is biased toward some privileged molecules or proteins because the protein targets are related to common diseases or the molecules are similar to known actives. Hence, not all of chemical space is sampled, and chemical space is expanded based on the similarity of an active compound to others, which is also referred to as inductive bias BIBREF127. Data about proteins or molecules related to rare diseases is limited, and inactive molecules are frequently not reported. Moreover, some experimental measurements are not reproducible across different labs or conditions, which limits their reliability BIBREF128. BIBREF129 and BIBREF130 have recently discussed the bias factors in dataset composition. Zhang and Lee have also addressed the sources of bias in the data and proposed to use Bayesian deep learning to quantify uncertainty.
Future Perspectives ::: Challenges ::: Interpretability
The black-box nature of ML/DL methodologies makes assigning meaning to the results difficult. Explainability of an ML model is especially critical in drug discovery to facilitate the use of these findings by medicinal chemists, who can contribute to the knowledge loop. Explainable AI (XAI) is a current challenge that calls for increased interpretability of AI solutions for a given context and includes several factors such as trust, safety, privacy, security, fairness and confidence BIBREF131. Explainability is also critical for the domain experts to assess the reliability of new methodologies. Interpretability is usually classified into two categories: post-hoc (i.e. after) and ante-hoc (i.e. before). Post-hoc approaches explain the predictions of the model, whereas ante-hoc approaches integrate explainability into the model. Recent studies have already aimed to map the semantic meaning behind the models onto the biochemical description. An attentive pooling network, a two-way attention system that extends the attention mechanism by allowing input nodes to be aware of one another, is one approach that has been employed in drug-target interaction prediction BIBREF132. BIBREF76 showed that mapping activations of hidden neurons in feed-forward neural networks to pharmacophores, or linking atom representations computed by convolutional filters to substructures in a graph-convolution model, are possible ways of integrating explainability into AI-based drug discovery systems. BIBREF133 also demonstrated a novel approach that combines molecule generation and retrosynthesis prediction to generate synthesizable molecules. Integration of such solutions into drug discovery problems will be useful not only for computational researchers but also for the medicinal chemistry community.
Future Perspectives ::: Opportunities
The NLP field has seen tremendous advances in the past five years, starting with the introduction of distributed word embedding algorithms such as Word2Vec BIBREF72 and GloVe BIBREF79. The concept of contextualized word embeddings (i.e. ELMo) was introduced soon after BIBREF134. Here, the embedding of a word is not fixed, but changes according to the context (i.e. sentence) in which it appears. These advances continued with more complicated Transformer-based architectures such as GPT (Generative Pre-Training) BIBREF135, BERT BIBREF136, RoBERTa BIBREF137, GPT-2 BIBREF138, Transformer-XL BIBREF139, and XLNet BIBREF140. Such models with a focus on context might have a significant impact not only on drug discovery, but also on the protein folding problem, which is critical for predicting structural properties of the protein partner. Secondary structure BIBREF141, BIBREF142, BIBREF143, domain boundary BIBREF144 and fold BIBREF49 prediction studies often use sequence information in combination with similarity to available structures. The recent success of AlphaFold BIBREF145 in the Critical Assessment of Protein Structure Prediction (CASP) competitions (http://predictioncenter.org/) showed that the enhanced definitions of context, brought about by advances in machine/deep learning systems, might be useful for capturing the global dependencies in protein sequences to detect interactions between residues separated in sequence space but close together in 3D space BIBREF141.
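As an illustration of contextualized representations, the sketch below embeds a short sentence with a pre-trained Transformer encoder through the Hugging Face transformers library; the checkpoint name "bert-base-uncased" is a generic stand-in, and a domain-specific model trained on biochemical text or sequences would be substituted in practice.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Generic stand-in checkpoint; domain-specific checkpoints would be used in practice.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentence = "the kinase inhibitor binds the active site"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state   # one context-dependent vector per token
print(hidden.shape)                              # e.g. torch.Size([1, n_tokens, 768])
```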
Unsupervised learning can be applied to “big” textual data by using attention-based language models BIBREF119 and pre-trained language-model checkpoints BIBREF146. Encoder-decoder architectures have also had a significant impact on text generation and machine translation problems and were successfully applied to the molecule generation problem. As NLP moves forward, the most recent approaches, such as Topic-Guided VAE BIBREF90 and knowledge graphs with graph transformers BIBREF147, will easily find application in bio/cheminformatics.
Recent NLP models are not domain-specific, which can help models generalize BIBREF138. Current studies emphasize multi-task learning, in which DNNs share parameters to learn more information from related but distinct tasks BIBREF148, BIBREF138. Combined with the transferability of contextual word representation models, multi-task learning can also provide solutions for drug discovery, which involves many interwoven tasks such as chemical property prediction and molecule generation.
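A minimal sketch of hard parameter sharing for multi-task learning: a single shared encoder feeds two task-specific heads. The tasks, input featurization, and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Hard parameter sharing: one shared encoder, two task-specific heads."""
    def __init__(self, in_dim=1024, hid=256):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(),
                                    nn.Linear(hid, hid), nn.ReLU())
        self.property_head = nn.Linear(hid, 1)   # e.g. a property-regression task
        self.activity_head = nn.Linear(hid, 2)   # e.g. an active/inactive classification task

    def forward(self, x):
        h = self.shared(x)                       # parameters shared across both tasks
        return self.property_head(h), self.activity_head(h)

net = MultiTaskNet()
y_property, y_activity = net(torch.randn(8, 1024))
print(y_property.shape, y_activity.shape)        # torch.Size([8, 1]) torch.Size([8, 2])
```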
Language has an important power, not only for daily communication but also for the communication of codified domain knowledge. Deciphering the meaning behind text is the primary purpose of NLP, which has inevitably found its way into bio/cheminformatics. The complicated nature of biochemical text makes understanding the semantic construction of its hidden words all the more challenging and interesting. The applications we discussed in this review provide a broad perspective on how NLP is already integrated with the processing of biochemical text. A common theme in all of these applications is the use of AI-based methodologies that drive and benefit from the NLP field. Novel advances in NLP and ML are providing promising results for solving long-standing bio/cheminformatics problems.
With this review, we have summarized the impact of NLP on bio/cheminformatics to encourage this already interdisciplinary field to take advantage of recent advances. The communication between researchers from different backgrounds and domains can be enhanced through establishing a common vocabulary toward common goals. This review has been an attempt to facilitate this conversation.
Acknowledgement
This work is partially supported by TUBITAK (The Scientific and Technological Research Council of Turkey) under grant number 119E133. HO acknowledges TUBITAK-BIDEB 2211 scholarship program and thanks Gökçe Uludoğan for her comments on figures. EO thanks Prof. Amedeo Caflisch for hosting her at the University of Zurich during her sabbatical. | Yes |
9c0cf1630804366f7a79a40934e7495ad9f32346 | 9c0cf1630804366f7a79a40934e7495ad9f32346_0 | Q: Do the authors analyze what kinds of cases their new embeddings fail in where the original, less-interpretable embeddings didn't?
Text: Introduction
Knowledge Graphs (KGs) such as Freebase and WordNet have become important resources for supporting many AI applications like web search and Q&A. They store a collection of facts in the form of a graph. The nodes in the graph represent real-world entities such as Roger Federer, Tennis, and the United States, while the edges represent relationships between them.
These KGs have grown huge, but they are still not complete BIBREF1. Hence, the task of inferring new facts becomes important. Many vector space models have been proposed which can perform reasoning over KGs efficiently BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF0, BIBREF1. These methods learn representations for entities and relations as vectors in a vector space, capturing global information about the KG. The task of KG inference is then defined as operations over these vectors. Some of these methods, like BIBREF0, BIBREF1, are capable of exploiting additional text data apart from the KG, resulting in better representations.
Although these methods have shown good performance in applications, they don't address the problem of understanding the semantics of individual dimensions of the KG embedding. A recent work BIBREF6 addressed the problem of learning semantic features for KGs; however, it doesn't directly use vector space modeling.
In this work, we focus on incorporating interpretability in KG embeddings. Specifically, we aim to learn interpretable embeddings for KG entities by incorporating additional entity co-occurrence statistics from text data. This work is motivated by BIBREF7, who presented automated methods for evaluating topics learned via topic modelling methods. We adapt these measures for the vector space model and propose a method to directly maximize them while learning KG embeddings. To the best of our knowledge, this work presents the first regularization term which induces interpretability in KG embeddings.
Related Work
Several methods have been proposed for learning KG embeddings. They differ on the modeling of entities and relations, usage of text data and interpretability of the learned embeddings. We summarize some of these methods in following sections.
Vector-space models for KG Embeddings
A very effective and powerful set of models is based on translation vectors. These models represent entities as vectors in $d$-dimensional space, $\mathbb {R}^d$, and relations as translation vectors from the head entity to the tail entity, in either the same or a projected space. TransE BIBREF2 is one of the initial works, which was later improved by many others BIBREF3, BIBREF4, BIBREF8, BIBREF9, BIBREF10, BIBREF11. There are also methods which are able to incorporate text data while learning KG embeddings. BIBREF0 is one such method, which assumes a combined universal schema of relations from the KG as well as text. BIBREF1 further improves the performance by sharing parameters among similar textual relations.
Interpretability of Embedding
While the vector space models perform well in many tasks, the semantics of the learned representations are not directly clear. This problem for word embeddings was addressed by BIBREF12, where they proposed a set of constraints inducing interpretability. However, its adaptation for KG embeddings hasn't been addressed. A recent work BIBREF6 addressed a similar problem, where they learn coherent semantic features for entities and relations in KG. Our method differs from theirs in the following two aspects. First, we use vector space modeling leading directly to KG embeddings, while they need to infer KG embeddings from their probabilistic model. Second, we incorporate additional information about entities which helps in learning interpretable embeddings.
Proposed Method
We are interested in inducing interpretability in KG embeddings, and regularization is one good way to do so. Hence, we look at novel regularizers for KG embeddings and explore a measure of coherence proposed in BIBREF7. This measure allows automated evaluation of the quality of topics learned by topic modeling methods by using additional Point-wise Mutual Information (PMI) for word pairs. It was also shown to have high correlation with human evaluation of topics.
Based on this measure of coherence, we propose a regularization term. This term can be used with existing KG embedding methods (e.g. BIBREF0) for inducing interpretability. It is described in the following sections.
Coherence
In topic models, the coherence of a topic can be determined by the semantic relatedness among the top entities within the topic. This idea can also be used in vector space models by treating dimensions of the vector space as topics. With this assumption, we can use the measure of coherence defined in the following section for evaluating the interpretability of the embeddings.
$Coherence@k$ has been shown to have high correlation with human interpretability of topics learned via various topic modeling methods BIBREF7 . Hence, we can expect interpretable embeddings by maximizing it.
Coherence for the top $k$ entities along dimension $l$ is defined as follows:
$$Coherence@k^{(l)} = \sum _{i=2}^{k}\sum _{j=1}^{i-1}{p_{ij}}$$ (Eq. 5)
where $p_{ij}$ is the PMI score between entities $e_i$ and $e_j$ extracted from text data. $Coherence@k$ for the entity embedding matrix $\theta _e$ is defined as the average over all dimensions.
$$Coherence@k = \frac{1}{d} \sum _{l=1}^{d} Coherence@k^{(l)}$$ (Eq. 6)
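A NumPy sketch of $Coherence@k$ as defined in Equations 5 and 6, assuming the entity-pair PMI matrix $P$ is precomputed; the random matrices at the end only demonstrate the interface.

```python
import numpy as np

def coherence_at_k(theta_e, P, k=5):
    """theta_e: (n_entities, d) embedding matrix; P: (n, n) PMI matrix (Eqs. 5-6)."""
    n, d = theta_e.shape
    per_dim = []
    for l in range(d):
        top = np.argsort(-theta_e[:, l])[:k]          # top-k entities on dimension l
        s = sum(P[top[i], top[j]] for i in range(1, k) for j in range(i))
        per_dim.append(s)
    return float(np.mean(per_dim))                    # average over all dimensions

rng = np.random.default_rng(0)
theta_e = rng.normal(size=(100, 10))
P = rng.normal(size=(100, 100)); P = (P + P.T) / 2    # toy symmetric "PMI" matrix
print(coherence_at_k(theta_e, P, k=5))
```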
We want to learn an embedding matrix $\theta _e$ which has high coherence (i.e. which maximizes $Coherence@k$ ). Since $\theta _e$ changes during training, the set of top $k$ entities along each dimension varies over iterations. Hence, directly maximizing $Coherence@k$ seems to be tricky.
An alternative approach could be to promote higher values for entity pairs having a high PMI score $p_{ij}$. This will result in an embedding matrix $\theta _e$ with a high value of $Coherence@k$, since high-PMI entity pairs are more likely to be among the top $k$ entities.
This idea can be captured by the following coherence term:
$$\mathcal {C}(\theta _e, P) = \sum _{i=2}^{n}\sum _{j=1}^{i-1} \left\Vert v(e_i)^\intercal v(e_j) - p_{ij} \right\Vert ^2$$ (Eq. 8)
where $P$ is the entity-pair PMI matrix and $v(e)$ denotes the vector for entity $e$. This term can be used in the objective function defined in Equation 13.
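A PyTorch sketch of the coherence term in Equation 8; summing over a sampled batch of entity pairs rather than all pairs is an implementation choice assumed here for tractability.

```python
import torch

def coherence_term(entity_emb, pairs, pmi):
    """Coherence regularizer of Equation 8 over a batch of entity pairs.
    entity_emb: (n, d) parameter matrix; pairs: (m, 2) index pairs; pmi: (m,) PMI values."""
    v_i = entity_emb[pairs[:, 0]]
    v_j = entity_emb[pairs[:, 1]]
    dot = (v_i * v_j).sum(dim=1)             # v(e_i)^T v(e_j)
    return ((dot - pmi) ** 2).sum()          # penalize deviation from the PMI score

theta_e = torch.randn(100, 50, requires_grad=True)
pairs = torch.randint(0, 100, (32, 2))
pmi = torch.randn(32)
loss_c = coherence_term(theta_e, pairs, pmi)
loss_c.backward()                            # gradients flow into the entity embeddings
```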
Entity Model (Model-E)
We use the Entity Model proposed in BIBREF0 for learning KG embeddings. This model assumes a vector $v(e)$ for each entity and two vectors $v_s(r)$ and $v_o(r)$ for each relation of the KG. The score for the triple $(e_s, r, e_o)$ is given by,
$$f(e_s, r, e_o) = v(e_s)^\intercal v_s(r) + v(e_o)^\intercal v_o(r)$$ (Eq. 10)
Training these vectors requires incorrect triples, so we use the closed world assumption. For each triple $t \in \mathcal {T}$, we create two negative triples $t^-_o$ and $t^-_s$ by corrupting the object and subject of the triple respectively, such that the corrupted triples don't appear in the training, test or validation data. The loss for a triple pair is defined as $loss(t, t^-) = - \log (\sigma (f(t) - f(t^-)))$. Then, the aggregate loss function is defined as
$$L(\theta _e, \theta _r, \mathcal {T}) = \frac{1}{|\mathcal {T}|}\sum _{t\in \mathcal {T}} \left(loss(t, t^-_o) + loss(t, t^-_s) \right)$$ (Eq. 11)
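A minimal PyTorch sketch of the Entity Model score (Equation 10) and the pairwise loss (Equation 11); the negative triples here are simple random object corruptions, without the filtering against training/validation/test triples described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModelE(nn.Module):
    """Entity Model: f(e_s, r, e_o) = v(e_s)^T v_s(r) + v(e_o)^T v_o(r)  (Eq. 10)."""
    def __init__(self, n_entities, n_relations, dim=100):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)      # v(e)
        self.rel_s = nn.Embedding(n_relations, dim)   # v_s(r)
        self.rel_o = nn.Embedding(n_relations, dim)   # v_o(r)

    def score(self, s, r, o):
        return (self.ent(s) * self.rel_s(r)).sum(-1) + (self.ent(o) * self.rel_o(r)).sum(-1)

def pair_loss(model, pos, neg):
    # loss(t, t^-) = -log sigmoid(f(t) - f(t^-))  (Eq. 11, for one corrupted triple)
    return -F.logsigmoid(model.score(*pos) - model.score(*neg)).mean()

m = ModelE(n_entities=14541, n_relations=237, dim=100)
pos = (torch.randint(0, 14541, (8,)), torch.randint(0, 237, (8,)), torch.randint(0, 14541, (8,)))
neg = (pos[0], pos[1], torch.randint(0, 14541, (8,)))   # object-corrupted negatives
print(pair_loss(m, pos, neg))
```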
Objective
The overall loss function can be written as follows:
$$L(\theta _e, \theta _r, \mathcal {T}) + \lambda _c \mathcal {C}(\theta _e, P) + \lambda _r \mathcal {R}(\theta _e, \theta _r)$$ (Eq. 13)
where $\mathcal {R}(\theta _e, \theta _r) = \frac{1}{2}\left(\left\Vert \theta _e\right\Vert ^2+\left\Vert \theta _r\right\Vert ^2\right)$ is the $L2$ regularization term and $\lambda _c$ and $\lambda _r$ are hyper-parameters controlling the trade-off among the different terms in the objective function.
Datasets
We use the FB15k-237 BIBREF13 dataset for experiments. It contains 14,541 entities and 237 relations. The triples are split into training, validation and test sets having 272,115, 17,535 and 20,466 triples respectively. For extracting entity co-occurrences, we use the textual relations used in BIBREF1. This resource contains around 3.7 million textual triples, which we use for calculating PMI for entity pairs.
Experimental Setup
We use the method proposed in BIBREF0 as the baseline. Please refer to Section "Entity Model (Model-E)" for more details. For evaluating the learned embeddings, we test them on different tasks. All the hyper-parameters are tuned using performance (MRR) on validation data. We use 100 dimensions after cross-validating among 50, 100 and 200 dimensions. For regularization, we use $\lambda _r = 0.01$ (from $10, 1, 0.1, 0.01$) and $\lambda _c = 0.01$ (from $10, 1, 0.1, 0.01$) for $L2$ and coherence regularization respectively. We use multiple random initializations sampled from a Gaussian distribution. For optimization, we use gradient descent and stop optimization when the gradient becomes 0 up to 3 decimal places. The final performance measures are reported on test data.
Results
In the following sections, we compare the performance of the proposed method with the baseline method on different tasks. Please refer to Table 1 for results.
For evaluating interpretability, we use $Coherence@k$ (Equation 6), and automated and manual word intrusion tests. In the word intrusion test BIBREF14, the top $k(=5)$ entities along a dimension are mixed with the bottom-most entity (the intruder) in that dimension and shuffled. Then multiple (3 in our case) human annotators are asked to find the intruder. We use majority voting to finalize one intruder. Amazon Mechanical Turk was used for crowdsourcing the task, and we used 25 randomly selected dimensions for evaluation. For automated word intrusion BIBREF7, we calculate the following score for all $k+1$ entities:
$$\text{AutoWI}(e_i) = \sum _{j=1, j\ne i}^{k+1}{p_{ij}}$$ (Eq. 18)
where $p_{ij}$ are the PMI scores. The entity with the least score is identified as the intruder. We report the fraction of dimensions for which we were able to identify the intruder correctly.
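A NumPy sketch of the automated word-intrusion check in Equation 18; the entity ids and PMI matrix are toy values, and in practice the fraction of correct identifications would be averaged over many such candidate sets.

```python
import numpy as np

def auto_word_intrusion_correct(candidates, intruder, P):
    """Equation 18: flag the candidate with the lowest total PMI to the others,
    and report whether the flagged entity is the true intruder."""
    totals = [sum(P[e_i, e_j] for e_j in candidates if e_j != e_i) for e_i in candidates]
    return candidates[int(np.argmin(totals))] == intruder

rng = np.random.default_rng(1)
P = rng.normal(size=(50, 50)); P = (P + P.T) / 2          # toy symmetric "PMI" matrix
top_k_plus_intruder = [3, 7, 11, 19, 23, 42]              # k=5 top entities + one intruder
print(auto_word_intrusion_correct(top_k_plus_intruder, intruder=42, P=P))
```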
As we can see in Table 1, the proposed method achieves better values for $Coherence@5$ as a direct consequence of the regularization term, which maximizes coherence between appropriate entities. Performance on the word intrusion task also improves drastically, as the intruder along each dimension is much easier to identify owing to the fact that the top entities for each dimension group together more conspicuously.
In this experiment, we test the model's ability to predict the best object entity for a given subject entity and relation. For each of the triples, we fix the subject and the relation and rank all entities (within the same category as the true object entity) based on their score according to Equation 10. We report the Mean Rank (MR) and Mean Reciprocal Rank (MRR) of the true object entity, and Hits@10 (the number of times the true object entity is ranked in the top 10) as a percentage.
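A sketch of the ranking metrics reported here (MR, MRR, Hits@10), assuming a score for every candidate object entity per test triple; the restriction to candidates of the same category is omitted for brevity, and the inputs are synthetic.

```python
import numpy as np

def ranking_metrics(score_matrix, true_ids, k=10):
    """score_matrix: (n_test, n_candidates) model scores; true_ids: gold object indices."""
    ranks = []
    for scores, gold in zip(score_matrix, true_ids):
        order = np.argsort(-scores)                        # best-scoring candidate first
        ranks.append(int(np.where(order == gold)[0][0]) + 1)
    ranks = np.asarray(ranks, dtype=float)
    return {"MR": ranks.mean(),
            "MRR": (1.0 / ranks).mean(),
            f"Hits@{k}": float((ranks <= k).mean() * 100)}  # reported as a percentage

rng = np.random.default_rng(2)
print(ranking_metrics(rng.normal(size=(5, 100)), rng.integers(0, 100, size=5)))
```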
Since the objective of the coherence regularization term is tangential to that of the original loss function, it is not expected to affect performance on the link prediction task. However, the results show a trivial drop of $1.2$ in MRR, as the coherence term gives credibility to triples that are otherwise deemed incorrect by the closed world assumption.
We have used abbreviations for BS (Bachelor of Science), MS (Master of Science), UK (United Kingdom) and USA (United States of America). They appear in full form in the data.
In this experiment, we test the model on classifying correct and incorrect triples. For finding incorrect triples, we corrupt the object entity with a randomly selected entity within the same category. For classification, we use validation data to find the best threshold for each relation by training an SVM classifier and later use this threshold for classifying test triples. We report the mean accuracy and mean AUC over all relations.
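A simplified sketch of per-relation threshold selection for triple classification: an exhaustive accuracy-maximizing threshold search stands in for the SVM-based procedure described above, and the scores and labels are synthetic.

```python
import numpy as np

def best_threshold(scores, labels):
    """Pick the decision threshold that maximizes accuracy on validation triples."""
    best_t, best_acc = None, -1.0
    for t in np.unique(scores):
        acc = float(((scores >= t) == labels).mean())
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

rng = np.random.default_rng(3)
val_scores = np.concatenate([rng.normal(1.0, 1.0, 50), rng.normal(-1.0, 1.0, 50)])
val_labels = np.array([True] * 50 + [False] * 50)         # correct vs. corrupted triples
threshold = best_threshold(val_scores, val_labels)
test_pred = val_scores >= threshold                       # reuse the threshold at test time
```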
We observe that the proposed method achieves slightly better performance for triple classification, improving the accuracy by $4.4$. The PMI information adds more evidence to the correct triples which are related in the text data, generating a better threshold that more accurately distinguishes correct and incorrect triples.
Qualitative Analysis of Results
Since our aim is to induce interpretability in representations, in this section we evaluate the embeddings learned by the baseline as well as the proposed method. For both methods, we select some dimensions randomly and present the top 5 entities along those dimensions. The results are presented in Table 2.
As we can see from the results, the proposed method produces more coherent entities than the baseline method.
Conclusion and Future Works
In this work, we proposed a method for inducing interpretability in KG embeddings using a coherence regularization term. We evaluated the proposed and the baseline method on the interpretability of the learned embeddings. We also evaluated the methods on different KG tasks and compared their performance. We found that the proposed method achieves better interpretability while maintaining comparable performance on KG tasks. As next steps, we plan to evaluate the generalizability of the method with more recent KG embeddings. | No |