Dataset columns: id (string, 40 characters), pid (string, 42 characters), input (string, 8.37k to 169k characters), output (string, 1 to 1.63k characters).
bee74e96f2445900e7220bc27795bfe23accd0a7
bee74e96f2445900e7220bc27795bfe23accd0a7_0
Q: Is there a machine learning approach that tries to solve the same problem? Text:

Introduction
Language plays a vital role in human life. A language is a structured system of communication BIBREF2. There are various language systems in the world, with the estimated number being between 5,000 and 7,000 BIBREF3. Natural Language Processing (NLP) is a subfield of linguistics that aims to enable interactions between computers and human languages. The performance of NLP is evaluated by how well computers can process and analyze large amounts of natural language data BIBREF4. When discussing language processing, we cannot omit Computational Linguistics BIBREF5. Computational Linguistics is the scientific study of language from a computational perspective, and thus an interdisciplinary field involving linguistics, computer science, mathematics, logic, cognitive science, and cognitive psychology. One of the most useful tools for studying computational linguistics is the Prolog programming language BIBREF0. Prolog is a logic programming language associated with artificial intelligence and computational linguistics. Prolog can help deal with issues related not only to logic puzzles (cryptarithmetic puzzles, the Zebra Puzzle, etc.) but also to natural language processing. In this work, I utilized Prolog along with Definite Clause Grammars (DCG) BIBREF1 to solve one of the critical aspects of English grammar: active and passive sentences. DCG proves efficient in handling the grammar of a sentence. Basically, a sentence is built out of a noun phrase and a verb phrase, so the structures of the sentence, the noun phrase, and the verb phrase are all covered in this work.

In terms of English grammar, there is a lot of content to handle, as shown in Figure FIGREF1. For example, there are 12 tenses in English, such as the simple past tense, the simple present tense, the perfect present tense, etc. We also have more than three types of conditional clause, more than three types of comparative clause, and so on. This work covers active and passive sentences. For instance, if an active sentence is "a man buys an apple in the supermarket", its corresponding passive sentence is "an apple is bought by a man in the supermarket". The basic rules for rewriting an active sentence as a passive sentence are shown in Figure FIGREF2. As shown in Figure FIGREF2, the basic rules are: the object of the active sentence becomes the subject of the passive sentence; the subject of the active sentence becomes the object of the passive sentence; and the finite form of the verb is changed to "to be + past participle".

To the best of my understanding, only a few works mention the problem of active and passive sentences in terms of language processing and computational linguistics. The conversion between active and passive sentences was mentioned early in BIBREF6, which used a transformation rule to express the relationship between active and passive sentences. According to this rule, a parse tree is produced to represent the deep structure and to determine whether the given sentence is active or passive. Similarly, BIBREF7 also used a tree-to-tree mapping to represent the active/passive transformation rule. However, these works stopped at introducing how to transform an active sentence into a passive sentence and did not handle many of their cases. In fact, there are many cases of active and passive sentences, leading to extra rules for converting between them.
It is not easy to handle all these cases, and this is the main challenge of this work. My contributions are as follows: As far as I know, this may be the first work utilizing Prolog and DCG to handle a variety of cases of converting between active and passive sentences, such as the 12 English tenses, modal verbs, the negative form, etc. I proposed a compact version of the representation of the sentence, as shown in Figure FIGREF48 and Figure FIGREF50. In order to deal with the 12 tenses in English, I proposed an auxiliary-based solution (presented in Section SECREF67) that divides the 12 tenses into 4 groups; this solution reduces the workload of defining DCG rules. I also proposed a three-steps conversion (presented in Section SECREF73) for converting between active and passive sentences.

Analysis and Discussion ::: Cases to be solved
The main challenge of this work is how many cases it can handle. There are a variety of cases of active and passive sentences. The cases solved in this work are as follows (a small sketch of how the lexical cases could be encoded is given after this list).

The possibility of the conversion: the prerequisite for converting an active sentence to a passive sentence is that the active sentence must have an object. For instance, the sentence "the man buys an apple" is converted to the passive form "an apple is bought by the man"; however, the sentence "the man goes to school" cannot be converted to the passive form because it lacks an object.

The tenses of the sentence: there are 12 tenses in English, such as the simple present tense, the continuous past tense, the perfect present tense, the perfect continuous future tense, etc. Each tense has a specific way of converting between the active and the passive sentence. For example (from active to passive): in the simple present tense, "the man buys an apple" is converted to "an apple is bought by the man"; in the perfect present tense, "the man has bought an apple" is converted to "an apple has been bought by the man". This work handles all 12 tenses.

The form of the past participle: commonly, a verb is converted to its past participle form by adding "ed" at the end (for example, "add" becomes "added" and "look" becomes "looked"). However, there are exceptions, such as "buy" becoming "bought" and "see" becoming "seen".

The case of the negative sentence. For example, the negative form of "the man buys an apple" is "the man does not buy an apple", and the corresponding passive sentence is "an apple is not bought by the man".

The case of the modal verb: modal verbs (also called modals, modal auxiliary verbs, or modal auxiliaries) are special verbs which behave irregularly in English. They are different from normal verbs like "work", "play", "visit", etc. Modal verbs are always followed by an infinitive without "to". For example, the sentence "the boy should bring a pen to the class" is converted to the passive form "a pen should be brought by the boy to the class" (Figure FIGREF2).

Moreover, this work also handles the cases of singular/plural, subject pronoun/object pronoun, etc. For instance, the pronoun "he" is used as "he" in subject position but as "him" in object position.
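The past-participle and pronoun cases above are essentially lexical. A minimal Prolog sketch of how they could be encoded is shown below; the predicate names (irregular/2, past_participle/2, object_pronoun/2) are illustrative assumptions and are not the predicates used in the author's source code.

```prolog
% Minimal sketch of the lexical cases described above (illustrative only;
% the author's convertible.pl defines its own lexicon predicates).

% Irregular past participles are listed explicitly; otherwise "ed" is added.
irregular(buy, bought).
irregular(see, seen).
irregular(bring, brought).

past_participle(V, PP) :-
    irregular(V, PP), !.
past_participle(V, PP) :-
    atom_concat(V, ed, PP).   % "add" -> added, "look" -> looked
                              % (verbs ending in "e", "y", etc. would need extra rules)

% Subject pronouns and their object forms.
object_pronoun(he, him).
object_pronoun(she, her).
object_pronoun(i, me).
object_pronoun(we, us).
object_pronoun(they, them).
object_pronoun(you, you).
object_pronoun(it, it).

% Example query: ?- past_participle(buy, P).   ->   P = bought.
```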
Analysis and Discussion ::: Representation and Inference
The objects of this work are active and passive sentences, so I need to determine the representation of both. An active sentence is built out of a noun phrase and a verb phrase; therefore, the basic representation of an active sentence is s(NP,VP). A noun phrase or verb phrase is built out of fundamental elements such as determiners, nouns, adjectives, verbs, etc. The representations of these fundamental elements are as follows:
Determiner: det(X). Example: det(a), det(an), det(the), etc.
Noun: n(X). Example: n(man), n(woman), n(apple), etc.
Pronoun: pro(X). Example: pro(he), pro(she), pro(him), etc.
Adjective: adj(X). Example: adj(small), adj(big), adj(beautiful), etc.
Verb: v(X). Example: v(play), v(like), v(love), etc.
Preposition: pre(X). Example: pre(on), pre(in), pre(by), etc.
Auxiliary verb: aux(X). Example: aux(do), aux(does), aux(is), aux(be), etc. Three types of auxiliary verbs are used in this work. For example, the sentence "you will have been loving them" (perfect continuous future tense) has three auxiliary verbs, "will", "have", and "been", which are handled by the predicates aux/5, aux1/4, and aux2/4, respectively, as shown in the source code (convertible.pl).
Auxiliary verb for tense in the passive form: auxTense(X). There are three groups of auxTense. Group 1, including only the simple future tense: auxTense(be); for example, "an apple will be bought by the man". Group 2, consisting of the continuous past, continuous present, continuous future, perfect continuous past, perfect continuous present, and perfect continuous future tenses: auxTense(being); for example, "an apple was being bought by a man" and "an apple will be being bought by him". Group 3, including the perfect past, perfect present, and perfect future tenses: auxTense(been); for example, "an apple has been bought by the man" and "an apple will have been bought by the man".
Modal verb: modal(X). Example: modal(should), modal(can), modal(may), etc.
Moreover, this work also uses pol(not) for the negative form and agent(by) for the passive form.

There are several ways to build a noun phrase:
A noun phrase built out of a determiner and a noun has the representation np(DET,N). Example: the noun phrase "the man" has the representation np(det(the),n(man)).
A noun phrase built out of a pronoun such as "he", "she", "we", etc. has the representation np(PRO). Example: np(pro(he)).
A noun phrase built out of a determiner, adjectives, and a noun has the representation np(DET,ADJ,N). Example: the noun phrase "a small beautiful girl" has the representation np(det(a),adj([small, beautiful]),n(girl)).
A noun phrase built out of a noun phrase and a prepositional phrase has the representation np(DET,N,PP), np(PRO,PP), or np(DET,ADJ,N,PP). Example: the noun phrase "a cat on the big table" has the representation np(det(a),n(cat),pp(pre(on),det(the),adj([big]),n(table))).

There are two ways to build a verb phrase:
A verb phrase built out of a verb and a noun phrase has the representation vp(V,NP). Example: the verb phrase "love a beautiful woman" has the representation vp(v(love), np(det(a), adj([beautiful]), n(woman))).
A verb phrase built out of only a verb has the representation vp(V). Example: vp(v(love)) or vp(v(eat)).
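As an illustration of how such phrase representations can be produced, here is a minimal DCG sketch; it is not taken from the author's convertible.pl, and the small det/1, noun/1, adjective/1, and pronoun/1 lexicon facts are assumptions for this example only.

```prolog
% Tiny illustrative lexicon (assumption; the real lexicon lives in convertible.pl).
det(a). det(an). det(the).
noun(man). noun(girl). noun(apple).
adjective(small). adjective(beautiful).
pronoun(he). pronoun(him).

% DCG rules building the noun-phrase representations described above.
np(np(det(D), n(N)))          --> [D], { det(D) }, [N], { noun(N) }.
np(np(pro(P)))                --> [P], { pronoun(P) }.
np(np(det(D), adj(As), n(N))) --> [D], { det(D) }, adjs(As), [N], { noun(N) }.

adjs([A])    --> [A], { adjective(A) }.
adjs([A|As]) --> [A], { adjective(A) }, adjs(As).

% Example query:
% ?- phrase(np(T), [a, small, beautiful, girl]).
% T = np(det(a), adj([small, beautiful]), n(girl)).
```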
In fact, as noted above, in order to convert an active sentence to a passive sentence, the active sentence must have an object. Therefore, the case of the verb phrase vp(V) is not considered in this work. Once the representations of the noun phrase and the verb phrase are available, the representation of the sentence can be obtained. Originally, the active sentence "he buys an apple" has the representation s(np(pro(he)),vp(v(buys),np(det(an),n(apple)))). However, as noted above, this work only considers the case of the verb phrase vp(V,NP), so I proposed a compact version of the representation of the sentence, as shown in Figure FIGREF48. With this compact version, the active sentence "he buys an apple" has the representation s(np(pro(he)), v(buys), np(det(an), n(apple))), and the passive sentence "an apple is bought by him" has the representation s(np(det(an), n(apple)), aux(is), v(bought), agent(by), np(pro(him))). As introduced in DCG BIBREF1, the representation of the sentence is a "parse tree", as illustrated in Figure FIGREF48 (active sentence) and Figure FIGREF50 (passive sentence). The parse tree can be found with the help of advanced techniques such as extra arguments and extra goals.

"Inference" is the conversion between a sentence and its representation, or even the conversion between an active sentence and a passive sentence: given a sentence, "inference" is the process of getting the representation of that sentence; given a representation of a sentence, "inference" is the process of getting that sentence. The final purpose of this work is: given an active sentence, we get the corresponding passive sentence; and vice versa, given a passive sentence, we get the corresponding active sentence.

Design and Implementation ::: Scenario for user interaction
The user interacts with the program by posing a query of the form (Figure FIGREF56): convert(ActiveS, ActiveRe, PassiveS, PassiveRe), where:
ActiveS: the active sentence, represented by a list in which each element corresponds to a word of the sentence. Example: [he,buys,an,apple].
ActiveRe: the representation of the active sentence ActiveS. Example: s(np(pro(he)),v(buys),np(det(an),n(apple))).
PassiveS: the passive sentence, represented by a list in which each element corresponds to a word of the sentence. Example: [an,apple,is,bought,by,him].
PassiveRe: the representation of the passive sentence PassiveS. Example: s(np(det(an),n(apple)),aux(is),v(bought),agent(by),np(pro(him))).

The input is either ActiveS (for converting an active sentence to a passive sentence) or PassiveS (for converting a passive sentence to an active sentence). There are several cases of output:
If the input is ActiveS and it can be converted to a passive sentence, the outputs are ActiveRe, PassiveS, and PassiveRe.
If the input is PassiveS and it can be converted to an active sentence, the outputs are ActiveS, ActiveRe, and PassiveRe.
If the input is either ActiveS or PassiveS but it cannot be converted, the output is 'false'. Cases that cannot be converted include: a passive sentence typed as ActiveS; an active sentence typed as PassiveS; and an active sentence with no object typed as ActiveS (for example, the sentence "he goes" cannot be converted to a passive sentence).

Notably, we can also pose the query with no input, and the program will generate all possible pairs of active and passive sentences. Examples that make the user interaction clearer are presented in Section SECREF4.
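For illustration, the queries below show how the convert/4 interface described above behaves on the paper's running example; the exact output formatting depends on the author's implementation in convert.pl.

```prolog
% Converting an active sentence to its passive counterpart:
?- convert([he,buys,an,apple], ActiveRe, PassiveS, PassiveRe).
% ActiveRe  = s(np(pro(he)), v(buys), np(det(an), n(apple))),
% PassiveS  = [an, apple, is, bought, by, him],
% PassiveRe = s(np(det(an), n(apple)), aux(is), v(bought), agent(by), np(pro(him))).

% Converting a passive sentence back to its active counterpart:
?- convert(ActiveS, ActiveRe, [an,apple,is,bought,by,him], PassiveRe).
% ActiveS  = [he, buys, an, apple],
% ActiveRe = s(np(pro(he)), v(buys), np(det(an), n(apple))).

% An active sentence without an object has no passive counterpart:
?- convert([he,goes], _, _, _).
% false.
```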
Design and Implementation ::: Auxiliary-based solution to handle 12 English tenses
There are 12 tenses in English, and each tense has a specific sentence structure. Handling each tense individually would be lengthy and far from optimal. Therefore, based on my observations, I found a solution that divides the 12 English tenses into 4 groups (same color means same group in Figure FIGREF72) according to the number of auxiliary verbs in the active sentence. This solution is summarized in Figure FIGREF72 and consists of:
Group 1: the number of auxiliary verbs in the active sentence is 0. This group consists of the simple past tense and the simple present tense.
Group 2: the number of auxiliary verbs in the active sentence is 1. There are 5 tenses in this group: the simple future tense, the continuous past tense, the continuous present tense, the perfect past tense, and the perfect present tense.
Group 3: the number of auxiliary verbs in the active sentence is 2. This group consists of the continuous future tense, the perfect future tense, the perfect continuous past tense, and the perfect continuous present tense.
Group 4: the number of auxiliary verbs in the active sentence is 3. This group has only one tense, the perfect continuous future tense.
As can be seen in Figure FIGREF72, tenses in the same group have the same structure of representation. The DCG rules for the active and passive sentences of group 3, for example, are implemented in convertible.pl.

Design and Implementation ::: Three-steps conversion
The three-steps conversion consists of three steps: first, from the input sentence given as a list, the program finds the representation of the sentence; second, from the representation of the active (or passive) sentence, the program finds the representation of the passive (or active) sentence, respectively; third, from the representation obtained in the 2nd step, the program returns the converted sentence as a list. The three-steps conversion is implemented in convert.pl. The 1st and 3rd steps are done using DCG rules (implemented in convertible.pl). The 2nd step is done by a simple rule that converts between corresponding elements of the two representations (an illustrative sketch of this idea is given at the end of this section). More details for the other groups are given in convert.pl.

Design and Implementation ::: Others
All the implementations above are for the positive form of the sentence. The negative form can easily be handled by inheriting the rules defined for the positive form: the DCG rules for the negative form are almost identical to those for the positive form, except for the pol/1 predicate, and the 2nd step for the negative form completely reuses the rule for the positive form. The only exception is the 2nd step for group 1, which needs the extra rule lex(AUX_POL,pol,Tense,Qs), because in this negative form an extra auxiliary verb is needed. For example, the positive sentence is "he buys an apple", but the corresponding negative sentence is "he does not buy an apple". Other implementations, such as the lexicon and modal verbs, are written in the source code.
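To make the idea of the 2nd step concrete, here is a minimal sketch for group 1 in the simple present tense. This is my own illustration under assumed predicate names (convert_rep/2, lex_participle/3, object_case/2); the author's convert.pl uses its own rules and covers all four groups as well as the negative form.

```prolog
% Illustrative lexical helpers (assumptions for this sketch only).
lex_participle(buys, bought, aux(is)).        % finite verb, past participle, "to be" form

object_case(np(pro(he)),  np(pro(him))) :- !.
object_case(np(pro(she)), np(pro(her))) :- !.
object_case(NP, NP).                          % full noun phrases keep their form

% 2nd step (group 1, positive form): map the elements of the active
% representation onto the corresponding elements of the passive one.
% As described above, the negative form reuses the same mapping and, for
% group 1, additionally carries pol(not) together with the extra auxiliary
% ("does not buy" / "is not bought").
convert_rep(s(SubjNP, v(V), ObjNP),
            s(ObjNP, Aux, v(PP), agent(by), ByNP)) :-
    lex_participle(V, PP, Aux),
    object_case(SubjNP, ByNP).

% Example query:
% ?- convert_rep(s(np(pro(he)), v(buys), np(det(an), n(apple))), P).
% P = s(np(det(an), n(apple)), aux(is), v(bought), agent(by), np(pro(him))).
```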
Results
This work is implemented in three files:
convertible.pl: implements the DCG rules for the 1st and 3rd steps of the three-steps conversion, as well as other rules, including the lexicon.
convert.pl: implements the three-steps conversion and, in particular, its 2nd step.
testSuite.pl: provides commands for user interaction. Users do not need to type the input sentence as a list (like [the, man, buys, an, apple]) but can type the sentence in the usual way (directly: the man buys an apple) by using the two commands active and passive. Moreover, users can easily check the correctness of the program by using the two test-suite commands activeTestSuite and passiveTestSuite. Note that when using the active or passive commands, every word typed has to be defined in the lexicon, or users have to add it to the lexicon (implemented in convertible.pl).

Conclusion
I introduced an effort to solve the problem of active and passive sentences using Prolog in terms of computational linguistics. By observing the conditions under which an active sentence can be converted to a passive sentence, I proposed a compact version of the representation of the sentence (Figure FIGREF48 and Figure FIGREF50). I also introduced the auxiliary-based solution (Section SECREF67) to deal with the 12 tenses in English; it helps to reduce the workload of defining DCG rules. Finally, I proposed the three-steps conversion (Section SECREF73) for converting between active and passive sentences. In the future, this work should cover as many further cases of active and passive sentences as possible.
Unanswerable
a56fbe90d5d349336f94ef034ba0d46450525d19
a56fbe90d5d349336f94ef034ba0d46450525d19_0
Q: What DCGs are used? Text: (same paper text as in the first row)
Author's own DCG rules are defined from scratch.
b1f2db88a6f89d0f048803e38a0a568f5ba38fc5
b1f2db88a6f89d0f048803e38a0a568f5ba38fc5_0
Q: What else is tried to be solved other than the 12 tenses, modal verbs, and the negative form? Text: (same paper text as in the first row)
cases of singular/plural, subject pronoun/object pronoun, etc.
cf3af2b68648fa8695e7234b6928d014e3b141f1
cf3af2b68648fa8695e7234b6928d014e3b141f1_0
Q: What is used for evaluation of this approach? Text:
It is not easy to handle all these cases, and this is the main challenge of this work. My contributions are as follows: As far as I know, this may be the first work utilizing Prolog and DCG to solve a variety of cases of converting between active and passive sentences, such as the 12 English tenses, modal verbs, the negative form, etc. I propose a compact version of the representation of the sentence, as shown in Figure FIGREF48 and Figure FIGREF50. In order to deal with the 12 tenses in English, I propose an auxiliary-based solution (presented in Section SECREF67) that divides the 12 tenses into 4 groups; this solution reduces the workload of defining DCG rules. I also propose a three-steps conversion (presented in Section SECREF73) for converting between active and passive sentences.

Analysis and Discussion ::: Cases to be solved

The main challenge of this work is how many cases it can handle. There is a variety of cases of active and passive sentences. The cases solved in this work are the following.

The possibility of the conversion: the prerequisite for converting an active sentence to a passive sentence is that the active sentence must have an object. For instance, the sentence "the man buys an apple" is converted to the passive form "an apple is bought by the man"; however, the sentence "the man goes to school" cannot be converted to the passive form because it lacks an object.

The tenses of the sentence: there are 12 tenses in English, such as the simple present tense, continuous past tense, perfect present tense, perfect continuous future tense, etc. Each tense has a specific way of converting between the active and passive sentence. For example (from active form to passive form): in the simple present tense, "the man buys an apple" is converted to "an apple is bought by the man"; in the perfect present tense, "the man has bought an apple" is converted to "an apple has been bought by the man". This work handles all 12 tenses.

The form of the past participle: commonly, a verb is converted to its past participle form by adding "ed" at the end (for example, "add" becomes "added", "look" becomes "looked"). However, there are exceptions, such as "buy" becoming "bought" and "see" becoming "seen".

The case of the negative sentence: for example, the negative form of "the man buys an apple" is "the man does not buy an apple", and the corresponding passive sentence is "an apple is not bought by the man".

The case of the modal verb: modal verbs (also called modals, modal auxiliary verbs, or modal auxiliaries) are special verbs which behave irregularly in English. They are different from normal verbs like "work", "play", "visit", etc., and are always followed by an infinitive without "to". For example, the sentence "the boy should bring a pen to the class" is converted to the passive form "a pen should be brought by the boy to the class" (Figure FIGREF2).

Moreover, this work also handles the cases of singular/plural, subject pronoun/object pronoun, etc. For instance, the pronoun "he" is used for the subject as "he" but for the object as "him".
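As a small illustration of how the irregular past participles and the subject/object pronoun cases above could be encoded, here is a hedged sketch of lexicon-style facts. The actual entries live in convertible.pl; the predicate names past_participle/2, irregular/2, and object_form/2 are my own and only approximate the paper's lexicon.

% Regular verbs take "ed"; irregular verbs are listed explicitly.
past_participle(V, PP) :- irregular(V, PP), !.
past_participle(V, PP) :- atom_concat(V, ed, PP).

irregular(buy, bought).
irregular(see, seen).
irregular(bring, brought).

% Subject pronoun -> object pronoun, used when the active subject
% moves behind "by" in the passive sentence.
object_form(he, him).
object_form(she, her).
object_form(we, us).
object_form(they, them).

% ?- past_participle(look, PP).   % PP = looked
% ?- past_participle(buy, PP).    % PP = bought
% ?- object_form(he, Obj).        % Obj = him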
Analysis and Discussion ::: Representation and Inference

The objects of this work are active and passive sentences, so I need to determine the representation of both. An active sentence is built out of a noun phrase and a verb phrase, so the basic representation of an active sentence is s(NP,VP). The noun phrase or verb phrase is built out of fundamental elements such as a determiner, noun, adjective, verb, etc. The representations of the fundamental elements are as follows:

Determiner: det(X). Example: det(a), det(an), det(the), etc.
Noun: n(X). Example: n(man), n(woman), n(apple), etc.
Pronoun: pro(X). Example: pro(he), pro(she), pro(him), etc.
Adjective: adj(X). Example: adj(small), adj(big), adj(beautiful), etc.
Verb: v(X). Example: v(play), v(like), v(love), etc.
Preposition: pre(X). Example: pre(on), pre(in), pre(by), etc.
Auxiliary verb: aux(X). Example: aux(do), aux(does), aux(is), aux(be), etc. Three types of auxiliary verbs are used in this work. For example, the sentence "you will have been loving them" (perfect continuous future tense) has three auxiliary verbs, "will", "have", and "been", which are determined by the three predicates aux/5, aux1/4, and aux2/4, respectively, as shown in the source code (convertible.pl).
Auxiliary verb for tense in the passive form: auxTense(X). There are three groups of auxTense. Group 1, including only the simple future tense: auxTense(be), e.g., "an apple will be bought by the man". Group 2, consisting of the continuous past, continuous present, continuous future, perfect continuous past, perfect continuous present, and perfect continuous future tenses: auxTense(being), e.g., "an apple was being bought by a man", "an apple will be being bought by him". Group 3, including the perfect past, perfect present, and perfect future tenses: auxTense(been), e.g., "an apple has been bought by the man", "an apple will have been bought by the man".
Modal verb: modal(X). Example: modal(should), modal(can), modal(may), etc.

Moreover, this work also uses pol(not) for the negative form and agent(by) for the passive form.

With a noun phrase, there are several ways to build it:
A noun phrase built out of a determiner and a noun has the representation np(DET,N). Example: the noun phrase "the man" has the representation np(det(the),n(man)).
A noun phrase built out of a pronoun such as "he", "she", "we", etc. has the representation np(PRO). For example: np(pro(he)).
A noun phrase built out of a determiner, adjectives, and a noun has the representation np(DET,ADJ,N). For example, the noun phrase "a small beautiful girl" has the representation np(det(a),adj([small, beautiful]),n(girl)).
A noun phrase built out of a noun phrase and a prepositional phrase has the representation np(DET,N,PP), np(PRO,PP), or np(DET,ADJ,N,PP). For example, the noun phrase "a cat on the big table" has the representation np(det(a),n(cat),pp(pre(on),det(the),adj([big]),n(table))).

With a verb phrase, there are two ways to build it:
A verb phrase built out of a verb and a noun phrase has the representation vp(V,NP). For example, the verb phrase "love a beautiful woman" has the representation vp(v(love), np(det(a), adj([beautiful]), n(woman))).
A verb phrase built out of only a verb has the representation vp(V). Example: vp(v(love)) or vp(v(eat)).
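The noun-phrase and verb-phrase representations listed above map naturally onto DCG rules. The sketch below is my own reconstruction, not the author's convertible.pl rules; the lexical predicates (lex_det/1, lex_n/1, lex_adj/1, lex_pro/1, lex_v/1, lex_pre/1) are placeholders for the lexicon described in the text.

% Noun-phrase rules building the representations described above.
np(np(det(D), n(N)))          --> [D], { lex_det(D) }, [N], { lex_n(N) }.
np(np(pro(P)))                --> [P], { lex_pro(P) }.
np(np(det(D), adj(As), n(N))) --> [D], { lex_det(D) }, adjs(As), [N], { lex_n(N) }.
np(np(det(D), n(N), PP))      --> [D], { lex_det(D) }, [N], { lex_n(N) }, pp(PP).

adjs([A])      --> [A], { lex_adj(A) }.
adjs([A | As]) --> [A], { lex_adj(A) }, adjs(As).

pp(pp(pre(P), det(D), adj(As), n(N))) -->
    [P], { lex_pre(P) }, [D], { lex_det(D) }, adjs(As), [N], { lex_n(N) }.

% Verb phrase built out of a verb and a noun phrase.
vp(vp(v(V), NP)) --> [V], { lex_v(V) }, np(NP).

% Example lexicon entries (illustrative only).
lex_det(a). lex_det(an). lex_det(the).
lex_n(cat). lex_n(table). lex_n(girl).
lex_adj(small). lex_adj(beautiful). lex_adj(big).
lex_pro(he). lex_pro(she).
lex_v(love). lex_v(buys).
lex_pre(on). lex_pre(in).

% ?- phrase(np(T), [a, cat, on, the, big, table]).
% T = np(det(a), n(cat), pp(pre(on), det(the), adj([big]), n(table))).

The last query reproduces the representation the text gives for "a cat on the big table".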
In fact, as presented above, in order to be able to convert from an active sentence to a passive sentence, the active sentence has to have an object. Therefore, the case of the verb phrase vp(V) is not considered in this work.

After obtaining the representations of the noun phrase and the verb phrase, the representation of the sentence can be obtained. Originally, the active sentence "he buys an apple" has the representation s(np(pro(he)),vp(v(buys),np(det(an),n(apple)))). However, as presented above, this work only considers the case of the verb phrase vp(V,NP), so I propose a compact version of the representation of the sentence, as shown in Figure FIGREF48. Therefore, the active sentence "he buys an apple" has the representation s(np(pro(he)), v(buys), np(det(an), n(apple))), and the passive sentence "an apple is bought by him" has the representation s(np(det(an), n(apple)), aux(is), v(bought), agent(by), np(pro(him))). As introduced in DCG BIBREF1, the representation of the sentence is a "parse tree", as illustrated in Figure FIGREF48 (active sentence) and Figure FIGREF50 (passive sentence). The parse tree can be found with the help of advanced techniques like extra arguments and extra goals.

"Inference" is the conversion between a sentence and its representation, or even the conversion between an active sentence and a passive sentence: given a sentence, "inference" is the process of getting the representation of that sentence; given a representation of a sentence, "inference" is the process of getting that sentence. The final purpose of this work is: given an active sentence, we get the respective passive sentence; and vice versa, given a passive sentence, we get the respective active sentence.

Design and Implementation ::: Scenario for user interaction

The user interacts with the program by posing a query of the form (Figure FIGREF56) convert(ActiveS, ActiveRe, PassiveS, PassiveRe), where:
ActiveS: the active sentence, represented by a list in which each element corresponds to a word of the sentence. Example: [he,buys,an,apple].
ActiveRe: the representation of the active sentence ActiveS. Example: s(np(pro(he)),v(buys),np(det(an),n(apple))).
PassiveS: the passive sentence, represented by a list in which each element corresponds to a word of the sentence. Example: [an,apple,is,bought,by,him].
PassiveRe: the representation of the passive sentence PassiveS. Example: s(np(det(an),n(apple)),aux(is),v(bought),agent(by),np(pro(him))).

The input will be either ActiveS or PassiveS, for converting from an active sentence to a passive sentence or from a passive sentence to an active sentence, respectively. There are several cases of output: if the input is ActiveS and it can be converted to a passive sentence, the outputs are ActiveRe, PassiveS, and PassiveRe; if the input is PassiveS and it can be converted to an active sentence, the outputs are ActiveS, ActiveRe, and PassiveRe; if the input is either ActiveS or PassiveS but it cannot be converted, the output is 'false'. Some cases cannot be converted: ActiveS is an active sentence but is typed as a passive sentence; PassiveS is a passive sentence but is typed as an active sentence; ActiveS is an active sentence having no object (for example, the sentence "he goes" cannot be converted to a passive sentence). In particular, we can pose the query with no input, and the program will generate all possible combinations of active and passive sentences. Some examples that make the user interaction clearer are presented in Section SECREF4.
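Using the argument values given above, an interaction with convert/4 would plausibly look like the following session sketch; the exact output formatting of the real convert.pl may differ, but the bindings shown are exactly the examples from the text.

% Active -> passive: supply ActiveS and leave the other arguments unbound.
?- convert([he,buys,an,apple], ActiveRe, PassiveS, PassiveRe).
ActiveRe  = s(np(pro(he)), v(buys), np(det(an), n(apple))),
PassiveS  = [an, apple, is, bought, by, him],
PassiveRe = s(np(det(an), n(apple)), aux(is), v(bought), agent(by), np(pro(him))).

% Passive -> active: supply PassiveS instead.
?- convert(ActiveS, ActiveRe, [an,apple,is,bought,by,him], PassiveRe).
ActiveS   = [he, buys, an, apple],
ActiveRe  = s(np(pro(he)), v(buys), np(det(an), n(apple))),
PassiveRe = s(np(det(an), n(apple)), aux(is), v(bought), agent(by), np(pro(him))).

% An active sentence with no object cannot be converted.
?- convert([he,goes], _, _, _).
false.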
Unanswerable
7883a52f008f3c4aabfc9f71ce05d7c4107e79bb
7883a52f008f3c4aabfc9f71ce05d7c4107e79bb_0
Q: Is there information about performance of these conversion methods?
No
cd9776d03fe48903e43e916385df12e1e798070a
cd9776d03fe48903e43e916385df12e1e798070a_0
Q: Are there some experiments performed in the paper?
No
1a252ffeaebdb189317aefd6c606652ba9677112
1a252ffeaebdb189317aefd6c606652ba9677112_0
Q: How much is performance improved by disabling attention in certain heads? Text: Introduction Over the past year, models based on the Transformer architecture BIBREF0 have become the de-facto standard for state-of-the-art performance on many natural language processing (NLP) tasks BIBREF1, BIBREF2. Their key feature is the self-attention mechanism that provides an alternative to conventionally used recurrent neural networks (RNN). One of the most popular Transformer-based models is BERT, which learns text representations using a bi-directional Transformer encoder pre-trained on the language modeling task BIBREF2. BERT-based architectures have produced new state-of-the-art performance on a range of NLP tasks of different nature, domain, and complexity, including question answering, sequence tagging, sentiment analysis, and inference. State-of-the-art performance is usually obtained by fine-tuning the pre-trained model on the specific task. In particular, BERT-based models are currently dominating the leaderboards for SQuAD BIBREF3 and GLUE benchmarks BIBREF4. However, the exact mechanisms that contribute to the BERT's outstanding performance still remain unclear. We address this problem through selecting a set of linguistic features of interest and conducting a series of experiments that aim to provide insights about how well these features are captured by BERT. This paper makes the following contributions: We propose the methodology and offer the first detailed analysis of BERT's capacity to capture different kinds of linguistic information by encoding it in its self-attention weights. We present the evidence of BERT's overparametrization and suggest a counter-intuitive yet frustratingly simple way of improving its performance, showing absolute gains of up to 3.2%. Related work There have been several recent attempts to assess BERT's ability to capture structural properties of language. BIBREF5 demonstrated that BERT consistently assigns higher scores to the correct verb forms as opposed to the incorrect one in a masked language modeling task, suggesting some ability to model subject-verb agreement. BIBREF6 extended this work to using multiple layers and tasks, supporting the claim that BERT's intermediate layers capture rich linguistic information. On the other hand, BIBREF7 concluded that LSTMs generalize to longer sequences better, and are more robust with respect to agreement distractors, compared to Transformers. BIBREF8 investigated the transferability of contextualized word representations to a number of probing tasks requiring linguistic knowledge. Their findings suggest that (a) the middle layers of Transformer-based architectures are the most transferable to other tasks, and (b) higher layers of Transformers are not as task specific as the ones of RNNs. BIBREF9 argued that models using self-attention outperform CNN- and RNN-based models on a word sense disambiguation task due to their ability to extract semantic features from text. Our work contributes to the above discussion, but rather than examining representations extracted from different layers, we focus on the understanding of the self-attention mechanism itself, since it is the key feature of Transformer-based models. Another research direction that is relevant to our work is neural network pruning. BIBREF10 showed that widely used complex architectures suffer from overparameterization, and can be significantly reduced in size without a loss in performance. 
BIBREF5 observed that the smaller version of BERT achieves better scores on a number of syntax-testing experiments than the larger one. BIBREF11 questioned the necessity of computation-heavy neural networks, proving that a simple yet carefully tuned BiLSTM without attention achieves the best or at least competitive results compared to more complex architectures on the document classification task. BIBREF12 presented more evidence of unnecessary complexity of the self-attention mechanism, and proposed a more lightweight and scalable dynamic convolution-based architecture that outperforms the self-attention baseline. These studies suggest a potential direction for future research, and are in good accordance with our observations. Methodology We pose the following research questions: What are the common attention patterns, how do they change during fine-tuning, and how does that impact the performance on a given task? (Sec. SECREF17, SECREF30) What linguistic knowledge is encoded in self-attention weights of the fine-tuned models and what portion of it comes from the pre-trained BERT? (Sec. SECREF25, SECREF34, SECREF36) How different are the self-attention patterns of different heads, and how important are they for a given task? (Sec. SECREF39) The answers to these questions come from a series of experiments with the basic pre-trained or the fine-tuned BERT models, as will be discussed below. All the experiments with the pre-trained BERT were conducted using the model provided with the PyTorch implementation of BERT (bert-base-uncased, 12-layer, 768-hidden, 12-heads, 110M parameters). We chose this smaller version of BERT because it shows competitive, if not better, performance while having fewer layers and heads, which makes it more interpretable. We use the following subset of GLUE tasks BIBREF4 for fine-tuning: MRPC: the Microsoft Research Paraphrase Corpus BIBREF13 STS-B: the Semantic Textual Similarity Benchmark BIBREF14 SST-2: the Stanford Sentiment Treebank, two-way classification BIBREF15 QQP: the Quora Question Pairs dataset RTE: the Recognizing Textual Entailment datasets QNLI: Question-answering NLI based on the Stanford Question Answering Dataset BIBREF3 MNLI: the Multi-Genre Natural Language Inference Corpus, matched section BIBREF16 Please refer to the original GLUE paper for details on the QQP and RTE datasets BIBREF4. We excluded two tasks: CoLa and the Winograd Schema Challenge. The latter is excluded due to the small size of the dataset. As for CoLa (the task of predicting linguistic acceptability judgments), GLUE authors report that the human performance is only 66.4, which is explained by the problems with the underlying methodology BIBREF17. Note also that CoLa is not included in the upcoming version of GLUE BIBREF18. All fine-tuning experiments follow the parameters reported in the original study (a batch size of 32 and 3 epochs, see devlin2018bert). In all these experiments, for a given input, we extract self-attention weights for each head in every layer. This results in a 2D float array of shape $L\times L$, where $L$ is the length of an input sequence. We will refer to such arrays as self-attention maps. Analysis of individual self-attention maps allows us to determine which target tokens are attended to the most as the input is processed token by token. 
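The self-attention maps described here can be reproduced with standard tooling. The sketch below uses the current Hugging Face transformers API, which may differ from the exact BERT implementation the authors used; it extracts, for a single input, the L×L attention map of every head in every layer of bert-base-uncased.

```python
# Minimal sketch: extract per-head self-attention maps from bert-base-uncased.
# Assumes the Hugging Face `transformers` package; the authors' exact tooling may differ.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

sentence = "He was becoming agitated."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, L, L), where L includes [CLS] and [SEP].
attn = torch.stack(outputs.attentions).squeeze(1)   # (12, 12, L, L) for bert-base
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())

head_map = attn[3, 0]   # self-attention map of one head (layer 4, head 1, 0-indexed as 3, 0)
print(tokens)
print(head_map.shape)   # rows are query tokens, columns are attended-to tokens
```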
We use these experiments to analyze how BERT processes different kinds of linguistic information, including the processing of different parts of speech (nouns, pronouns, and verbs), syntactic roles (objects, subjects), semantic relations, and negation tokens. Experiments In this section, we present the experiments conducted to address the above research questions. Experiments ::: BERT's self-attention patterns Manual inspection of self-attention maps for both basic pre-trained and fine-tuned BERT models suggested that there is a limited set of self-attention map types that are repeatedly encoded across different heads. Consistent with previous observations, we identified five frequently occurring patterns, examples of which are shown in fig:atttypes: Vertical: mainly corresponds to attention to the special BERT tokens [CLS] and [SEP]; Diagonal: formed by attention to the previous/following tokens; Vertical+Diagonal: a mix of the previous two types; Block: intra-sentence attention for tasks with two distinct sentences (such as RTE or MRPC); Heterogeneous: highly variable depending on the specific input and not characterized by a distinct structure. Whereas attention to the special tokens is important for cross-sentence reasoning, and attention to the previous/following token comes from language model pre-training, we hypothesize that the last of the listed types is more likely to capture interpretable linguistic features necessary for language understanding. To get a rough estimate of the percentage of attention heads that may capture linguistically interpretable information, we manually annotated around 400 sample self-attention maps as belonging to one of the five classes. The self-attention maps were obtained by feeding random input examples from selected tasks into the corresponding fine-tuned BERT model. This produced a somewhat unbalanced dataset, in which the “Vertical” class accounted for 30% of all samples. We then trained a convolutional neural network with 8 convolutional layers and ReLU activation functions to classify input maps into one of these classes. This model achieved an F1 score of 0.86 on the annotated dataset. We used this classifier to estimate the proportion of different self-attention patterns for the target GLUE tasks using up to 1000 examples (where available) from each validation set. Experiments ::: BERT's self-attention patterns ::: Results fig:attentionbydataset shows that the self-attention map types described above are consistently repeated across different heads and tasks. While a large portion of the encoded information corresponds to attention to the previous/following token, to the special tokens, or a mixture of the two (the first three classes), the estimated upper bound on all heads in the “Heterogeneous” category (i.e. the ones that could be informative) varies from 32% (MRPC) to 61% (QQP) depending on the task. We would like to emphasize that this only gives the upper bound on the percentage of attention heads that could potentially capture meaningful structural information beyond adjacency and separator tokens. Experiments ::: Relation-specific heads in BERT In this experiment, our goal was to understand whether different syntactic and semantic relations are captured by self-attention patterns. While a large number of such relations could be investigated, we chose to examine semantic role relations defined in frame semantics, since they can be viewed as being at the intersection of syntax and semantics.
Specifically, we focused on whether BERT captures FrameNet's relations between frame-evoking lexical units (predicates) and core frame elements BIBREF19, and whether the links between them produce higher attention weights in certain specific heads. We used pre-trained BERT in these experiments. The data for this experiment comes from FrameNet BIBREF19, a database that contains frame annotations for example sentences for different lexical units. Frame elements correspond to semantic roles for a given frame, for example, “buyer”, “seller”, and “goods” for the “Commercial_transaction” frame evoked by the words “sell” and “spend”, or “topic” and “text” for the “Scrutiny” semantic frame evoked by the verb “address”. fig:framenet shows an example of such an annotation. We extracted sample sentences for every lexical unit in the database and identified the corresponding core frame elements. Annotated elements in FrameNet may be rather long, so we considered only the sentences with frame elements of 3 tokens or fewer. Since each sentence is annotated for only one frame, semantic links from other frames can exist between unmarked elements. We therefore filter out all the sentences longer than 12 tokens, since shorter sentences are less likely to evoke multiple frames. To establish whether BERT attention captures semantic relations that do not simply correspond to the previous/following token, we exclude sentences where the linked objects are less than two tokens apart. This leaves us with 473 annotated sentences. For each of these sentences, we obtain pre-trained BERT's attention weights for each of the 144 heads. For every head, we return the maximum absolute attention weight among those token pairs that correspond to the annotated semantic link contained within a given sentence. We then average the derived scores over all the collected examples. This strategy allows us to identify the heads that prioritize the features correlated with frame-semantic relations within a sentence. Experiments ::: Relation-specific heads in BERT ::: Results The heatmap of averaged attention scores over all collected examples (fig:framenetresults) suggests that 2 out of 144 heads tend to attend to the parts of the sentence that FrameNet annotators identified as core elements of the same frame. fig:framenetresults shows an example of this attention pattern for these two heads. Both show a high attention weight for “he” while processing “agitated” in the sentence “He was becoming agitated” (the frame “Emotion_directed”). Experiments ::: Change in self-attention patterns after fine-tuning Fine-tuning has a huge effect on performance, and this section attempts to find out why. To study how attention per head changes on average for each of the target GLUE tasks, we calculate the cosine similarity between pre-trained and fine-tuned BERT's flattened arrays of attention weights. We average the derived similarities over all the development set examples. To evaluate the contribution of pre-trained BERT to overall performance on the tasks, we consider two configurations of weight initialization, namely, pre-trained BERT weights and weights randomly sampled from a normal distribution. Experiments ::: Change in self-attention patterns after fine-tuning ::: Results fig:cosine shows that for all the tasks except QQP, it is the last two layers that undergo the largest changes compared to the pre-trained BERT model.
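As a concrete illustration of the comparison just described, the per-head cosine similarity between pre-trained and fine-tuned attention can be computed by flattening the two L×L maps. This is only a sketch of the metric, assuming both sets of attention tensors have already been extracted as shown earlier; it is not the authors' code.

```python
# Sketch: per-head cosine similarity between attention maps of two models.
# Assumes `attn_pre` and `attn_tuned` have shape (num_layers, num_heads, L, L),
# extracted for the same input from pre-trained and fine-tuned BERT, respectively.
import torch
import torch.nn.functional as F

def per_head_cosine(attn_pre: torch.Tensor, attn_tuned: torch.Tensor) -> torch.Tensor:
    """Return a (num_layers, num_heads) matrix of cosine similarities."""
    flat_pre = attn_pre.flatten(start_dim=2)     # (layers, heads, L*L)
    flat_tuned = attn_tuned.flatten(start_dim=2)
    return F.cosine_similarity(flat_pre, flat_tuned, dim=-1)

# Averaging these matrices over all development-set examples gives the
# layer-by-layer picture of how much fine-tuning changed each head.
```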
At the same time, tab:glue-results shows that fine-tuned BERT outperforms pre-trained BERT by a significant margin on all the tasks (with an average of 35.9 points of absolute difference). This leads us to conclude that the last two layers encode task-specific features that account for the gain in scores, while the earlier layers capture more fundamental and low-level information used in the fine-tuned models. Randomly initialized BERT consistently produces lower scores than those achieved with pre-trained BERT. In fact, for some tasks (STS-B and QNLI), initialization with random weights gives worse performance than that of pre-trained BERT alone without fine-tuning. This suggests that pre-trained BERT does indeed contain linguistic knowledge that is helpful for solving these GLUE tasks. These results are consistent with similar studies, e.g., BIBREF20's results on fine-tuning a convolutional neural network pre-trained on ImageNet or BIBREF21's results on transfer learning for medical natural language inference. Experiments ::: Attention to linguistic features In this experiment, we investigate whether fine-tuning BERT for a given task creates self-attention patterns that emphasize specific linguistic features. In this case, certain kinds of tokens may get high attention weights from all the other tokens in the sentence, producing vertical stripes on the corresponding attention maps (fig:atttypes). We tested this hypothesis by checking whether there are vertical stripe patterns corresponding to certain linguistically interpretable features, and to what extent such features are relevant for solving a given task. In particular, we investigated attention to nouns, verbs, pronouns, subjects, objects, negation words, and special BERT tokens across the tasks. For every head, we compute the sum of self-attention weights assigned to the token of interest from each input token. Since the weights depend on the number of tokens in the input sequence, this sum is normalized by sequence length. This allows us to aggregate the weights for this feature across different examples. If there are multiple tokens of the same type (e.g. several nouns or negations), we take the maximum value. We disregard input sentences that do not contain a given feature. For each investigated feature, we calculate this aggregated attention score for each head in every layer and build a map in order to detect the heads potentially responsible for this feature. We then compare the obtained maps to the ones derived using the pre-trained BERT model. This comparison enables us to determine whether a particular feature is important for a specific task and whether it contributes to some tasks more than to others. Experiments ::: Attention to linguistic features ::: Results Contrary to our initial hypothesis that the vertical attention pattern may be motivated by linguistically meaningful features, we found that it is associated predominantly, if not exclusively, with attention to the [CLS] and [SEP] tokens (see Figure FIGREF32). Note that the absolute [SEP] weights for the SST-2 sentiment analysis task are greater than for the other tasks, which is explained by the fact that there is only one sentence in the model inputs, i.e. only one [SEP] token instead of two. There is also a clear tendency for earlier layers to pay attention to [CLS] and for later layers to [SEP], and this trend is consistent across all the tasks.
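The feature-attention score defined above (the total weight flowing into a token of interest, normalized by sequence length, with a maximum taken over multiple tokens of the same type) can be written compactly. The sketch below assumes one L×L attention map for a single head and a list of positions of the feature tokens; the function and variable names are illustrative, not the authors' code.

```python
# Sketch: aggregated attention score for a linguistic feature in one head.
# `attn_map` is an (L, L) matrix where attn_map[i, j] is the weight from token i to token j.
import torch

def feature_attention_score(attn_map: torch.Tensor, feature_positions: list) -> float:
    """Sum of attention received by a feature token from all tokens, normalized by
    sequence length; if several tokens carry the feature, take the maximum score."""
    if not feature_positions:
        raise ValueError("sentence does not contain the feature")
    seq_len = attn_map.size(0)
    scores = [attn_map[:, j].sum().item() / seq_len for j in feature_positions]
    return max(scores)

# Computing this score for every head in every layer, then averaging over examples,
# yields one map per feature (e.g. negation tokens, [SEP], direct objects).
```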
We did detect heads that paid increased attention (compared to pre-trained BERT) to nouns and direct objects of the main predicates (on the MRPC, RTE and QQP tasks) and to negation tokens (on the QNLI task), but the attention weights of such tokens were negligible compared to [CLS] and [SEP]. Therefore, we believe that the striped attention maps generally come from BERT pre-training tasks rather than from task-specific linguistic reasoning. Experiments ::: Token-to-token attention To complement the experiments in Sec. SECREF34 and SECREF25, in this section we investigate the attention patterns between tokens in the same sentence, i.e. whether any of the tokens are particularly important while a given token is being processed. We were interested specifically in the verb-subject relation and the noun-pronoun relation. Also, since BERT uses the representation of the [CLS] token in the last layer to make the prediction, we used the features from the experiment in Sec. SECREF34 in order to check whether they get higher attention weights while the model is processing the [CLS] token. Experiments ::: Token-to-token attention ::: Results Our token-to-token attention experiments for detecting heads that prioritize noun-pronoun and verb-subject links resulted in a set of potential head candidates that coincided with diagonally structured attention maps. We believe that this happened due to the inherent property of English syntax that dependent elements frequently appear close to each other, which makes it difficult to distinguish such relations from the previous/following-token attention coming from language model pre-training. Our investigation of the attention distribution for the [CLS] token in the output layer suggests that for most tasks, with the exception of STS-B, RTE and QNLI, [SEP] gets attended the most, as shown in fig:cls. Based on manual inspection, for the remaining tasks the greatest attention weights correspond to punctuation tokens, which are in a sense similar to [SEP]. Experiments ::: Disabling self-attention heads Since there does seem to be a certain degree of specialization across heads, we investigated the effect of disabling different heads in BERT on task performance. Since BERT relies heavily on the learned attention weights, we define disabling a head as modifying the attention values of that head to be the constant $a = \frac{1}{L}$ for every token in the input sentence, where $L$ is the length of the sentence. Thus, every token receives the same attention, effectively disabling the learned attention patterns while maintaining the information flow of the original model. Note that by using this framework, we can disable an arbitrary number of heads, ranging from a single head per model to a whole layer or multiple layers. Experiments ::: Disabling self-attention heads ::: Results Our experiments suggest that certain heads have a detrimental effect on the overall performance of BERT, and this trend holds for all the chosen tasks. Unexpectedly, disabling some heads leads not to a drop in accuracy, as one would expect, but to an increase in performance. This effect varies across tasks and datasets: while disabling some heads improves the results, disabling others hurts them. Importantly, however, across all tasks and datasets there are always some heads whose disabling leads to an increase in performance.
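The head-disabling operation defined above replaces a head's learned attention distribution with the uniform value 1/L. A faithful implementation requires a small modification inside BERT's self-attention module; the snippet below only sketches the tensor-level operation on an attention-probability tensor, with hypothetical names, and is not the authors' exact code.

```python
# Sketch: "disable" chosen heads by overwriting their attention with the uniform 1/L.
# `attn_probs` is the post-softmax tensor of one layer, shape (batch, num_heads, L, L);
# in practice this operation would be inserted into the attention module's forward pass.
import torch

def disable_heads(attn_probs: torch.Tensor, head_indices: list) -> torch.Tensor:
    seq_len = attn_probs.size(-1)
    attn_probs = attn_probs.clone()
    for h in head_indices:
        # every token attends equally to every token: constant a = 1 / L
        attn_probs[:, h] = 1.0 / seq_len
    return attn_probs

# Disabling a whole layer corresponds to passing head_indices = list(range(12)) for that
# layer; information still flows through the layer, but the learned pattern is removed.
```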
The gain from disabling a single head differs across tasks, ranging from a minimum absolute gain of 0.1% for STS-B to a maximum of 1.2% for MRPC (see fig:disableheadsall). In fact, for some tasks, such as MRPC and RTE, disabling a random head gives, on average, an increase in performance. Furthermore, disabling a whole layer, that is, all 12 heads in a given layer, also improves the results. fig:disablelayers shows the resulting model performance on the target GLUE tasks when different layers are disabled. Notably, disabling the first layer in the RTE task gives a significant boost, resulting in an absolute performance gain of 3.2%. However, the effects of this operation vary across tasks, and for QNLI and MNLI it produces a performance drop of up to 0.2%. Discussion In general, our results suggest that even the smaller base BERT model is significantly overparametrized. This is supported by the discovery of repeated self-attention patterns in different heads, as well as by the fact that disabling both single and multiple heads is not detrimental to model performance and in some cases even improves it. We found no evidence that attention patterns that are mappable onto core frame-semantic relations actually improve BERT's performance. The 2 out of 144 heads that seem to be “responsible” for these relations (see Section SECREF25) do not appear to be important in any of the GLUE tasks: disabling either one does not lead to a drop in accuracy. This implies that fine-tuned BERT does not rely on this piece of semantic information and prioritizes other features instead. For instance, we noticed that both the STS-B and RTE fine-tuned models rely on attention in the same pair of heads (head 1 in the fourth layer, and head 12 in the second layer), as shown in Figure FIGREF37. We manually checked the attention maps in those heads for a set of random inputs and established that both of them have high weights for words that appear in both sentences of the input examples. This most likely means that word-by-word comparison of the two sentences provides a solid strategy for making a classification prediction for STS-B and RTE. Unfortunately, we were not able to provide a conceptually similar interpretation of the heads important for other tasks. Conclusion In this work, we proposed a set of methods for analyzing the self-attention mechanisms of BERT, comparing attention patterns for the pre-trained and fine-tuned versions of BERT. Our most surprising finding is that, although attention is BERT's key underlying mechanism, the model can benefit from having attention in some heads “disabled”. Moreover, we demonstrated that there is redundancy in the information encoded by different heads and that the same patterns are consistently repeated regardless of the target task. We believe that these two findings together suggest a further direction for research on BERT interpretation, namely model pruning and finding an optimal sub-architecture that reduces this redundancy. Another direction for future work is to study self-attention patterns in a different language. We think this would make it possible to disentangle attention maps that potentially encode linguistic information from heads that use simple heuristics such as attending to the following/previous tokens.
disabling the first layer in the RTE task gives a significant boost, resulting in an absolute performance gain of 3.2%; however, effects of this operation vary across tasks
da4d25dd9de09d16168788bb02ad600f5b0b3ba4
da4d25dd9de09d16168788bb02ad600f5b0b3ba4_0
Q: In which certain heads was attention disabled in experiments?
single head, disabling a whole layer, that is, all 12 heads in a given layer
2870fbce43a3cf6daf982f720137c008b30c60dc
2870fbce43a3cf6daf982f720137c008b30c60dc_0
Q: What handcrafter features-of-interest are used? Text: Introduction Over the past year, models based on the Transformer architecture BIBREF0 have become the de-facto standard for state-of-the-art performance on many natural language processing (NLP) tasks BIBREF1, BIBREF2. Their key feature is the self-attention mechanism that provides an alternative to conventionally used recurrent neural networks (RNN). One of the most popular Transformer-based models is BERT, which learns text representations using a bi-directional Transformer encoder pre-trained on the language modeling task BIBREF2. BERT-based architectures have produced new state-of-the-art performance on a range of NLP tasks of different nature, domain, and complexity, including question answering, sequence tagging, sentiment analysis, and inference. State-of-the-art performance is usually obtained by fine-tuning the pre-trained model on the specific task. In particular, BERT-based models are currently dominating the leaderboards for SQuAD BIBREF3 and GLUE benchmarks BIBREF4. However, the exact mechanisms that contribute to the BERT's outstanding performance still remain unclear. We address this problem through selecting a set of linguistic features of interest and conducting a series of experiments that aim to provide insights about how well these features are captured by BERT. This paper makes the following contributions: We propose the methodology and offer the first detailed analysis of BERT's capacity to capture different kinds of linguistic information by encoding it in its self-attention weights. We present the evidence of BERT's overparametrization and suggest a counter-intuitive yet frustratingly simple way of improving its performance, showing absolute gains of up to 3.2%. Related work There have been several recent attempts to assess BERT's ability to capture structural properties of language. BIBREF5 demonstrated that BERT consistently assigns higher scores to the correct verb forms as opposed to the incorrect one in a masked language modeling task, suggesting some ability to model subject-verb agreement. BIBREF6 extended this work to using multiple layers and tasks, supporting the claim that BERT's intermediate layers capture rich linguistic information. On the other hand, BIBREF7 concluded that LSTMs generalize to longer sequences better, and are more robust with respect to agreement distractors, compared to Transformers. BIBREF8 investigated the transferability of contextualized word representations to a number of probing tasks requiring linguistic knowledge. Their findings suggest that (a) the middle layers of Transformer-based architectures are the most transferable to other tasks, and (b) higher layers of Transformers are not as task specific as the ones of RNNs. BIBREF9 argued that models using self-attention outperform CNN- and RNN-based models on a word sense disambiguation task due to their ability to extract semantic features from text. Our work contributes to the above discussion, but rather than examining representations extracted from different layers, we focus on the understanding of the self-attention mechanism itself, since it is the key feature of Transformer-based models. Another research direction that is relevant to our work is neural network pruning. BIBREF10 showed that widely used complex architectures suffer from overparameterization, and can be significantly reduced in size without a loss in performance. 
BIBREF5 observed that the smaller version of BERT achieves better scores on a number of syntax-testing experiments than the larger one. BIBREF11 questioned the necessity of computation-heavy neural networks, proving that a simple yet carefully tuned BiLSTM without attention achieves the best or at least competitive results compared to more complex architectures on the document classification task. BIBREF12 presented more evidence of unnecessary complexity of the self-attention mechanism, and proposed a more lightweight and scalable dynamic convolution-based architecture that outperforms the self-attention baseline. These studies suggest a potential direction for future research, and are in good accordance with our observations. Methodology We pose the following research questions: What are the common attention patterns, how do they change during fine-tuning, and how does that impact the performance on a given task? (Sec. SECREF17, SECREF30) What linguistic knowledge is encoded in self-attention weights of the fine-tuned models and what portion of it comes from the pre-trained BERT? (Sec. SECREF25, SECREF34, SECREF36) How different are the self-attention patterns of different heads, and how important are they for a given task? (Sec. SECREF39) The answers to these questions come from a series of experiments with the basic pre-trained or the fine-tuned BERT models, as will be discussed below. All the experiments with the pre-trained BERT were conducted using the model provided with the PyTorch implementation of BERT (bert-base-uncased, 12-layer, 768-hidden, 12-heads, 110M parameters). We chose this smaller version of BERT because it shows competitive, if not better, performance while having fewer layers and heads, which makes it more interpretable. We use the following subset of GLUE tasks BIBREF4 for fine-tuning: MRPC: the Microsoft Research Paraphrase Corpus BIBREF13 STS-B: the Semantic Textual Similarity Benchmark BIBREF14 SST-2: the Stanford Sentiment Treebank, two-way classification BIBREF15 QQP: the Quora Question Pairs dataset RTE: the Recognizing Textual Entailment datasets QNLI: Question-answering NLI based on the Stanford Question Answering Dataset BIBREF3 MNLI: the Multi-Genre Natural Language Inference Corpus, matched section BIBREF16 Please refer to the original GLUE paper for details on the QQP and RTE datasets BIBREF4. We excluded two tasks: CoLa and the Winograd Schema Challenge. The latter is excluded due to the small size of the dataset. As for CoLa (the task of predicting linguistic acceptability judgments), GLUE authors report that the human performance is only 66.4, which is explained by the problems with the underlying methodology BIBREF17. Note also that CoLa is not included in the upcoming version of GLUE BIBREF18. All fine-tuning experiments follow the parameters reported in the original study (a batch size of 32 and 3 epochs, see devlin2018bert). In all these experiments, for a given input, we extract self-attention weights for each head in every layer. This results in a 2D float array of shape $L\times L$, where $L$ is the length of an input sequence. We will refer to such arrays as self-attention maps. Analysis of individual self-attention maps allows us to determine which target tokens are attended to the most as the input is processed token by token. 
We use these experiments to analyze how BERT processes different kinds of linguistic information, including the processing of different parts of speech (nouns, pronouns, and verbs), syntactic roles (objects, subjects), semantic relations, and negation tokens. Experiments In this section, we present the experiments conducted to address the above research questions. Experiments ::: BERT's self-attention patterns Manual inspection of self-attention maps for both basic pre-trained and fine-tuned BERT models suggested that there is a limited set of self-attention maps types that are repeatedly encoded across different heads. Consistently with previous observations, we identified five frequently occurring patterns, examples of which are shown in fig:atttypes: Vertical: mainly corresponds to attention to special BERT tokens [CLS] and [SEP]; Diagonal: formed by the attention to the previous/following tokens; Vertical+Diagonal: a mix of the previous two types, Block: intra-sentence attention for the tasks with two distinct sentences (such as, for example, RTE or MRPC), Heterogeneous: highly variable depending on the specific input and cannot be characterized by a distinct structure. Whereas the attention to the special tokens is important for cross-sentence reasoning, and the attention to the previous/following token comes from language model pre-training, we hypothesize that the last of the listed types is more likely to capture interpretable linguistic features, necessary for language understanding. To get a rough estimate of the percentage of attention heads that may capture linguistically interpretable information, we manually annotated around 400 sample self-attention maps as belonging to one of the five classes. The self-attention maps were obtained by feeding random input examples from selected tasks into the corresponding fine-tuned BERT model. This produced a somewhat unbalanced dataset, in which the “Vertical” class accounted for 30% of all samples. We then trained a convolutional neural network with 8 convolutional layers and ReLU activation functions to classify input maps into one of these classes. This model achieved the F1 score of 0.86 on the annotated dataset. We used this classifier to estimate the proportion of different self-attention patterns for the target GLUE tasks using up to 1000 examples (where available) from each validation set. Experiments ::: BERT's self-attention patterns ::: Results fig:attentionbydataset shows that the self-attention map types described above are consistently repeated across different heads and tasks. While a large portion of encoded information corresponds to attention to the previous/following token, to the special tokens, or a mixture of the two (the first three classes), the estimated upper bound on all heads in the “Heterogeneous” category (i.e. the ones that could be informative) varies from 32% (MRPC) to 61% (QQP) depending on the task. We would like to emphasize that this only gives the upper bound on the percentage of attention heads that could potentially capture meaningful structural information beyond adjacency and separator tokens. Experiments ::: Relation-specific heads in BERT In this experiment, our goal was to understand whether different syntactic and semantic relations are captured by self-attention patterns. While a large number of such relations could be investigated, we chose to examine semantic role relations defined in frame semantics, since they can be viewed as being at the intersection of syntax and semantics. 
Specifically, we focused on whether BERT captures FrameNet's relations between frame-evoking lexical units (predicates) and core frame elements BIBREF19, and whether the links between them produce higher attention weights in certain specific heads. We used pre-trained BERT in these experiments. The data for this experiment comes from FrameNet BIBREF19, a database that contains frame annotations for example sentences for different lexical units. Frame elements correspond to semantic roles for a given frame, for example, “buyer", “seller", and “goods” for the “Commercial_transaction" frame evoked by the words “sell” and “spend” or “topic” and “text” for the “Scrutiny” semantic frame evoked by the verb “address”. fig:framenet shows an example of such annotation. We extracted sample sentences for every lexical unit in the database and identified the corresponding core frame elements. Annotated elements in FrameNet may be rather long, so we considered only the sentences with frame elements of 3 tokens or less. Since each sentences is annotated only for one frame, semantic links from other frames can exist between unmarked elements. We therefore filter out all the sentences longer than 12 tokens, since shorter sentences are less likely to evoke multiple frames. To establish whether BERT attention captures semantic relations that do not simply correspond to the previous/following token, we exclude sentences where the linked objects are less than two tokens apart. This leaves us with 473 annotated sentences. For each of these sentences, we obtain pre-trained BERT's attention weights for each of the 144 heads. For every head, we return the maximum absolute attention weight among those token pairs that correspond to the annotated semantic link contained within a given sentence. We then average the derived scores over all the collected examples. This strategy allows us to identify the heads that prioritize the features correlated with frame-semantic relations within a sentence. Experiments ::: Relation-specific heads in BERT ::: Results The heatmap of averaged attention scores over all collected examples (fig:framenetresults) suggests that 2 out of 144 heads tend to attend to the parts of the sentence that FrameNet annotators identified as core elements of the same frame. fig:framenetresults shows an example of this attention pattern for these two heads. Both show high attention weight for “he” while processing “agitated” in the sentence “He was becoming agitated" (the frame “Emotion_directed”). Experiments ::: Change in self-attention patterns after fine-tuning Fine-tuning has a huge effect on performance, and this section attempts to find out why. To study how attention per head changes on average for each of the target GLUE tasks, we calculate cosine similarity between pre-trained and fine-tuned BERT's flattened arrays of attention weights. We average the derived similarities over all the development set examples. To evaluate contribution of pre-trained BERT to overall performance on the tasks, we consider two configurations of weights initialization, namely, pre-trained BERT weights and weights randomly sampled from normal distribution. Experiments ::: Change in self-attention patterns after fine-tuning ::: Results fig:cosine shows that for all the tasks except QQP, it is the last two layers that undergo the largest changes compared to the pre-trained BERT model. 
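The per-layer, per-head similarities behind fig:cosine can be computed in a few lines once attention maps have been extracted for both model versions on the same development inputs. The sketch below is an illustrative reimplementation, not the authors' code; the array shapes are assumptions consistent with the map-extraction step above.

```python
import numpy as np

def attention_cosine_similarity(pretrained_maps, finetuned_maps):
    """
    Both arguments: lists over dev-set examples of arrays with shape
    (n_layers, n_heads, L, L). Returns an (n_layers, n_heads) array of
    cosine similarities between flattened attention maps, averaged
    over all examples.
    """
    sims = []
    for pre, fin in zip(pretrained_maps, finetuned_maps):
        flat = pre.shape[-1] * pre.shape[-2]
        a = pre.reshape(pre.shape[0], pre.shape[1], flat)
        b = fin.reshape(fin.shape[0], fin.shape[1], flat)
        num = (a * b).sum(-1)
        den = np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1)
        sims.append(num / den)
    return np.mean(sims, axis=0)

# toy usage: 3 examples, 12 layers x 12 heads, length-9 inputs
pre = [np.random.rand(12, 12, 9, 9) for _ in range(3)]
fin = [np.random.rand(12, 12, 9, 9) for _ in range(3)]
print(attention_cosine_similarity(pre, fin).shape)   # (12, 12)
```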
At the same time, tab:glue-results shows that fine-tuned BERT outperforms pre-trained BERT by a significant margin on all the tasks (with an average of 35.9 points of absolute difference). This leads us to conclude that the last two layers encode task-specific features that are attributed to the gain of scores, while earlier layers capture more fundamental and low-level information used in fine-tuned models. Randomly initialized BERT consistently produces lower scores than the ones achieved with pre-trained BERT. In fact, for some tasks (STS-B and QNLI), initialization with random weights gives worse performance that that of pre-trained BERT alone without fine-tuning. This suggests that pre-trained BERT does indeed contain linguistic knowledge that is helpful for solving these GLUE tasks. These results are consistent with similar studies, e.g., BIBREF20's results on fine-tuning a convolutional neural network pre-trained on ImageNet or BIBREF21's results on transfer learning for medical natural language inference. Experiments ::: Attention to linguistic features In this experiment, we investigate whether fine-tuning BERT for a given task creates self-attention patterns which emphasize specific linguistic features. In this case, certain kinds of tokens may get high attention weights from all the other tokens in the sentence, producing vertical stripes on the corresponding attention maps (fig:atttypes). We tested this hypothesis by checking whether there are vertical stripe patterns corresponding to certain linguistically interpretable features, and to what extent such features are relevant for solving a given task. In particular, we investigated attention to nouns, verbs, pronouns, subjects, objects, and negation words, and special BERT tokens across the tasks. For every head, we compute the sum of self-attention weights assigned to the token of interest from each input token. Since the weights depend on the number of tokens in the input sequence, this sum is normalized by sequence length. This allows us to aggregate the weights for this feature across different examples. If there are multiple tokens of the same type (e.g. several nouns or negations), we take the maximum value. We disregard input sentences that do not contain a given feature. For each investigated feature, we calculate this aggregated attention score for each head in every layer and build a map in order to detect the heads potentially responsible for this feature. We then compare the obtained maps to the ones derived using the pre-trained BERT model. This comparison enables us to determine if a particular feature is important for a specific task and whether it contributes to some tasks more than to others. Experiments ::: Attention to linguistic features ::: Results Contrary to our initial hypothesis that the vertical attention pattern may be motivated by linguistically meaningful features, we found that it is associated predominantly, if not exclusively, with attention to [CLS] and [SEP] tokens (see Figure FIGREF32. Note that the absolute [SEP] weights for the SST-2 sentiment analysis task are greater than for other tasks, which is explained by the fact that there is only one sentence in the model inputs, i.e. only one [SEP] token instead of two. There is also a clear tendency for earlier layers to pay attention to [CLS] and for later layers to [SEP], and this trend is consistent across all the tasks. 
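The per-feature aggregation described above (attention received by the token of interest, summed over source tokens, normalized by sequence length, with a maximum taken over several tokens of the same type) can be sketched as follows; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def feature_attention_score(attn, feature_positions):
    """
    attn: (n_layers, n_heads, L, L) self-attention weights for one example,
          where attn[l, h, i, j] is the weight that token i puts on token j.
    feature_positions: indices of the tokens of interest in this example
          (e.g. all negation words, or the [SEP] positions).
    Returns an (n_layers, n_heads) score map for this example, or None if
    the example does not contain the feature and is disregarded.
    """
    if len(feature_positions) == 0:
        return None
    L = attn.shape[-1]
    # attention received by each candidate token, summed over source tokens
    # and normalized by sequence length
    received = attn[:, :, :, feature_positions].sum(axis=2) / L
    # several tokens of the same type -> take the maximum
    return received.max(axis=-1)              # (n_layers, n_heads)

# Scores are then averaged over examples and compared against the map
# obtained from the pre-trained (not fine-tuned) model.
example = np.random.rand(12, 12, 20, 20)
print(feature_attention_score(example, [3, 17]).shape)   # (12, 12)
```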
We did detect heads that paid increased (compared to the pre-trained BERT) attention to nouns and direct objects of the main predicates (on the MRPC, RTE and QQP tasks), and negation tokens (on the QNLI task), but the attention weights of such tokens were negligible compared to [CLS] and [SEP]. Therefore, we believe that the striped attention maps generally come from BERT pre-training tasks rather than from task-specific linguistic reasoning. Experiments ::: Token-to-token attention To complement the experiments in Sec. SECREF34 and SECREF25, in this section, we investigate the attention patterns between tokens in the same sentence, i.e. whether any of the tokens are particularly important while a given token is being processed. We were interested specifically in the verb-subject relation and the noun-pronoun relation. Also, since BERT uses the representation of the [CLS] token in the last layer to make the prediction, we used the features from the experiment in Sec. SECREF34 in order to check if they get higher attention weights while the model is processing the [CLS] token. Experiments ::: Token-to-token attention ::: Results Our token-to-token attention experiments for detecting heads that prioritize noun-pronoun and verb-subject links resulted in a set of potential head candidates that coincided with diagonally structured attention maps. We believe that this happened due to the inherent property of English syntax where the dependent elements frequently appear close to each other, so it is difficult to distinguish such relations from the previous/following token attention coming from language model pre-training. Our investigation of attention distribution for the [CLS] token in the output layer suggests that for most tasks, with the exception of STS-B, RTE and QNLI, the [SEP] token gets attended the most, as shown in fig:cls. Based on manual inspection, for the remaining tasks, the greatest attention weights correspond to the punctuation tokens, which are in a sense similar to [SEP]. Experiments ::: Disabling self-attention heads Since there does seem to be a certain degree of specialization for different heads, we investigated the effects of disabling different heads in BERT and the resulting impact on task performance. Since BERT relies heavily on the learned attention weights, we define disabling a head as modifying the attention values of a head to be constant $a = \frac{1}{L}$ for every token in the input sentence, where $L$ is the length of the sentence. Thus, every token receives the same attention, effectively disabling the learned attention patterns while maintaining the information flow of the original model. Note that by using this framework, we can disable an arbitrary number of heads, ranging from a single head per model to a whole layer or multiple layers. Experiments ::: Disabling self-attention heads ::: Results Our experiments suggest that certain heads have a detrimental effect on the overall performance of BERT, and this trend holds for all the chosen tasks. Unexpectedly, disabling some heads leads not to a drop in accuracy, as one would expect, but to an increase in performance. The effect varies across tasks and datasets: while disabling some heads improves the results, disabling others hurts them. Importantly, however, for every task and dataset there are heads whose removal leads to an increase in performance. 
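Head disabling amounts to replacing a head's attention distribution with the uniform value 1/L. A minimal tensor-level sketch is given below; in practice this would be wired into the model's self-attention forward pass (for example by patching the attention-probability computation), which is omitted here.

```python
import torch

def disable_heads(attention_probs, heads_to_disable):
    """
    attention_probs: (batch, n_heads, L, L) softmax-normalized weights
                     produced inside one self-attention layer.
    heads_to_disable: iterable of head indices within this layer.
    Returns a copy in which each disabled head attends uniformly (1/L),
    so the learned pattern is erased but information still flows.
    """
    probs = attention_probs.clone()
    L = probs.shape[-1]
    for h in heads_to_disable:
        probs[:, h] = 1.0 / L
    return probs

# toy check: disable head 3 of one layer for a batch of length-7 inputs
probs = torch.softmax(torch.randn(2, 12, 7, 7), dim=-1)
uniform = disable_heads(probs, [3])
print(uniform[0, 3, 0])          # every entry equals 1/7
print(uniform[0, 0, 0].sum())    # untouched heads still sum to 1
```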
The gain from disabling a single head is different for different tasks, ranging from the minimum absolute gain of 0.1% for STS-B, to the maximum of 1.2% for MRPC (see fig:disableheadsall). In fact, for some tasks, such as MRPC and RTE, disabling a random head gives, on average, an increase in performance. Furthermore, disabling a whole layer, that is, all 12 heads in a given layer, also improves the results. fig:disablelayers shows the resulting model performance on the target GLUE tasks when different layers are disabled. Notably, disabling the first layer in the RTE task gives a significant boost, resulting in an absolute performance gain of 3.2%. However, effects of this operation vary across tasks, and for QNLI and MNLI, it produces a performance drop of up to -0.2%. Discussion In general, our results suggest that even the smaller base BERT model is significantly overparametrized. This is supported by the discovery of repeated self-attention patterns in different heads, as well as the fact that disabling both single and multiple heads is not detrimental to model performance and in some cases even improves it. We found no evidence that attention patterns that are mappable onto core frame-semantic relations actually improve BERT's performance. 2 out of 144 heads that seem to be “responsible" for these relations (see Section SECREF25) do not appear to be important in any of the GLUE tasks: disabling of either one does not lead to a drop of accuracy. This implies that fine-tuned BERT does not rely on this piece of semantic information and prioritizes other features instead. For instance, we noticed that both STS-B and RTE fine-tuned models rely on attention in the same pair of heads (head 1 in the fourth layer, and head 12 in the second layer), as shown in Figure FIGREF37. We manually checked the attention maps in those heads for a set of random inputs, and established that both of them have high weights for words that appear in both sentences of the input examples. This most likely means that word-by-word comparison of the two sentences provides a solid strategy of making a classification prediction for STS-B and RTE. Unfortunately, we were not able to provide a conceptually similar interpretation of heads important for other tasks. Conclusion In this work, we proposed a set of methods for analyzing self-attention mechanisms of BERT, comparing attention patterns for the pre-trained and fine-tuned versions of BERT. Our most surprising finding is that, although attention is the key BERT's underlying mechanism, the model can benefit from attention "disabling". Moreover, we demonstrated that there is redundancy in the information encoded by different heads and the same patterns get consistently repeated regardless of the target task. We believe that these two findings together suggest a further direction for research on BERT interpretation, namely, model pruning and finding an optimal sub-architecture reducing data repetition. Another direction for future work is to study self-attention patterns in a different language. We think that it would allow to disentangle attention maps potentially encoding linguistic information and heads that use simple heuristics like attending to the following/previous tokens.
nouns, verbs, pronouns, subjects, objects, negation words, special BERT tokens
65b579b2c62982e2ff154c8160288c2950d509f2
65b579b2c62982e2ff154c8160288c2950d509f2_0
Q: What subset of GLUE tasks is used? Text: Introduction Over the past year, models based on the Transformer architecture BIBREF0 have become the de-facto standard for state-of-the-art performance on many natural language processing (NLP) tasks BIBREF1, BIBREF2. Their key feature is the self-attention mechanism that provides an alternative to conventionally used recurrent neural networks (RNN). One of the most popular Transformer-based models is BERT, which learns text representations using a bi-directional Transformer encoder pre-trained on the language modeling task BIBREF2. BERT-based architectures have produced new state-of-the-art performance on a range of NLP tasks of different nature, domain, and complexity, including question answering, sequence tagging, sentiment analysis, and inference. State-of-the-art performance is usually obtained by fine-tuning the pre-trained model on the specific task. In particular, BERT-based models are currently dominating the leaderboards for SQuAD BIBREF3 and GLUE benchmarks BIBREF4. However, the exact mechanisms that contribute to the BERT's outstanding performance still remain unclear. We address this problem through selecting a set of linguistic features of interest and conducting a series of experiments that aim to provide insights about how well these features are captured by BERT. This paper makes the following contributions: We propose the methodology and offer the first detailed analysis of BERT's capacity to capture different kinds of linguistic information by encoding it in its self-attention weights. We present the evidence of BERT's overparametrization and suggest a counter-intuitive yet frustratingly simple way of improving its performance, showing absolute gains of up to 3.2%. Related work There have been several recent attempts to assess BERT's ability to capture structural properties of language. BIBREF5 demonstrated that BERT consistently assigns higher scores to the correct verb forms as opposed to the incorrect one in a masked language modeling task, suggesting some ability to model subject-verb agreement. BIBREF6 extended this work to using multiple layers and tasks, supporting the claim that BERT's intermediate layers capture rich linguistic information. On the other hand, BIBREF7 concluded that LSTMs generalize to longer sequences better, and are more robust with respect to agreement distractors, compared to Transformers. BIBREF8 investigated the transferability of contextualized word representations to a number of probing tasks requiring linguistic knowledge. Their findings suggest that (a) the middle layers of Transformer-based architectures are the most transferable to other tasks, and (b) higher layers of Transformers are not as task specific as the ones of RNNs. BIBREF9 argued that models using self-attention outperform CNN- and RNN-based models on a word sense disambiguation task due to their ability to extract semantic features from text. Our work contributes to the above discussion, but rather than examining representations extracted from different layers, we focus on the understanding of the self-attention mechanism itself, since it is the key feature of Transformer-based models. Another research direction that is relevant to our work is neural network pruning. BIBREF10 showed that widely used complex architectures suffer from overparameterization, and can be significantly reduced in size without a loss in performance. 
BIBREF5 observed that the smaller version of BERT achieves better scores on a number of syntax-testing experiments than the larger one. BIBREF11 questioned the necessity of computation-heavy neural networks, proving that a simple yet carefully tuned BiLSTM without attention achieves the best or at least competitive results compared to more complex architectures on the document classification task. BIBREF12 presented more evidence of unnecessary complexity of the self-attention mechanism, and proposed a more lightweight and scalable dynamic convolution-based architecture that outperforms the self-attention baseline. These studies suggest a potential direction for future research, and are in good accordance with our observations. Methodology We pose the following research questions: What are the common attention patterns, how do they change during fine-tuning, and how does that impact the performance on a given task? (Sec. SECREF17, SECREF30) What linguistic knowledge is encoded in self-attention weights of the fine-tuned models and what portion of it comes from the pre-trained BERT? (Sec. SECREF25, SECREF34, SECREF36) How different are the self-attention patterns of different heads, and how important are they for a given task? (Sec. SECREF39) The answers to these questions come from a series of experiments with the basic pre-trained or the fine-tuned BERT models, as will be discussed below. All the experiments with the pre-trained BERT were conducted using the model provided with the PyTorch implementation of BERT (bert-base-uncased, 12-layer, 768-hidden, 12-heads, 110M parameters). We chose this smaller version of BERT because it shows competitive, if not better, performance while having fewer layers and heads, which makes it more interpretable. We use the following subset of GLUE tasks BIBREF4 for fine-tuning: MRPC: the Microsoft Research Paraphrase Corpus BIBREF13 STS-B: the Semantic Textual Similarity Benchmark BIBREF14 SST-2: the Stanford Sentiment Treebank, two-way classification BIBREF15 QQP: the Quora Question Pairs dataset RTE: the Recognizing Textual Entailment datasets QNLI: Question-answering NLI based on the Stanford Question Answering Dataset BIBREF3 MNLI: the Multi-Genre Natural Language Inference Corpus, matched section BIBREF16 Please refer to the original GLUE paper for details on the QQP and RTE datasets BIBREF4. We excluded two tasks: CoLa and the Winograd Schema Challenge. The latter is excluded due to the small size of the dataset. As for CoLa (the task of predicting linguistic acceptability judgments), GLUE authors report that the human performance is only 66.4, which is explained by the problems with the underlying methodology BIBREF17. Note also that CoLa is not included in the upcoming version of GLUE BIBREF18. All fine-tuning experiments follow the parameters reported in the original study (a batch size of 32 and 3 epochs, see devlin2018bert). In all these experiments, for a given input, we extract self-attention weights for each head in every layer. This results in a 2D float array of shape $L\times L$, where $L$ is the length of an input sequence. We will refer to such arrays as self-attention maps. Analysis of individual self-attention maps allows us to determine which target tokens are attended to the most as the input is processed token by token. 
We use these experiments to analyze how BERT processes different kinds of linguistic information, including the processing of different parts of speech (nouns, pronouns, and verbs), syntactic roles (objects, subjects), semantic relations, and negation tokens. Experiments In this section, we present the experiments conducted to address the above research questions. Experiments ::: BERT's self-attention patterns Manual inspection of self-attention maps for both basic pre-trained and fine-tuned BERT models suggested that there is a limited set of self-attention maps types that are repeatedly encoded across different heads. Consistently with previous observations, we identified five frequently occurring patterns, examples of which are shown in fig:atttypes: Vertical: mainly corresponds to attention to special BERT tokens [CLS] and [SEP]; Diagonal: formed by the attention to the previous/following tokens; Vertical+Diagonal: a mix of the previous two types, Block: intra-sentence attention for the tasks with two distinct sentences (such as, for example, RTE or MRPC), Heterogeneous: highly variable depending on the specific input and cannot be characterized by a distinct structure. Whereas the attention to the special tokens is important for cross-sentence reasoning, and the attention to the previous/following token comes from language model pre-training, we hypothesize that the last of the listed types is more likely to capture interpretable linguistic features, necessary for language understanding. To get a rough estimate of the percentage of attention heads that may capture linguistically interpretable information, we manually annotated around 400 sample self-attention maps as belonging to one of the five classes. The self-attention maps were obtained by feeding random input examples from selected tasks into the corresponding fine-tuned BERT model. This produced a somewhat unbalanced dataset, in which the “Vertical” class accounted for 30% of all samples. We then trained a convolutional neural network with 8 convolutional layers and ReLU activation functions to classify input maps into one of these classes. This model achieved the F1 score of 0.86 on the annotated dataset. We used this classifier to estimate the proportion of different self-attention patterns for the target GLUE tasks using up to 1000 examples (where available) from each validation set. Experiments ::: BERT's self-attention patterns ::: Results fig:attentionbydataset shows that the self-attention map types described above are consistently repeated across different heads and tasks. While a large portion of encoded information corresponds to attention to the previous/following token, to the special tokens, or a mixture of the two (the first three classes), the estimated upper bound on all heads in the “Heterogeneous” category (i.e. the ones that could be informative) varies from 32% (MRPC) to 61% (QQP) depending on the task. We would like to emphasize that this only gives the upper bound on the percentage of attention heads that could potentially capture meaningful structural information beyond adjacency and separator tokens. Experiments ::: Relation-specific heads in BERT In this experiment, our goal was to understand whether different syntactic and semantic relations are captured by self-attention patterns. While a large number of such relations could be investigated, we chose to examine semantic role relations defined in frame semantics, since they can be viewed as being at the intersection of syntax and semantics. 
Specifically, we focused on whether BERT captures FrameNet's relations between frame-evoking lexical units (predicates) and core frame elements BIBREF19, and whether the links between them produce higher attention weights in certain specific heads. We used pre-trained BERT in these experiments. The data for this experiment comes from FrameNet BIBREF19, a database that contains frame annotations for example sentences for different lexical units. Frame elements correspond to semantic roles for a given frame, for example, “buyer", “seller", and “goods” for the “Commercial_transaction" frame evoked by the words “sell” and “spend” or “topic” and “text” for the “Scrutiny” semantic frame evoked by the verb “address”. fig:framenet shows an example of such annotation. We extracted sample sentences for every lexical unit in the database and identified the corresponding core frame elements. Annotated elements in FrameNet may be rather long, so we considered only the sentences with frame elements of 3 tokens or less. Since each sentences is annotated only for one frame, semantic links from other frames can exist between unmarked elements. We therefore filter out all the sentences longer than 12 tokens, since shorter sentences are less likely to evoke multiple frames. To establish whether BERT attention captures semantic relations that do not simply correspond to the previous/following token, we exclude sentences where the linked objects are less than two tokens apart. This leaves us with 473 annotated sentences. For each of these sentences, we obtain pre-trained BERT's attention weights for each of the 144 heads. For every head, we return the maximum absolute attention weight among those token pairs that correspond to the annotated semantic link contained within a given sentence. We then average the derived scores over all the collected examples. This strategy allows us to identify the heads that prioritize the features correlated with frame-semantic relations within a sentence. Experiments ::: Relation-specific heads in BERT ::: Results The heatmap of averaged attention scores over all collected examples (fig:framenetresults) suggests that 2 out of 144 heads tend to attend to the parts of the sentence that FrameNet annotators identified as core elements of the same frame. fig:framenetresults shows an example of this attention pattern for these two heads. Both show high attention weight for “he” while processing “agitated” in the sentence “He was becoming agitated" (the frame “Emotion_directed”). Experiments ::: Change in self-attention patterns after fine-tuning Fine-tuning has a huge effect on performance, and this section attempts to find out why. To study how attention per head changes on average for each of the target GLUE tasks, we calculate cosine similarity between pre-trained and fine-tuned BERT's flattened arrays of attention weights. We average the derived similarities over all the development set examples. To evaluate contribution of pre-trained BERT to overall performance on the tasks, we consider two configurations of weights initialization, namely, pre-trained BERT weights and weights randomly sampled from normal distribution. Experiments ::: Change in self-attention patterns after fine-tuning ::: Results fig:cosine shows that for all the tasks except QQP, it is the last two layers that undergo the largest changes compared to the pre-trained BERT model. 
At the same time, tab:glue-results shows that fine-tuned BERT outperforms pre-trained BERT by a significant margin on all the tasks (with an average of 35.9 points of absolute difference). This leads us to conclude that the last two layers encode task-specific features that are attributed to the gain of scores, while earlier layers capture more fundamental and low-level information used in fine-tuned models. Randomly initialized BERT consistently produces lower scores than the ones achieved with pre-trained BERT. In fact, for some tasks (STS-B and QNLI), initialization with random weights gives worse performance that that of pre-trained BERT alone without fine-tuning. This suggests that pre-trained BERT does indeed contain linguistic knowledge that is helpful for solving these GLUE tasks. These results are consistent with similar studies, e.g., BIBREF20's results on fine-tuning a convolutional neural network pre-trained on ImageNet or BIBREF21's results on transfer learning for medical natural language inference. Experiments ::: Attention to linguistic features In this experiment, we investigate whether fine-tuning BERT for a given task creates self-attention patterns which emphasize specific linguistic features. In this case, certain kinds of tokens may get high attention weights from all the other tokens in the sentence, producing vertical stripes on the corresponding attention maps (fig:atttypes). We tested this hypothesis by checking whether there are vertical stripe patterns corresponding to certain linguistically interpretable features, and to what extent such features are relevant for solving a given task. In particular, we investigated attention to nouns, verbs, pronouns, subjects, objects, and negation words, and special BERT tokens across the tasks. For every head, we compute the sum of self-attention weights assigned to the token of interest from each input token. Since the weights depend on the number of tokens in the input sequence, this sum is normalized by sequence length. This allows us to aggregate the weights for this feature across different examples. If there are multiple tokens of the same type (e.g. several nouns or negations), we take the maximum value. We disregard input sentences that do not contain a given feature. For each investigated feature, we calculate this aggregated attention score for each head in every layer and build a map in order to detect the heads potentially responsible for this feature. We then compare the obtained maps to the ones derived using the pre-trained BERT model. This comparison enables us to determine if a particular feature is important for a specific task and whether it contributes to some tasks more than to others. Experiments ::: Attention to linguistic features ::: Results Contrary to our initial hypothesis that the vertical attention pattern may be motivated by linguistically meaningful features, we found that it is associated predominantly, if not exclusively, with attention to [CLS] and [SEP] tokens (see Figure FIGREF32. Note that the absolute [SEP] weights for the SST-2 sentiment analysis task are greater than for other tasks, which is explained by the fact that there is only one sentence in the model inputs, i.e. only one [SEP] token instead of two. There is also a clear tendency for earlier layers to pay attention to [CLS] and for later layers to [SEP], and this trend is consistent across all the tasks. 
We did detect heads that paid increased (compared to the pre-trained BERT) attention to nouns and direct objects of the main predicates (on the MRPC, RTE and QQP tasks), and negation tokens (on the QNLI task), but the attention weights of such tokens were negligible compared to [CLS] and [SEP]. Therefore, we believe that the striped attention maps generally come from BERT pre-training tasks rather than from task-specific linguistic reasoning. Experiments ::: Token-to-token attention To complement the experiments in Sec. SECREF34 and SECREF25, in this section, we investigate the attention patterns between tokens in the same sentence, i.e. whether any of the tokens are particularly important while a given token is being processed. We were interested specifically in the verb-subject relation and the noun-pronoun relation. Also, since BERT uses the representation of the [CLS] token in the last layer to make the prediction, we used the features from the experiment in Sec. SECREF34 in order to check if they get higher attention weights while the model is processing the [CLS] token. Experiments ::: Token-to-token attention ::: Results Our token-to-token attention experiments for detecting heads that prioritize noun-pronoun and verb-subject links resulted in a set of potential head candidates that coincided with diagonally structured attention maps. We believe that this happened due to the inherent property of English syntax where the dependent elements frequently appear close to each other, so it is difficult to distinguish such relations from the previous/following token attention coming from language model pre-training. Our investigation of attention distribution for the [CLS] token in the output layer suggests that for most tasks, with the exception of STS-B, RTE and QNLI, the [SEP] gets attended the most, as shown in fig:cls. Based on manual inspection, for the mentioned remaining tasks, the greatest attention weights correspond to the punctuation tokens, which are in a sense similar to [SEP]. Experiments ::: Disabling self-attention heads Since there does seem to be a certain degree of specialization for different heads, we investigated the effects of disabling different heads in BERT and the resulting effects on task performance. Since BERT relies heavily on the learned attention weights, we define disabling a head as modifying the attention values of a head to be constant $a = \frac{1}{L}$ for every token in the input sentence, where $L$ is the length of the sentence. Thus, every token receives the same attention, effectively disabling the learned attention patterns while maintaining the information flow of the original model. Note that by using this framework, we can disable an arbitrary number of heads, ranging from a single head per model to the whole layer or multiple layers. Experiments ::: Disabling self-attention heads ::: Results Our experiments suggest that certain heads have a detrimental effect on the overall performance of BERT, and this trend holds for all the chosen tasks. Unexpectedly, disabling some heads leads not to a drop in accuracy, as one would expect, but to an increase in performance. This is effect is different across tasks and datasets. While disabling some heads improves the results, disabling the others hurts the results. However, it is important to note that across all tasks and datasets, disabling some heads leads to an increase in performance. 
The gain from disabling a single head is different for different tasks, ranging from the minimum absolute gain of 0.1% for STS-B, to the maximum of 1.2% for MRPC (see fig:disableheadsall). In fact, for some tasks, such as MRPC and RTE, disabling a random head gives, on average, an increase in performance. Furthermore, disabling a whole layer, that is, all 12 heads in a given layer, also improves the results. fig:disablelayers shows the resulting model performance on the target GLUE tasks when different layers are disabled. Notably, disabling the first layer in the RTE task gives a significant boost, resulting in an absolute performance gain of 3.2%. However, effects of this operation vary across tasks, and for QNLI and MNLI, it produces a performance drop of up to -0.2%. Discussion In general, our results suggest that even the smaller base BERT model is significantly overparametrized. This is supported by the discovery of repeated self-attention patterns in different heads, as well as the fact that disabling both single and multiple heads is not detrimental to model performance and in some cases even improves it. We found no evidence that attention patterns that are mappable onto core frame-semantic relations actually improve BERT's performance. 2 out of 144 heads that seem to be “responsible" for these relations (see Section SECREF25) do not appear to be important in any of the GLUE tasks: disabling of either one does not lead to a drop of accuracy. This implies that fine-tuned BERT does not rely on this piece of semantic information and prioritizes other features instead. For instance, we noticed that both STS-B and RTE fine-tuned models rely on attention in the same pair of heads (head 1 in the fourth layer, and head 12 in the second layer), as shown in Figure FIGREF37. We manually checked the attention maps in those heads for a set of random inputs, and established that both of them have high weights for words that appear in both sentences of the input examples. This most likely means that word-by-word comparison of the two sentences provides a solid strategy of making a classification prediction for STS-B and RTE. Unfortunately, we were not able to provide a conceptually similar interpretation of heads important for other tasks. Conclusion In this work, we proposed a set of methods for analyzing self-attention mechanisms of BERT, comparing attention patterns for the pre-trained and fine-tuned versions of BERT. Our most surprising finding is that, although attention is the key BERT's underlying mechanism, the model can benefit from attention "disabling". Moreover, we demonstrated that there is redundancy in the information encoded by different heads and the same patterns get consistently repeated regardless of the target task. We believe that these two findings together suggest a further direction for research on BERT interpretation, namely, model pruning and finding an optimal sub-architecture reducing data repetition. Another direction for future work is to study self-attention patterns in a different language. We think that it would allow to disentangle attention maps potentially encoding linguistic information and heads that use simple heuristics like attending to the following/previous tokens.
MRPC, STS-B, SST-2, QQP, RTE, QNLI, MNLI
b2c8c90041064183159cc825847c142b1309a849
b2c8c90041064183159cc825847c142b1309a849_0
Q: Do they predict the sentiment of the review summary? Text: Introduction Sentiment analysis BIBREF0, BIBREF1 is a fundamental task in natural language processing. In particular, sentiment analysis of user reviews has wide applicationsBIBREF2, BIBREF3, BIBREF4, BIBREF5. In many review websites such as Amazon and IMDb, the user is allowed to give a summary in addition to their review. Summaries usually contain more abstract information about the review. As shown in Figure FIGREF3, two screenshots of reviews were taken from Amazon and IMDb websites, respectively. The user-written summaries of these reviews can be highly indicative of the final polarity. As a result, it is worth considering them together with the review itself for making sentiment classification. To this end, some recent work BIBREF6, BIBREF7 exploits joint modeling. The model structure can be illustrated by Figure FIGREF4. In particular, given a review input, a model is trained to simultaneously predict the sentiment and summary. As a result, both summary information and review information are integrated in the review encoder through back-propagation training. However, one limitation of this method is that it does not explicitly encode a summary during test time. One solution, as shown in Figure FIGREF4, is to train a separate summary generator, which learns to predict a summary given a review. This allows a sentiment classifier to simultaneously encode the review and its summary, before making a prediction using both representations. One further advantage of this model is that it can make use of a user-given summary if it is available with the review, which is the case for the review websites shown in Figure 1. We therefore investigate such a model. One limitation of this method, however, is that it does not capture interaction of review and summary information as thoroughly as the method shown in Figure FIGREF4, since the review and the summary are encoded using two separate encoders. To address this issue, we further investigate a joint encoder for review and summary, which is demonstrated in Figure FIGREF4. The model works by jointly encoding the review and the summary in a multi-layer structure, incrementally updating the representation of the review by consulting the summary representation at each layer. As shown in Figure FIGREF5, our model consists of a summary encoder, a hierarchically-refined review encoder and an output layer. The review encoder is composed of multiple attention layers, each consisting of a sequence encoding layer and an attention inference layer. Summary information is integrated into the representation of the review content at each attention layer, thus, a more abstract review representation is learned in subsequent layers based on a lower-layer representation. This mechanism allows the summary to better guide the representation of the review in a bottom-up manner for improved sentiment classification. We evaluate our proposed model on the SNAP (Stanford Network Analysis Project) Amazon review datasets BIBREF8, which contain not only reviews and ratings, but also golden summaries. In scenarios where there is no user-written summary for a review, we use pointer-generator network BIBREF9 to generate abstractive summaries. Empirical results show that our model significantly outperforms all strong baselines, including joint modeling, separate encoder and joint encoder methods. 
In addition, our model achieves new state-of-the-art performance, attaining absolute improvements of 2.1% (with generated summaries) and 4.8% (with golden summaries) over the previous best method on the SNAP Amazon review benchmark. Related Work The majority of recent sentiment analysis models are based on either convolutional or recurrent neural networks to encode sequences BIBREF10, BIBREF11. In particular, attention-based models have been widely explored, which assign attention weights to hidden states to generate a representation of the input sequence. A hierarchical model with two levels of attention mechanisms was proposed for document classification BIBREF12. The self-attention mechanism has also been used in sentiment analysis BIBREF13, BIBREF14. However, BIBREF15 empirically showed that self-attention does not consistently agree with the most salient features, which means that self-attention models may suffer from attending to explicit but irrelevant sentiment words. Rationales have also been introduced to the sentiment analysis task. BIBREF16 proposed an unsupervised latent model that selects a rationale and then uses the rationale for sentiment analysis. A rationale-augmented CNN model BIBREF17 was proposed, which regards golden rationales as additional input and uses the probability as rationale-level attention weights to generate the final representation for text classification. There has also been work focusing on joint summarization and sentiment classification BIBREF6, BIBREF7, whose general structures are illustrated in Figure FIGREF4. These models can predict the sentiment label and the summary simultaneously. However, they do not encode summaries explicitly during test time, which limits their performance to some extent. Method In this section, we introduce our proposed model in detail. We first give the problem formulation, followed by an overview of the proposed model, and explain each layer of our model in detail, before finally giving the loss function and training methods. Method ::: Problem Formulation The input to our task is a pair $(X^w, X^s)$, where $X^w = x^w_1, x^w_2, ..., x^w_n$ is a review and $X^s = x^s_1, x^s_2,...,x^s_m$ is a summary; the task is to predict the sentiment label $y \in [1, 5]$, where 1 denotes the most negative sentiment and 5 denotes the most positive sentiment. $n$ and $m$ denote the size of the review and summary in the number of words, respectively. The training set is $D=\lbrace (X^w_i, X^s_i, y_i)\rbrace |_{i=1}^M$, where $M$ is the total number of training examples. Method ::: Model Overview Figure FIGREF5 gives the architecture of the proposed model, which consists of three modules: a summary encoder, a hierarchically-refined review encoder and an output layer. The summary encoder encodes the summary into a hidden state matrix. The review encoder consists of several layers for representing $\mathbf {x}^w$, each containing a sequence encoding sublayer and an attention inference sublayer. The sequence encoding sublayer encodes the review text as a word sequence. The attention inference layer acts as a key component, which takes the hidden states from both the original review and the summary as input, calculating dot-product attention weights for the original review under additional supervision from summary information. Multi-head attention BIBREF18 as well as residual connections are also adopted. The output layer predicts the sentiment label according to hidden states from the previous layer. 
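The problem formulation maps onto a simple data structure; the sketch below is purely illustrative (the field names and the toy example are not from the paper).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ReviewExample:
    review: List[str]    # X^w = x^w_1 ... x^w_n
    summary: List[str]   # X^s = x^s_1 ... x^s_m
    label: int           # y in [1, 5]; 1 = most negative, 5 = most positive

# a toy training set D = {(X^w_i, X^s_i, y_i)}, i = 1..M
D = [
    ReviewExample(review="the game is quite fun and much easier to learn than expected".split(),
                  summary="fun for all ages".split(),
                  label=5),
]
print(len(D[0].review), len(D[0].summary), D[0].label)   # n, m, y
```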
Method ::: Summary Encoder Input for the summary encoder is a sequence of summary word representations $\mathbf {x}^s = \mathbf {x}^s_1, \mathbf {x}^s_2, ..., \mathbf {x}^s_m = \lbrace emb(x_1^s), ..., emb(x_m^s)\rbrace $, where $emb$ denotes a word embedding lookup table. Word representations are fed into a standard BiLSTM. We adopt a standard LSTM formulation, where a sequence of hidden states $\mathbf {h}_t$ are calculated from a sequence of $\mathbf {x}_t$($t \in [1,...,m]$). A forward left-to-right LSTM layer and a backward right-to-left LSTM yield a sequence of forward hidden states $\lbrace {\stackrel{\rightarrow }{\mathbf {h}_1^s}},...,{\stackrel{\rightarrow }{\mathbf {h}_n^s}}\rbrace $ and a sequence of backward hidden states $\lbrace {\stackrel{\leftarrow }{\mathbf {h}_1^s}},...,{\stackrel{\leftarrow }{\mathbf {h}_n^s}}\rbrace $, respectively. The two hidden states are concatenated to form a final representation: We then apply an average-pooling operation over the hidden and take $\mathbf {h}^s = avg\_pooling(\mathbf {h}^s_1, \mathbf {h}^s_2,...,\mathbf {h}^s_n)$ as the final representation of summary text. Method ::: Hierarchically-Refined Review Encoder The hierarchically-refined review encoder consists of several review encoder layers, each of which is composed of a sequence encoding layer and an attention inference layer. Method ::: Hierarchically-Refined Review Encoder ::: Sequence Encoding Layer Given a review $\mathbf {x}^w = \lbrace emb(x_1^w),...,emb(x_n^w)\rbrace $, another BiLSTM is adopted (the same equation with different parameters compared to the one used in the summary encoder), deriving a sequence of review hidden states $\mathbf {H}^w=\lbrace \mathbf {h}^w_1, \mathbf {h}^w_2,...,\mathbf {h}^s_n \rbrace $. Method ::: Hierarchically-Refined Review Encoder ::: Attention Inference Layer In the attention inference layer, we model the dependencies between the original review and the summary with multi-head dot-product attention.Each head produces an attention matrix $\mathbf {\alpha } \in \mathbb {R}^{d_h \times 1}$ consisting of a set of similarity scores between the hidden state of each token of the review text and the summary representation. The hidden state outputs are calculated by where $\mathbf {W}_i^Q \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$, $\mathbf {W}_i^K \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$ and $\mathbf {W}_i^V \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$ are model parameters. $Q$, $K$ and $V$ represent Query, Key and Value, respectively. $k$ is the number of parallel heads and $i \in [1,k]$ indicates which head is being processed. Following BIBREF18, we adopt a residual connection around each attention inference layer, followed by layer normalization BIBREF19 : $\mathbf {H}$ is then fed to the subsequent sequence encoding layer as input, if any. According to the equations of standard LSTM and Equation DISPLAY_FORM13, tokens of the original review that are the most relevant to the summary are focused on more by consulting summary representation. The hidden states $\mathbf {H}^{w,s}$ are thus a representation matrix of the review text that encompass key features of summary representation. Multi-head attention mechanism ensures that multi-faced semantic dependency features can be captured during the process, which is beneficial for scenarios where several key points exist in one review. 
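A compressed PyTorch sketch of the encoder described above: each review-encoder layer runs a BiLSTM sequence-encoding sublayer and then multi-head dot-product attention in which the review hidden states act as queries and the pooled summary representation as key and value, followed by a residual connection and layer normalization. The hidden sizes, the projection of review embeddings, and the use of nn.MultiheadAttention are simplifying assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class ReviewEncoderLayer(nn.Module):
    """One sequence-encoding + attention-inference block (illustrative)."""
    def __init__(self, d_h=512, n_heads=4):
        super().__init__()
        self.bilstm = nn.LSTM(d_h, d_h // 2, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(d_h, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_h)

    def forward(self, review_h, summary_h):
        # sequence encoding sublayer over the review
        seq, _ = self.bilstm(review_h)                    # (B, n, d_h)
        # attention inference: queries from the review, key/value from
        # the pooled summary vector (treated as a length-1 sequence)
        attended, _ = self.attn(seq, summary_h, summary_h)
        return self.norm(seq + attended)                  # residual + layer norm

class HierRefinedModel(nn.Module):
    def __init__(self, vocab=30000, d_emb=300, d_h=512, n_layers=2, n_classes=5):
        super().__init__()
        self.emb = nn.Embedding(vocab, d_emb)
        self.summ_lstm = nn.LSTM(d_emb, d_h // 2, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(d_emb, d_h)
        self.layers = nn.ModuleList(ReviewEncoderLayer(d_h) for _ in range(n_layers))
        self.cls = nn.Linear(d_h, n_classes)

    def forward(self, review_ids, summary_ids):
        summ, _ = self.summ_lstm(self.emb(summary_ids))   # (B, m, d_h)
        summ = summ.mean(dim=1, keepdim=True)             # average pooling -> (B, 1, d_h)
        h = self.proj(self.emb(review_ids))               # (B, n, d_h)
        for layer in self.layers:
            h = layer(h, summ)                            # hierarchical refinement
        return self.cls(h.mean(dim=1))                    # pooled -> 5-way logits

logits = HierRefinedModel()(torch.randint(0, 30000, (2, 40)),
                            torch.randint(0, 30000, (2, 8)))
print(logits.shape)                                       # torch.Size([2, 5])
```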
Note also that our design of the review encoding part of the hierarchically-refined attention network is similar to the Transformer architecture in the use of multi-head attention, residual connection and layer normalization BIBREF18. However, our experiments show that bi-directional LSTM works better compared to self-attention network as a basic layer structure. This may result from the fact that Transformer requires a larger amount of training data for the most effectiveness. Method ::: Output Layer Finally, global average pooling is applied after the previous layer, and then followed by a classifier layer: where $\hat{y}$ is the predicted sentiment label; $\mathbf {W}$ and $\mathbf {b}$ are parameters to be learned. Method ::: Training Given a dataset $D={\lbrace (X^w_t,X^s_t,y_t)\rbrace }|^{|T|}_{t=1}$, our model can be trained by minimizing the cross-entropy loss between where $\mathbf {p}^{y_t}$ denotes the value of the label in $\mathbf {p}$ that corresponds to $y_t$. Experiments We compare our model with several strong baselines and previous state-of-the-art methods, investigating its main effects. Experiments ::: Datasets We empirically compare different methods using Amazon SNAP Review Dataset BIBREF20, which is a part of Stanford Network Analysis Project. The raw dataset consists of around 34 millions Amazon reviews in different domains, such as books, games, sports and movies. Each review mainly contains a product ID, a piece of user information, a plain text review, a review summary and an overall sentiment rating which ranges from 1 to 5. The statistics of our adopted dataset is shown in Table TABREF20. For fair comparison with previous work, we adopt the same partitions used by previous work BIBREF6, BIBREF7, which is, for each domain, the first 1000 samples are taken as the development set, the following 1000 samples as the test set, and the rest as the training set. Experiments ::: Experimental Settings We use GloVe BIBREF22 300-dimensional embeddings as pretrained word vectors. A LSTM hidden size of 256 and four heads for multi-head attention mechanism are adopted. We use Adam BIBREF23 to optimize our model, with an initial learning rate of 0.0003, a decay rate of 0.97, momentum parameters $\beta _1 = 0.9$, $\beta _2 = 0.999$, and $\epsilon = 1 \times 10^{-8}$. The dropout rate is set depending on the size of each dataset, which is 0.5 for both Toys & Games and Sports & Outdoors and 0.2 for Movies & TV. We conduct experiments with both golden summaries and generated summaries. For generating automatic-decoded summaries, we train a pointer-generator network (PG-Net) with coverage mechanism BIBREF9, which is a specially designed sequence-to-sequence attention-based model that can generate the summary by copying words from the text document or generating words from a fixed vocabulary set at the same time. We generally follow the experimental settings in the original paper except for some minor adjustments specially made for our datasets. Noted that in our work PG-Net can be replaced by any other summarization model. Experiments ::: Baselines ::: HSSC @!START@BIBREF6@!END@. This model adopts encoder parameter sharing for jointly sentiment classification and summarization. It predicts the sentiment label using a highway layer, concatenating the hidden state in summary decoder and the original text representation in encoder. Experiments ::: Baselines ::: SAHSSC @!START@BIBREF7@!END@. 
This work also adopts encoder parameter sharing for jointly sentiment classification and summarization. They use two separate BiLSTMs with self-attention mechanism for generating review and summary representations. Experiments ::: Baselines ::: BiLSTM+Pooling. For this baseline, we use a BiLSTM with hidden sizes of 256 in both directions, and average pooling across all hidden states to form the representation. This method serves as a naive baseline for making use of both review and summary in sentiment classification. It can also be used to compare the effectiveness of the review itself, the summary itself and the combination of both when used as inputs to the problem. Experiments ::: Baselines ::: BiLSTM+Self-attention @!START@BIBREF13@!END@. This baseline uses a BiLSTM with hidden size of 256 in both directions. On the top of BiLSTM, self-attention is used to provide a set of summation weight vectors for the final representation. This method is conceptually simple yet gives the state-of-the-art results for many classification and text matching tasks. Its main difference to our model lies in the fact that attention is performed only in the top hidden layer in this method, yet in every layer in ours. Experiments ::: Baselines ::: BiLSTM+Hard Attention To demonstrate the efficiency of our model structure, we also adopt hard attention BIBREF24 for comparison, which is supervised using an extractive summarization objective. In particular, words in the original review that match to the corresponding summary are treated as the summary in their original order. In the case of Figure FIGREF3, the extractive summaries for the review are “James Cameron's Titanic is easily the most overrated film in history”, which corresponds to the user-written summary “James Cameron's 1997 Titanic is easily the most overrated film in history!”. The model also calculates another loss between attention weights and extractive summary labels, so that the hard attention weights are trained to strictly follow the extractive summary. For baselines that adopt the separate encoder structure, we generally calculate the representations of review and summary separately with two encoders that hold their own parameters, and then concatenate the two representations alongside the hidden-size dimension. For the joint encoder baselines, we first concatenate the review and summary text, and then encode the concatenated text with one single encoder. Experiments ::: Development Experiments We use the Toys & Games development set to investigate different key configurations of our model. The results are shown in Table TABREF29. Experiments ::: Development Experiments ::: Self-attention Baseline We compare different numbers of BiLSTM layers and hidden sizes in BiLSTM self-attention. As can be seen, with more layers a stacked BiLSTM with larger hidden sizes does not give better results compared to a hidden size of 256 either. Experiments ::: Development Experiments ::: Hidden Size We see an evident improvement of our model when the hidden size increases from 128 to 256. However, the improvement becomes relatively small compared to a large increase in the number of parameters when the hidden size is further increased to 360. Therefore, we adopt 256 as the hidden size in our experiments. Experiments ::: Development Experiments ::: Number of Layers As Table TABREF29 shows, the accuracy increases when increasing layer numbers from 1 to 2. More layers do not increase the accuracy on development set. 
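Going back to the hard-attention baseline above: it needs token-level supervision marking which review words also appear in the summary, kept in their original order. One simple way to build such extractive labels is sketched below; lower-casing is an assumption, and no stop-word filtering is applied.

```python
def extractive_labels(review_tokens, summary_tokens):
    """Return a 0/1 label per review token: 1 if the token also occurs in
    the summary, so the marked tokens read as an extractive summary in
    their original order."""
    summary_vocab = {tok.lower() for tok in summary_tokens}
    return [1 if tok.lower() in summary_vocab else 0 for tok in review_tokens]

review = "James Cameron 's Titanic is easily the most overrated film in history".split()
summary = "James Cameron 's 1997 Titanic is easily the most overrated film in history !".split()
labels = extractive_labels(review, summary)
print([tok for tok, keep in zip(review, labels) if keep])
# ['James', 'Cameron', "'s", 'Titanic', 'is', 'easily', 'the', 'most',
#  'overrated', 'film', 'in', 'history']
```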
We thus set 2 as the number of review encoder layers in the experiments. The best performing model size is comparable to that of the BiLSTM self-attention, demonstrating that the number of parameters is not the key factor to models' performance. Experiments ::: Results Table TABREF34 and Table TABREF35 show the final results. Our model outperforms all the baseline models and the top-performing models with both generated summary and golden summary, for all the three datasets. In the scenario where golden summaries are used, BiLSTM+self-attention performs the best among all the baselines, which shows that attention is a useful way to integrate summary and review information. Hard-attention receives more supervision information compared with soft-attention, by supervision signals from extractive summaries. However, it underperforms the soft attention model, which indicates that the most salient words for making sentiment classification may not strictly overlap with extractive summaries. This justifies the importance of user written or automatic-generated summary. A comparison between models that use summary information and those that do not use summary information shows that the review summary is useful for sentiment classification. In addition, the same models work consistently better when the user written gold summary is used compared to a system generated summary, which is intuitively reasonable since the current state-of-the-art abstractive summarization models are far from perfect. Interestingly, as shown in the second section of the table, the gold summary itself does not lead to better sentiment accuracy compared with the review itself, which shows that summaries better serve as auxiliary information sources to review contents. With both gold summaries and automatic-generated summaries, our model gives better results as compared to BiLSTM+self-attention. The latter integrates information from reviews and summaries only in the top representation layer, which is also the standard practice in question answering BIBREF25 and machine translation BIBREF26 models. In contrast, our model integrates summary information into the review representation in each layer, thereby allowing the integrated representation to be hierarchically refined, leading to more abstract hidden states. Finally, the fact that with gold summary, our baseline and final models outperforms the state-of-the-art methods by jointly training shows the importance of making use of user written summaries when they are available. Even with system summary, out models still outperforms HSSC and SAHSSC, showing that our network is more effective than parameter sharing under the same setting without input summaries. Experiments ::: Results ::: Review Length Figure FIGREF37 consists of line graphs on the accuracy of BiLSTM+self-attention, BiLSTM+pooling and our model against the review length. As the review length increases, the performance of all models decreases. BiLSTM+self-attention does not outperform BiLSTM+pooling on long text. Our method gives better results compared to two baseline models for long reviews, demonstrating that our model is effective for capturing long-term dependency. This is likely because hierarchically-refined attention maintains the most salient information while ignoring the redundant parts of the original review text. Our model can thus be more robust when review has irrelevant sentimental words, which usually exists in larger reviews such as the example in Figure FIGREF3. 
The hierarchical architecture allows the lower layers to encode local information, while the higher layers can capture long-term dependency and thus better encode global information. Experiments ::: Results ::: Case Study Our model has a natural advantage of interpretability thanks to the use of attention inference layer. We visualize the hierarchically-refined attention of two samples from the test set of Toys & Games. We also visualize self-attention distribution for fair comparison. To make the visualizations clear and to avoid confusion, we choose to visualize the most salient parts, by rescaling all attention weights into an interval of $[0, 100]$ and adopting 50 as a threshold for attention visualization, showing only attention weights $\ge 50$. As shown in Figure FIGREF38, the example with generated summary has 5 stars as its golden rating score. The summary text is “fun for the whole new game in all ages ! ! ! fun ! ! !", which suggests that the game is (1) fun (from word “fun") and (2) not difficult to learn (from phrase “all ages"). It can be seen that both the self-attention model and the first layer of our model attend to the strongly positive phrase “quite fun", which is relevant to the word “fun" in the summary. In comparisons the second layer attends to the phrase “much easier", which is relevant to the phrase “in all ages" in the summary. This verifies our model's effectiveness of leveraging abstractive summary information. Figure FIGREF38 illustrates a 5-star-rating example with golden summary. The summary text is “Favorite Game to Teach to Newbies". As shown in the heatmap, self-attention can only attend to some general sentimental words, such as “hard", “fun", “immensely" and “most", which deviates from the main idea of the document text. In comparison, the first layer of our model attends to phrases like “easy to teach", which is a perfect match of the phrase “teach to newbies" in the summary. This shows that the shallow sequence inference layer can learn direct similarity matching information under the supervision of summarization. In addition, the second layer of our model attends to phrases including “would recommend this to anyone", which links to “easy to teach" and “Teach to Newbies", showing that the deeper sequence inference layer of our model can learn potential connections between the review and the summary. Conclusion We investigated a hierarchically-refined attention network for better sentiment prediction. Our model allows multi-interaction between summary and review representation in a hierarchical manner. Empirical results show that the proposed method outperforms all strong baselines and previous work and achieves new state-of-the-art performance on SNAP Amazon Review dataset.
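The visualization filter used in the case study above, which rescales attention weights into $[0, 100]$ and keeps only weights $\ge 50$, can be sketched as follows. Min-max rescaling per example and the toy tokens and weights are assumptions of this sketch.

```python
# Minimal sketch of the visualization filter described in the case study:
# rescale attention weights into [0, 100] and keep only weights >= 50.
# Min-max rescaling per example is an assumption of this sketch.

def salient_tokens(tokens, weights, threshold=50.0):
    lo, hi = min(weights), max(weights)
    span = (hi - lo) or 1.0
    rescaled = [100.0 * (w - lo) / span for w in weights]
    return [(t, round(r, 1)) for t, r in zip(tokens, rescaled) if r >= threshold]

tokens = ["this", "game", "is", "quite", "fun", "and", "much", "easier", "to", "learn"]
weights = [0.02, 0.03, 0.02, 0.18, 0.22, 0.03, 0.15, 0.20, 0.05, 0.04]
print(salient_tokens(tokens, weights))
# keeps "quite", "fun", "much", "easier" -- the phrases discussed above
```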
No
68e3f3908687505cb63b538e521756390c321a1c
68e3f3908687505cb63b538e521756390c321a1c_0
Q: What is the performance difference of using a generated summary vs. a user-written one? Text: Introduction Sentiment analysis BIBREF0, BIBREF1 is a fundamental task in natural language processing. In particular, sentiment analysis of user reviews has wide applicationsBIBREF2, BIBREF3, BIBREF4, BIBREF5. In many review websites such as Amazon and IMDb, the user is allowed to give a summary in addition to their review. Summaries usually contain more abstract information about the review. As shown in Figure FIGREF3, two screenshots of reviews were taken from Amazon and IMDb websites, respectively. The user-written summaries of these reviews can be highly indicative of the final polarity. As a result, it is worth considering them together with the review itself for making sentiment classification. To this end, some recent work BIBREF6, BIBREF7 exploits joint modeling. The model structure can be illustrated by Figure FIGREF4. In particular, given a review input, a model is trained to simultaneously predict the sentiment and summary. As a result, both summary information and review information are integrated in the review encoder through back-propagation training. However, one limitation of this method is that it does not explicitly encode a summary during test time. One solution, as shown in Figure FIGREF4, is to train a separate summary generator, which learns to predict a summary given a review. This allows a sentiment classifier to simultaneously encode the review and its summary, before making a prediction using both representations. One further advantage of this model is that it can make use of a user-given summary if it is available with the review, which is the case for the review websites shown in Figure 1. We therefore investigate such a model. One limitation of this method, however, is that it does not capture interaction of review and summary information as thoroughly as the method shown in Figure FIGREF4, since the review and the summary are encoded using two separate encoders. To address this issue, we further investigate a joint encoder for review and summary, which is demonstrated in Figure FIGREF4. The model works by jointly encoding the review and the summary in a multi-layer structure, incrementally updating the representation of the review by consulting the summary representation at each layer. As shown in Figure FIGREF5, our model consists of a summary encoder, a hierarchically-refined review encoder and an output layer. The review encoder is composed of multiple attention layers, each consisting of a sequence encoding layer and an attention inference layer. Summary information is integrated into the representation of the review content at each attention layer, thus, a more abstract review representation is learned in subsequent layers based on a lower-layer representation. This mechanism allows the summary to better guide the representation of the review in a bottom-up manner for improved sentiment classification. We evaluate our proposed model on the SNAP (Stanford Network Analysis Project) Amazon review datasets BIBREF8, which contain not only reviews and ratings, but also golden summaries. In scenarios where there is no user-written summary for a review, we use pointer-generator network BIBREF9 to generate abstractive summaries. Empirical results show that our model significantly outperforms all strong baselines, including joint modeling, separate encoder and joint encoder methods. 
In addition, our model achieves new state-of-the-art performance, attaining 2.1% (with generated summary) and 4.8% (with golden summary) absolutely improvements compared to the previous best method on SNAP Amazon review benchmark. Related Work The majority of recent sentiment analysis models are based on either convolutional or recurrent neural networks to encode sequences BIBREF10, BIBREF11. In particular, attention-based models have been widely explored, which assign attention weights to hidden states to generate a representation of the input sequence. A hierarchical model with two levels of attention mechanisms was proposed for document classification BIBREF12. Self-attention mechanism has also been used in sentiment analysis BIBREF13, BIBREF14. However, BIBREF15 empirically showed that self-attention mechanism does not consistently agree with the most salient features, which means that self-attention models may suffer from attending on explicit but irrelevant sentimental words. Rationales were also introduced to sentiment analysis task. BIBREF16 proposed a unsupervised latent model that selects a rationale and then uses the rationale for sentiment analysis. A rationale-augmented CNN model BIBREF17 was proposed, which regards golden rationales as additional input and uses the probability as rationale-level attention weights to generate the final representation for text classification. There has also been work focusing on joint summarization and sentiment classification BIBREF6, BIBREF7, whose general structures are illustrated in Figure FIGREF4. These models can predict sentiment label and summary simultaneously. However, they do not encode summaries explicitly during test time, which makes their performance be limited to some extent. Method In this section, we introduce our proposed model in details. We first give the problem formulation, followed by an overview of the proposed model, and explain each layer of our model in details, before finally giving the loss function and training methods. Method ::: Problem Formulation The input to our task is a pair $(X^w, X^s)$, where $X^w = x^w_1, x^w_2, ..., x^w_n$ is a summary and $X^s = x^s_1, x^s_2,...,x^s_m$ is a review, the task is to predict the sentiment label $y \in [1, 5]$, where 1 denotes the most negative sentiment and 5 denotes the most positive sentiment. $n$ and $m$ denote the size of the review and summary in the number of words, respectively. The training set is $D=\lbrace (X^w_i, X^s_i, y_i)\rbrace |_{i=1}^M$ where $M$ is the total number of training examples. Method ::: Model Overview Figure FIGREF5 gives the architecture of the proposed model, which consists of three modules: a summary encoder, a hierarchically-refined review encoder and an output layer. The summary encoder encodes the summary into a hidden state matrix. The review encoder consists of several layers for representing $\mathbf {x}^w$, each containing a sequence encoding sublayer and an attention inference sublayer. The sequence encoding sublayer encodes the review text as a word sequence. The attention inference layer acts as a key component, which takes the hidden states from both the original review and the summary as input calculating dot-product attention weights for original review under additional supervision from summary information. Multi-head attention BIBREF18 as well as residual connection are also adopted. The output layer predicts the potential sentiment label according to hidden states from the previous layer. 
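The input/output interface defined in the problem formulation above can be sketched as follows; the field names and the toy example are illustrative assumptions, not part of the released data format.

```python
# Minimal sketch of the task's data interface: each training example pairs a
# review and a summary with a rating in [1, 5]. Field names are illustrative.
from dataclasses import dataclass
from typing import List

@dataclass
class Example:
    review: List[str]    # n review tokens
    summary: List[str]   # m summary tokens
    label: int           # y in {1, ..., 5}; 1 = most negative, 5 = most positive

train_set = [
    Example(review="this game is quite fun and much easier to learn".split(),
            summary="fun for all ages".split(),
            label=5),
]
assert all(1 <= ex.label <= 5 for ex in train_set)
```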
Method ::: Summary Encoder Input for the summary encoder is a sequence of summary word representations $\mathbf {x}^s = \mathbf {x}^s_1, \mathbf {x}^s_2, ..., \mathbf {x}^s_m = \lbrace emb(x_1^s), ..., emb(x_m^s)\rbrace $, where $emb$ denotes a word embedding lookup table. Word representations are fed into a standard BiLSTM. We adopt a standard LSTM formulation, where a sequence of hidden states $\mathbf {h}_t$ is calculated from a sequence of inputs $\mathbf {x}_t$ ($t \in [1,...,m]$). A forward left-to-right LSTM layer and a backward right-to-left LSTM layer yield a sequence of forward hidden states $\lbrace {\stackrel{\rightarrow }{\mathbf {h}_1^s}},...,{\stackrel{\rightarrow }{\mathbf {h}_m^s}}\rbrace $ and a sequence of backward hidden states $\lbrace {\stackrel{\leftarrow }{\mathbf {h}_1^s}},...,{\stackrel{\leftarrow }{\mathbf {h}_m^s}}\rbrace $, respectively. The two hidden states are concatenated to form the final representation at each position: $\mathbf {h}^s_t = [{\stackrel{\rightarrow }{\mathbf {h}^s_t}};{\stackrel{\leftarrow }{\mathbf {h}^s_t}}]$. We then apply an average-pooling operation over the hidden states and take $\mathbf {h}^s = avg\_pooling(\mathbf {h}^s_1, \mathbf {h}^s_2,...,\mathbf {h}^s_m)$ as the final representation of the summary text. Method ::: Hierarchically-Refined Review Encoder The hierarchically-refined review encoder consists of several review encoder layers, each of which is composed of a sequence encoding layer and an attention inference layer. Method ::: Hierarchically-Refined Review Encoder ::: Sequence Encoding Layer Given a review $\mathbf {x}^w = \lbrace emb(x_1^w),...,emb(x_n^w)\rbrace $, another BiLSTM is adopted (with the same equations but different parameters compared to the one used in the summary encoder), deriving a sequence of review hidden states $\mathbf {H}^w=\lbrace \mathbf {h}^w_1, \mathbf {h}^w_2,...,\mathbf {h}^w_n \rbrace $. Method ::: Hierarchically-Refined Review Encoder ::: Attention Inference Layer In the attention inference layer, we model the dependencies between the original review and the summary with multi-head dot-product attention. Each head produces an attention matrix $\mathbf {\alpha } \in \mathbb {R}^{d_h \times 1}$ consisting of a set of similarity scores between the hidden state of each token of the review text and the summary representation. The hidden state outputs are calculated as in Equation DISPLAY_FORM13, where $\mathbf {W}_i^Q \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$, $\mathbf {W}_i^K \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$ and $\mathbf {W}_i^V \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$ are model parameters. $Q$, $K$ and $V$ represent Query, Key and Value, respectively. $k$ is the number of parallel heads and $i \in [1,k]$ indicates which head is being processed. Following BIBREF18, we adopt a residual connection around each attention inference layer, followed by layer normalization BIBREF19. $\mathbf {H}$ is then fed to the subsequent sequence encoding layer as input, if any. According to the equations of the standard LSTM and Equation DISPLAY_FORM13, tokens of the original review that are the most relevant to the summary are attended to more strongly by consulting the summary representation. The hidden states $\mathbf {H}^{w,s}$ are thus a representation matrix of the review text that encompasses key features of the summary representation. The multi-head attention mechanism ensures that multi-faceted semantic dependency features can be captured during the process, which is beneficial for scenarios where several key points exist in one review.
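The summary encoder and a single attention inference layer described above can be sketched in PyTorch as follows. Since the attention equations are not reproduced in this excerpt, the exact form below, with queries projected from the review hidden states and keys and values projected from the summary hidden states, is one plausible reading rather than the authors' implementation; the hidden size of 256 and four heads follow the settings reported in the experiments.

```python
# Sketch (an interpretation, not the authors' code) of the summary encoder and
# one attention inference layer: BiLSTM summary encoding with average pooling,
# and multi-head dot-product attention with residual + layer normalization.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SummaryEncoder(nn.Module):
    """BiLSTM over the summary; returns the hidden-state matrix and the pooled h^s."""
    def __init__(self, emb_dim, hidden):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden // 2, batch_first=True, bidirectional=True)

    def forward(self, summary_emb):                  # (B, m, emb_dim)
        states, _ = self.bilstm(summary_emb)         # (B, m, hidden)
        return states, states.mean(dim=1)            # matrix and average-pooled vector

class AttentionInferenceLayer(nn.Module):
    """Multi-head dot-product attention from review states to summary states,
    with a residual connection and layer normalization."""
    def __init__(self, hidden, heads=4):
        super().__init__()
        assert hidden % heads == 0
        self.heads, self.dk = heads, hidden // heads
        self.wq = nn.Linear(hidden, hidden, bias=False)   # W^Q (queries: review)
        self.wk = nn.Linear(hidden, hidden, bias=False)   # W^K (keys: summary)
        self.wv = nn.Linear(hidden, hidden, bias=False)   # W^V (values: summary)
        self.norm = nn.LayerNorm(hidden)

    def _split(self, x):                                  # (B, L, H) -> (B, heads, L, dk)
        B, L, _ = x.shape
        return x.view(B, L, self.heads, self.dk).transpose(1, 2)

    def forward(self, review_states, summary_states):     # (B, n, H), (B, m, H)
        q = self._split(self.wq(review_states))
        k = self._split(self.wk(summary_states))
        v = self._split(self.wv(summary_states))
        att = F.softmax(q @ k.transpose(-1, -2) / self.dk ** 0.5, dim=-1)  # (B, heads, n, m)
        out = (att @ v).transpose(1, 2).reshape(review_states.shape)       # (B, n, H)
        return self.norm(review_states + out)              # residual + layer norm

# toy usage with a hidden size of 256 and four heads
summary_enc = SummaryEncoder(emb_dim=300, hidden=256)
att_layer = AttentionInferenceLayer(hidden=256, heads=4)
summary_states, _ = summary_enc(torch.randn(2, 12, 300))
refined = att_layer(torch.randn(2, 40, 256), summary_states)
print(refined.shape)   # torch.Size([2, 40, 256])
```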
Note also that our design of the review encoding part of the hierarchically-refined attention network is similar to the Transformer architecture in the use of multi-head attention, residual connection and layer normalization BIBREF18. However, our experiments show that bi-directional LSTM works better compared to self-attention network as a basic layer structure. This may result from the fact that Transformer requires a larger amount of training data for the most effectiveness. Method ::: Output Layer Finally, global average pooling is applied after the previous layer, and then followed by a classifier layer: where $\hat{y}$ is the predicted sentiment label; $\mathbf {W}$ and $\mathbf {b}$ are parameters to be learned. Method ::: Training Given a dataset $D={\lbrace (X^w_t,X^s_t,y_t)\rbrace }|^{|T|}_{t=1}$, our model can be trained by minimizing the cross-entropy loss between where $\mathbf {p}^{y_t}$ denotes the value of the label in $\mathbf {p}$ that corresponds to $y_t$. Experiments We compare our model with several strong baselines and previous state-of-the-art methods, investigating its main effects. Experiments ::: Datasets We empirically compare different methods using Amazon SNAP Review Dataset BIBREF20, which is a part of Stanford Network Analysis Project. The raw dataset consists of around 34 millions Amazon reviews in different domains, such as books, games, sports and movies. Each review mainly contains a product ID, a piece of user information, a plain text review, a review summary and an overall sentiment rating which ranges from 1 to 5. The statistics of our adopted dataset is shown in Table TABREF20. For fair comparison with previous work, we adopt the same partitions used by previous work BIBREF6, BIBREF7, which is, for each domain, the first 1000 samples are taken as the development set, the following 1000 samples as the test set, and the rest as the training set. Experiments ::: Experimental Settings We use GloVe BIBREF22 300-dimensional embeddings as pretrained word vectors. A LSTM hidden size of 256 and four heads for multi-head attention mechanism are adopted. We use Adam BIBREF23 to optimize our model, with an initial learning rate of 0.0003, a decay rate of 0.97, momentum parameters $\beta _1 = 0.9$, $\beta _2 = 0.999$, and $\epsilon = 1 \times 10^{-8}$. The dropout rate is set depending on the size of each dataset, which is 0.5 for both Toys & Games and Sports & Outdoors and 0.2 for Movies & TV. We conduct experiments with both golden summaries and generated summaries. For generating automatic-decoded summaries, we train a pointer-generator network (PG-Net) with coverage mechanism BIBREF9, which is a specially designed sequence-to-sequence attention-based model that can generate the summary by copying words from the text document or generating words from a fixed vocabulary set at the same time. We generally follow the experimental settings in the original paper except for some minor adjustments specially made for our datasets. Noted that in our work PG-Net can be replaced by any other summarization model. Experiments ::: Baselines ::: HSSC @!START@BIBREF6@!END@. This model adopts encoder parameter sharing for jointly sentiment classification and summarization. It predicts the sentiment label using a highway layer, concatenating the hidden state in summary decoder and the original text representation in encoder. Experiments ::: Baselines ::: SAHSSC @!START@BIBREF7@!END@. 
This work also adopts encoder parameter sharing for jointly sentiment classification and summarization. They use two separate BiLSTMs with self-attention mechanism for generating review and summary representations. Experiments ::: Baselines ::: BiLSTM+Pooling. For this baseline, we use a BiLSTM with hidden sizes of 256 in both directions, and average pooling across all hidden states to form the representation. This method serves as a naive baseline for making use of both review and summary in sentiment classification. It can also be used to compare the effectiveness of the review itself, the summary itself and the combination of both when used as inputs to the problem. Experiments ::: Baselines ::: BiLSTM+Self-attention @!START@BIBREF13@!END@. This baseline uses a BiLSTM with hidden size of 256 in both directions. On the top of BiLSTM, self-attention is used to provide a set of summation weight vectors for the final representation. This method is conceptually simple yet gives the state-of-the-art results for many classification and text matching tasks. Its main difference to our model lies in the fact that attention is performed only in the top hidden layer in this method, yet in every layer in ours. Experiments ::: Baselines ::: BiLSTM+Hard Attention To demonstrate the efficiency of our model structure, we also adopt hard attention BIBREF24 for comparison, which is supervised using an extractive summarization objective. In particular, words in the original review that match to the corresponding summary are treated as the summary in their original order. In the case of Figure FIGREF3, the extractive summaries for the review are “James Cameron's Titanic is easily the most overrated film in history”, which corresponds to the user-written summary “James Cameron's 1997 Titanic is easily the most overrated film in history!”. The model also calculates another loss between attention weights and extractive summary labels, so that the hard attention weights are trained to strictly follow the extractive summary. For baselines that adopt the separate encoder structure, we generally calculate the representations of review and summary separately with two encoders that hold their own parameters, and then concatenate the two representations alongside the hidden-size dimension. For the joint encoder baselines, we first concatenate the review and summary text, and then encode the concatenated text with one single encoder. Experiments ::: Development Experiments We use the Toys & Games development set to investigate different key configurations of our model. The results are shown in Table TABREF29. Experiments ::: Development Experiments ::: Self-attention Baseline We compare different numbers of BiLSTM layers and hidden sizes in BiLSTM self-attention. As can be seen, with more layers a stacked BiLSTM with larger hidden sizes does not give better results compared to a hidden size of 256 either. Experiments ::: Development Experiments ::: Hidden Size We see an evident improvement of our model when the hidden size increases from 128 to 256. However, the improvement becomes relatively small compared to a large increase in the number of parameters when the hidden size is further increased to 360. Therefore, we adopt 256 as the hidden size in our experiments. Experiments ::: Development Experiments ::: Number of Layers As Table TABREF29 shows, the accuracy increases when increasing layer numbers from 1 to 2. More layers do not increase the accuracy on development set. 
We thus set 2 as the number of review encoder layers in the experiments. The best performing model size is comparable to that of the BiLSTM self-attention, demonstrating that the number of parameters is not the key factor to models' performance. Experiments ::: Results Table TABREF34 and Table TABREF35 show the final results. Our model outperforms all the baseline models and the top-performing models with both generated summary and golden summary, for all the three datasets. In the scenario where golden summaries are used, BiLSTM+self-attention performs the best among all the baselines, which shows that attention is a useful way to integrate summary and review information. Hard-attention receives more supervision information compared with soft-attention, by supervision signals from extractive summaries. However, it underperforms the soft attention model, which indicates that the most salient words for making sentiment classification may not strictly overlap with extractive summaries. This justifies the importance of user written or automatic-generated summary. A comparison between models that use summary information and those that do not use summary information shows that the review summary is useful for sentiment classification. In addition, the same models work consistently better when the user written gold summary is used compared to a system generated summary, which is intuitively reasonable since the current state-of-the-art abstractive summarization models are far from perfect. Interestingly, as shown in the second section of the table, the gold summary itself does not lead to better sentiment accuracy compared with the review itself, which shows that summaries better serve as auxiliary information sources to review contents. With both gold summaries and automatic-generated summaries, our model gives better results as compared to BiLSTM+self-attention. The latter integrates information from reviews and summaries only in the top representation layer, which is also the standard practice in question answering BIBREF25 and machine translation BIBREF26 models. In contrast, our model integrates summary information into the review representation in each layer, thereby allowing the integrated representation to be hierarchically refined, leading to more abstract hidden states. Finally, the fact that with gold summary, our baseline and final models outperforms the state-of-the-art methods by jointly training shows the importance of making use of user written summaries when they are available. Even with system summary, out models still outperforms HSSC and SAHSSC, showing that our network is more effective than parameter sharing under the same setting without input summaries. Experiments ::: Results ::: Review Length Figure FIGREF37 consists of line graphs on the accuracy of BiLSTM+self-attention, BiLSTM+pooling and our model against the review length. As the review length increases, the performance of all models decreases. BiLSTM+self-attention does not outperform BiLSTM+pooling on long text. Our method gives better results compared to two baseline models for long reviews, demonstrating that our model is effective for capturing long-term dependency. This is likely because hierarchically-refined attention maintains the most salient information while ignoring the redundant parts of the original review text. Our model can thus be more robust when review has irrelevant sentimental words, which usually exists in larger reviews such as the example in Figure FIGREF3. 
The hierarchical architecture allows the lower layers to encode local information, while the higher layers can capture long-term dependency and thus better encode global information. Experiments ::: Results ::: Case Study Our model has a natural advantage of interpretability thanks to the use of attention inference layer. We visualize the hierarchically-refined attention of two samples from the test set of Toys & Games. We also visualize self-attention distribution for fair comparison. To make the visualizations clear and to avoid confusion, we choose to visualize the most salient parts, by rescaling all attention weights into an interval of $[0, 100]$ and adopting 50 as a threshold for attention visualization, showing only attention weights $\ge 50$. As shown in Figure FIGREF38, the example with generated summary has 5 stars as its golden rating score. The summary text is “fun for the whole new game in all ages ! ! ! fun ! ! !", which suggests that the game is (1) fun (from word “fun") and (2) not difficult to learn (from phrase “all ages"). It can be seen that both the self-attention model and the first layer of our model attend to the strongly positive phrase “quite fun", which is relevant to the word “fun" in the summary. In comparisons the second layer attends to the phrase “much easier", which is relevant to the phrase “in all ages" in the summary. This verifies our model's effectiveness of leveraging abstractive summary information. Figure FIGREF38 illustrates a 5-star-rating example with golden summary. The summary text is “Favorite Game to Teach to Newbies". As shown in the heatmap, self-attention can only attend to some general sentimental words, such as “hard", “fun", “immensely" and “most", which deviates from the main idea of the document text. In comparison, the first layer of our model attends to phrases like “easy to teach", which is a perfect match of the phrase “teach to newbies" in the summary. This shows that the shallow sequence inference layer can learn direct similarity matching information under the supervision of summarization. In addition, the second layer of our model attends to phrases including “would recommend this to anyone", which links to “easy to teach" and “Teach to Newbies", showing that the deeper sequence inference layer of our model can learn potential connections between the review and the summary. Conclusion We investigated a hierarchically-refined attention network for better sentiment prediction. Our model allows multi-interaction between summary and review representation in a hierarchical manner. Empirical results show that the proposed method outperforms all strong baselines and previous work and achieves new state-of-the-art performance on SNAP Amazon Review dataset.
2.7 accuracy points
2f9d30e10323cf3a6c9804ecdc7d5872d8ae35e4
2f9d30e10323cf3a6c9804ecdc7d5872d8ae35e4_0
Q: Which review dataset do they use? Text: Introduction Sentiment analysis BIBREF0, BIBREF1 is a fundamental task in natural language processing. In particular, sentiment analysis of user reviews has wide applicationsBIBREF2, BIBREF3, BIBREF4, BIBREF5. In many review websites such as Amazon and IMDb, the user is allowed to give a summary in addition to their review. Summaries usually contain more abstract information about the review. As shown in Figure FIGREF3, two screenshots of reviews were taken from Amazon and IMDb websites, respectively. The user-written summaries of these reviews can be highly indicative of the final polarity. As a result, it is worth considering them together with the review itself for making sentiment classification. To this end, some recent work BIBREF6, BIBREF7 exploits joint modeling. The model structure can be illustrated by Figure FIGREF4. In particular, given a review input, a model is trained to simultaneously predict the sentiment and summary. As a result, both summary information and review information are integrated in the review encoder through back-propagation training. However, one limitation of this method is that it does not explicitly encode a summary during test time. One solution, as shown in Figure FIGREF4, is to train a separate summary generator, which learns to predict a summary given a review. This allows a sentiment classifier to simultaneously encode the review and its summary, before making a prediction using both representations. One further advantage of this model is that it can make use of a user-given summary if it is available with the review, which is the case for the review websites shown in Figure 1. We therefore investigate such a model. One limitation of this method, however, is that it does not capture interaction of review and summary information as thoroughly as the method shown in Figure FIGREF4, since the review and the summary are encoded using two separate encoders. To address this issue, we further investigate a joint encoder for review and summary, which is demonstrated in Figure FIGREF4. The model works by jointly encoding the review and the summary in a multi-layer structure, incrementally updating the representation of the review by consulting the summary representation at each layer. As shown in Figure FIGREF5, our model consists of a summary encoder, a hierarchically-refined review encoder and an output layer. The review encoder is composed of multiple attention layers, each consisting of a sequence encoding layer and an attention inference layer. Summary information is integrated into the representation of the review content at each attention layer, thus, a more abstract review representation is learned in subsequent layers based on a lower-layer representation. This mechanism allows the summary to better guide the representation of the review in a bottom-up manner for improved sentiment classification. We evaluate our proposed model on the SNAP (Stanford Network Analysis Project) Amazon review datasets BIBREF8, which contain not only reviews and ratings, but also golden summaries. In scenarios where there is no user-written summary for a review, we use pointer-generator network BIBREF9 to generate abstractive summaries. Empirical results show that our model significantly outperforms all strong baselines, including joint modeling, separate encoder and joint encoder methods. 
In addition, our model achieves new state-of-the-art performance, attaining 2.1% (with generated summary) and 4.8% (with golden summary) absolutely improvements compared to the previous best method on SNAP Amazon review benchmark. Related Work The majority of recent sentiment analysis models are based on either convolutional or recurrent neural networks to encode sequences BIBREF10, BIBREF11. In particular, attention-based models have been widely explored, which assign attention weights to hidden states to generate a representation of the input sequence. A hierarchical model with two levels of attention mechanisms was proposed for document classification BIBREF12. Self-attention mechanism has also been used in sentiment analysis BIBREF13, BIBREF14. However, BIBREF15 empirically showed that self-attention mechanism does not consistently agree with the most salient features, which means that self-attention models may suffer from attending on explicit but irrelevant sentimental words. Rationales were also introduced to sentiment analysis task. BIBREF16 proposed a unsupervised latent model that selects a rationale and then uses the rationale for sentiment analysis. A rationale-augmented CNN model BIBREF17 was proposed, which regards golden rationales as additional input and uses the probability as rationale-level attention weights to generate the final representation for text classification. There has also been work focusing on joint summarization and sentiment classification BIBREF6, BIBREF7, whose general structures are illustrated in Figure FIGREF4. These models can predict sentiment label and summary simultaneously. However, they do not encode summaries explicitly during test time, which makes their performance be limited to some extent. Method In this section, we introduce our proposed model in details. We first give the problem formulation, followed by an overview of the proposed model, and explain each layer of our model in details, before finally giving the loss function and training methods. Method ::: Problem Formulation The input to our task is a pair $(X^w, X^s)$, where $X^w = x^w_1, x^w_2, ..., x^w_n$ is a summary and $X^s = x^s_1, x^s_2,...,x^s_m$ is a review, the task is to predict the sentiment label $y \in [1, 5]$, where 1 denotes the most negative sentiment and 5 denotes the most positive sentiment. $n$ and $m$ denote the size of the review and summary in the number of words, respectively. The training set is $D=\lbrace (X^w_i, X^s_i, y_i)\rbrace |_{i=1}^M$ where $M$ is the total number of training examples. Method ::: Model Overview Figure FIGREF5 gives the architecture of the proposed model, which consists of three modules: a summary encoder, a hierarchically-refined review encoder and an output layer. The summary encoder encodes the summary into a hidden state matrix. The review encoder consists of several layers for representing $\mathbf {x}^w$, each containing a sequence encoding sublayer and an attention inference sublayer. The sequence encoding sublayer encodes the review text as a word sequence. The attention inference layer acts as a key component, which takes the hidden states from both the original review and the summary as input calculating dot-product attention weights for original review under additional supervision from summary information. Multi-head attention BIBREF18 as well as residual connection are also adopted. The output layer predicts the potential sentiment label according to hidden states from the previous layer. 
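The output layer mentioned at the end of the overview above can be sketched as global average pooling over the refined review states followed by a linear classifier over the five rating classes, trained with cross-entropy. Mapping ratings 1 to 5 onto class indices 0 to 4 is an assumption of this sketch, and the learning rate simply mirrors the value reported later in the experimental settings.

```python
# Minimal sketch of the output layer and a cross-entropy training step:
# global average pooling + linear classifier over five rating classes.
import torch
import torch.nn as nn

class OutputLayer(nn.Module):
    def __init__(self, hidden, num_classes=5):
        super().__init__()
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, refined_states):          # (B, n, hidden)
        pooled = refined_states.mean(dim=1)     # global average pooling
        return self.fc(pooled)                  # unnormalized class scores

head = OutputLayer(hidden=256)
optimizer = torch.optim.Adam(head.parameters(), lr=3e-4)
logits = head(torch.randn(8, 40, 256))
gold_ratings = torch.randint(1, 6, (8,))        # ratings in {1, ..., 5}
loss = nn.CrossEntropyLoss()(logits, gold_ratings - 1)  # shift labels to 0..4
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))
```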
Method ::: Summary Encoder Input for the summary encoder is a sequence of summary word representations $\mathbf {x}^s = \mathbf {x}^s_1, \mathbf {x}^s_2, ..., \mathbf {x}^s_m = \lbrace emb(x_1^s), ..., emb(x_m^s)\rbrace $, where $emb$ denotes a word embedding lookup table. Word representations are fed into a standard BiLSTM. We adopt a standard LSTM formulation, where a sequence of hidden states $\mathbf {h}_t$ are calculated from a sequence of $\mathbf {x}_t$($t \in [1,...,m]$). A forward left-to-right LSTM layer and a backward right-to-left LSTM yield a sequence of forward hidden states $\lbrace {\stackrel{\rightarrow }{\mathbf {h}_1^s}},...,{\stackrel{\rightarrow }{\mathbf {h}_n^s}}\rbrace $ and a sequence of backward hidden states $\lbrace {\stackrel{\leftarrow }{\mathbf {h}_1^s}},...,{\stackrel{\leftarrow }{\mathbf {h}_n^s}}\rbrace $, respectively. The two hidden states are concatenated to form a final representation: We then apply an average-pooling operation over the hidden and take $\mathbf {h}^s = avg\_pooling(\mathbf {h}^s_1, \mathbf {h}^s_2,...,\mathbf {h}^s_n)$ as the final representation of summary text. Method ::: Hierarchically-Refined Review Encoder The hierarchically-refined review encoder consists of several review encoder layers, each of which is composed of a sequence encoding layer and an attention inference layer. Method ::: Hierarchically-Refined Review Encoder ::: Sequence Encoding Layer Given a review $\mathbf {x}^w = \lbrace emb(x_1^w),...,emb(x_n^w)\rbrace $, another BiLSTM is adopted (the same equation with different parameters compared to the one used in the summary encoder), deriving a sequence of review hidden states $\mathbf {H}^w=\lbrace \mathbf {h}^w_1, \mathbf {h}^w_2,...,\mathbf {h}^s_n \rbrace $. Method ::: Hierarchically-Refined Review Encoder ::: Attention Inference Layer In the attention inference layer, we model the dependencies between the original review and the summary with multi-head dot-product attention.Each head produces an attention matrix $\mathbf {\alpha } \in \mathbb {R}^{d_h \times 1}$ consisting of a set of similarity scores between the hidden state of each token of the review text and the summary representation. The hidden state outputs are calculated by where $\mathbf {W}_i^Q \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$, $\mathbf {W}_i^K \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$ and $\mathbf {W}_i^V \in \mathbb {R}^{d_{h} \times \frac{d_{h}}{k}}$ are model parameters. $Q$, $K$ and $V$ represent Query, Key and Value, respectively. $k$ is the number of parallel heads and $i \in [1,k]$ indicates which head is being processed. Following BIBREF18, we adopt a residual connection around each attention inference layer, followed by layer normalization BIBREF19 : $\mathbf {H}$ is then fed to the subsequent sequence encoding layer as input, if any. According to the equations of standard LSTM and Equation DISPLAY_FORM13, tokens of the original review that are the most relevant to the summary are focused on more by consulting summary representation. The hidden states $\mathbf {H}^{w,s}$ are thus a representation matrix of the review text that encompass key features of summary representation. Multi-head attention mechanism ensures that multi-faced semantic dependency features can be captured during the process, which is beneficial for scenarios where several key points exist in one review. 
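To complement the description above, the sketch below shows the hierarchical refinement loop itself: each review encoder layer re-encodes the review with a BiLSTM and then consults the summary states through attention with a residual connection and layer normalization. Single-head dot-product attention and the assumption that inputs are already projected to the hidden size are simplifications of this sketch, not the paper's exact formulation.

```python
# Minimal sketch of the hierarchically-refined review encoder: a stack of
# [sequence encoding sublayer + attention inference sublayer] blocks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReviewEncoderLayer(nn.Module):
    def __init__(self, hidden):
        super().__init__()
        self.bilstm = nn.LSTM(hidden, hidden // 2, batch_first=True, bidirectional=True)
        self.norm = nn.LayerNorm(hidden)

    def forward(self, review_states, summary_states):      # (B, n, H), (B, m, H)
        seq, _ = self.bilstm(review_states)                 # sequence encoding sublayer
        scores = seq @ summary_states.transpose(1, 2)       # (B, n, m) similarity scores
        att = F.softmax(scores / seq.size(-1) ** 0.5, dim=-1)
        context = att @ summary_states                      # (B, n, H)
        return self.norm(seq + context)                     # attention inference sublayer

class HierarchicallyRefinedEncoder(nn.Module):
    def __init__(self, hidden, num_layers=2):
        super().__init__()
        self.layers = nn.ModuleList([ReviewEncoderLayer(hidden) for _ in range(num_layers)])

    def forward(self, review_states, summary_states):
        for layer in self.layers:                           # refine layer by layer
            review_states = layer(review_states, summary_states)
        return review_states

encoder = HierarchicallyRefinedEncoder(hidden=256, num_layers=2)
out = encoder(torch.randn(2, 40, 256), torch.randn(2, 12, 256))
print(out.shape)   # torch.Size([2, 40, 256])
```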
Note also that our design of the review encoding part of the hierarchically-refined attention network is similar to the Transformer architecture in the use of multi-head attention, residual connection and layer normalization BIBREF18. However, our experiments show that bi-directional LSTM works better compared to self-attention network as a basic layer structure. This may result from the fact that Transformer requires a larger amount of training data for the most effectiveness. Method ::: Output Layer Finally, global average pooling is applied after the previous layer, and then followed by a classifier layer: where $\hat{y}$ is the predicted sentiment label; $\mathbf {W}$ and $\mathbf {b}$ are parameters to be learned. Method ::: Training Given a dataset $D={\lbrace (X^w_t,X^s_t,y_t)\rbrace }|^{|T|}_{t=1}$, our model can be trained by minimizing the cross-entropy loss between where $\mathbf {p}^{y_t}$ denotes the value of the label in $\mathbf {p}$ that corresponds to $y_t$. Experiments We compare our model with several strong baselines and previous state-of-the-art methods, investigating its main effects. Experiments ::: Datasets We empirically compare different methods using Amazon SNAP Review Dataset BIBREF20, which is a part of Stanford Network Analysis Project. The raw dataset consists of around 34 millions Amazon reviews in different domains, such as books, games, sports and movies. Each review mainly contains a product ID, a piece of user information, a plain text review, a review summary and an overall sentiment rating which ranges from 1 to 5. The statistics of our adopted dataset is shown in Table TABREF20. For fair comparison with previous work, we adopt the same partitions used by previous work BIBREF6, BIBREF7, which is, for each domain, the first 1000 samples are taken as the development set, the following 1000 samples as the test set, and the rest as the training set. Experiments ::: Experimental Settings We use GloVe BIBREF22 300-dimensional embeddings as pretrained word vectors. A LSTM hidden size of 256 and four heads for multi-head attention mechanism are adopted. We use Adam BIBREF23 to optimize our model, with an initial learning rate of 0.0003, a decay rate of 0.97, momentum parameters $\beta _1 = 0.9$, $\beta _2 = 0.999$, and $\epsilon = 1 \times 10^{-8}$. The dropout rate is set depending on the size of each dataset, which is 0.5 for both Toys & Games and Sports & Outdoors and 0.2 for Movies & TV. We conduct experiments with both golden summaries and generated summaries. For generating automatic-decoded summaries, we train a pointer-generator network (PG-Net) with coverage mechanism BIBREF9, which is a specially designed sequence-to-sequence attention-based model that can generate the summary by copying words from the text document or generating words from a fixed vocabulary set at the same time. We generally follow the experimental settings in the original paper except for some minor adjustments specially made for our datasets. Noted that in our work PG-Net can be replaced by any other summarization model. Experiments ::: Baselines ::: HSSC @!START@BIBREF6@!END@. This model adopts encoder parameter sharing for jointly sentiment classification and summarization. It predicts the sentiment label using a highway layer, concatenating the hidden state in summary decoder and the original text representation in encoder. Experiments ::: Baselines ::: SAHSSC @!START@BIBREF7@!END@. 
This work also adopts encoder parameter sharing for jointly sentiment classification and summarization. They use two separate BiLSTMs with self-attention mechanism for generating review and summary representations. Experiments ::: Baselines ::: BiLSTM+Pooling. For this baseline, we use a BiLSTM with hidden sizes of 256 in both directions, and average pooling across all hidden states to form the representation. This method serves as a naive baseline for making use of both review and summary in sentiment classification. It can also be used to compare the effectiveness of the review itself, the summary itself and the combination of both when used as inputs to the problem. Experiments ::: Baselines ::: BiLSTM+Self-attention @!START@BIBREF13@!END@. This baseline uses a BiLSTM with hidden size of 256 in both directions. On the top of BiLSTM, self-attention is used to provide a set of summation weight vectors for the final representation. This method is conceptually simple yet gives the state-of-the-art results for many classification and text matching tasks. Its main difference to our model lies in the fact that attention is performed only in the top hidden layer in this method, yet in every layer in ours. Experiments ::: Baselines ::: BiLSTM+Hard Attention To demonstrate the efficiency of our model structure, we also adopt hard attention BIBREF24 for comparison, which is supervised using an extractive summarization objective. In particular, words in the original review that match to the corresponding summary are treated as the summary in their original order. In the case of Figure FIGREF3, the extractive summaries for the review are “James Cameron's Titanic is easily the most overrated film in history”, which corresponds to the user-written summary “James Cameron's 1997 Titanic is easily the most overrated film in history!”. The model also calculates another loss between attention weights and extractive summary labels, so that the hard attention weights are trained to strictly follow the extractive summary. For baselines that adopt the separate encoder structure, we generally calculate the representations of review and summary separately with two encoders that hold their own parameters, and then concatenate the two representations alongside the hidden-size dimension. For the joint encoder baselines, we first concatenate the review and summary text, and then encode the concatenated text with one single encoder. Experiments ::: Development Experiments We use the Toys & Games development set to investigate different key configurations of our model. The results are shown in Table TABREF29. Experiments ::: Development Experiments ::: Self-attention Baseline We compare different numbers of BiLSTM layers and hidden sizes in BiLSTM self-attention. As can be seen, with more layers a stacked BiLSTM with larger hidden sizes does not give better results compared to a hidden size of 256 either. Experiments ::: Development Experiments ::: Hidden Size We see an evident improvement of our model when the hidden size increases from 128 to 256. However, the improvement becomes relatively small compared to a large increase in the number of parameters when the hidden size is further increased to 360. Therefore, we adopt 256 as the hidden size in our experiments. Experiments ::: Development Experiments ::: Number of Layers As Table TABREF29 shows, the accuracy increases when increasing layer numbers from 1 to 2. More layers do not increase the accuracy on development set. 
We thus set 2 as the number of review encoder layers in the experiments. The best performing model size is comparable to that of the BiLSTM self-attention, demonstrating that the number of parameters is not the key factor to models' performance. Experiments ::: Results Table TABREF34 and Table TABREF35 show the final results. Our model outperforms all the baseline models and the top-performing models with both generated summary and golden summary, for all the three datasets. In the scenario where golden summaries are used, BiLSTM+self-attention performs the best among all the baselines, which shows that attention is a useful way to integrate summary and review information. Hard-attention receives more supervision information compared with soft-attention, by supervision signals from extractive summaries. However, it underperforms the soft attention model, which indicates that the most salient words for making sentiment classification may not strictly overlap with extractive summaries. This justifies the importance of user written or automatic-generated summary. A comparison between models that use summary information and those that do not use summary information shows that the review summary is useful for sentiment classification. In addition, the same models work consistently better when the user written gold summary is used compared to a system generated summary, which is intuitively reasonable since the current state-of-the-art abstractive summarization models are far from perfect. Interestingly, as shown in the second section of the table, the gold summary itself does not lead to better sentiment accuracy compared with the review itself, which shows that summaries better serve as auxiliary information sources to review contents. With both gold summaries and automatic-generated summaries, our model gives better results as compared to BiLSTM+self-attention. The latter integrates information from reviews and summaries only in the top representation layer, which is also the standard practice in question answering BIBREF25 and machine translation BIBREF26 models. In contrast, our model integrates summary information into the review representation in each layer, thereby allowing the integrated representation to be hierarchically refined, leading to more abstract hidden states. Finally, the fact that with gold summary, our baseline and final models outperforms the state-of-the-art methods by jointly training shows the importance of making use of user written summaries when they are available. Even with system summary, out models still outperforms HSSC and SAHSSC, showing that our network is more effective than parameter sharing under the same setting without input summaries. Experiments ::: Results ::: Review Length Figure FIGREF37 consists of line graphs on the accuracy of BiLSTM+self-attention, BiLSTM+pooling and our model against the review length. As the review length increases, the performance of all models decreases. BiLSTM+self-attention does not outperform BiLSTM+pooling on long text. Our method gives better results compared to two baseline models for long reviews, demonstrating that our model is effective for capturing long-term dependency. This is likely because hierarchically-refined attention maintains the most salient information while ignoring the redundant parts of the original review text. Our model can thus be more robust when review has irrelevant sentimental words, which usually exists in larger reviews such as the example in Figure FIGREF3. 
The hierarchical architecture allows the lower layers to encode local information, while the higher layers can capture long-term dependency and thus better encode global information. Experiments ::: Results ::: Case Study Our model has a natural advantage of interpretability thanks to the use of attention inference layer. We visualize the hierarchically-refined attention of two samples from the test set of Toys & Games. We also visualize self-attention distribution for fair comparison. To make the visualizations clear and to avoid confusion, we choose to visualize the most salient parts, by rescaling all attention weights into an interval of $[0, 100]$ and adopting 50 as a threshold for attention visualization, showing only attention weights $\ge 50$. As shown in Figure FIGREF38, the example with generated summary has 5 stars as its golden rating score. The summary text is “fun for the whole new game in all ages ! ! ! fun ! ! !", which suggests that the game is (1) fun (from word “fun") and (2) not difficult to learn (from phrase “all ages"). It can be seen that both the self-attention model and the first layer of our model attend to the strongly positive phrase “quite fun", which is relevant to the word “fun" in the summary. In comparisons the second layer attends to the phrase “much easier", which is relevant to the phrase “in all ages" in the summary. This verifies our model's effectiveness of leveraging abstractive summary information. Figure FIGREF38 illustrates a 5-star-rating example with golden summary. The summary text is “Favorite Game to Teach to Newbies". As shown in the heatmap, self-attention can only attend to some general sentimental words, such as “hard", “fun", “immensely" and “most", which deviates from the main idea of the document text. In comparison, the first layer of our model attends to phrases like “easy to teach", which is a perfect match of the phrase “teach to newbies" in the summary. This shows that the shallow sequence inference layer can learn direct similarity matching information under the supervision of summarization. In addition, the second layer of our model attends to phrases including “would recommend this to anyone", which links to “easy to teach" and “Teach to Newbies", showing that the deeper sequence inference layer of our model can learn potential connections between the review and the summary. Conclusion We investigated a hierarchically-refined attention network for better sentiment prediction. Our model allows multi-interaction between summary and review representation in a hierarchical manner. Empirical results show that the proposed method outperforms all strong baselines and previous work and achieves new state-of-the-art performance on SNAP Amazon Review dataset.
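One possible way to render the thresholded heatmaps discussed in the case study above is sketched below; the rescaling to $[0, 100]$ and the threshold of 50 follow the text, while the token list, the toy weights and the plotting layout are illustrative assumptions.

```python
# Minimal sketch: render a thresholded attention heatmap for two layers.
import matplotlib.pyplot as plt
import numpy as np

tokens = ["easy", "to", "teach", "would", "recommend", "this", "to", "anyone"]
layer1 = np.array([0.30, 0.05, 0.28, 0.04, 0.10, 0.02, 0.05, 0.16])
layer2 = np.array([0.08, 0.02, 0.06, 0.20, 0.32, 0.04, 0.03, 0.25])

def rescale_and_threshold(weights, threshold=50.0):
    scaled = 100.0 * (weights - weights.min()) / (weights.max() - weights.min())
    return np.where(scaled >= threshold, scaled, 0.0)   # hide weights below 50

heat = np.vstack([rescale_and_threshold(layer1), rescale_and_threshold(layer2)])
plt.imshow(heat, cmap="Reds", aspect="auto")
plt.yticks([0, 1], ["layer 1", "layer 2"])
plt.xticks(range(len(tokens)), tokens, rotation=45, ha="right")
plt.colorbar(label="rescaled attention")
plt.tight_layout()
plt.savefig("attention_heatmap.png")
```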
SNAP (Stanford Network Analysis Project)
327e06e2ce09cf4c6cc521101d0aecfc745b1738
327e06e2ce09cf4c6cc521101d0aecfc745b1738_0
Q: What evaluation metrics did they look at? Text: Introducción Los investigadores en Procesamiento de Lenguaje Natural (PLN) durante mucho tiempo han utilizado corpus constituidos por documentos enciclopédicos (notablemente Wikipedia), periodísticos (periódicos o revistas) o especializados (documentos legales, científicos o técnicos) para el desarrollo y pruebas de sus modelos BIBREF0, BIBREF1, BIBREF2. La utilización y estudios de corpora literarios sistemáticamente han sido dejados a un lado por varias razones. En primer lugar, el nivel de discurso literario es más complejo que los otros géneros. En segundo lugar, a menudo, los documentos literarios hacen referencia a mundos o situaciones imaginarias o alegóricas, a diferencia de los otros géneros que describen sobre todo situaciones o hechos factuales. Estas y otras características presentes en los textos literarios, vuelven sumamente compleja la tarea de análisis automático de este tipo de textos. En este trabajo nos proponemos utilizar corpora literarios, a fin de generar realizaciones literarias (frases nuevas) no presentes en dichos corpora. La producción de textos literarios es el resultado de un proceso donde una persona hace uso de aptitudes creativas. Este proceso, denominado “proceso creativo”, ha sido analizado por BIBREF3, quien propone tres tipos básicos de creatividad: la primera, Creatividad Combinatoria (CCO), donde se fusionan elementos conocidos para la generación de nuevos elementos. La segunda, Creatividad Exploratoria (CE), donde la generación ocurre a partir de la observación o exploración. La tercera, Creatividad Transformacional (CT), donde los elementos generados son producto de alteraciones o experimentaciones aplicadas al dominio de la CE. Sin embargo, cuando se pretende automatizar el proceso creativo, la tarea debe ser adaptada a métodos formales que puedan ser realizados en un algoritmo. Este proceso automatizado da lugar a un nuevo concepto denominado Creatividad Computacional (CC), introducido por BIBREF4, quien retoma para ello la CT y la CE propuestas por BIBREF3. La definición de literatura no tiene un consenso universal, y muchas variantes de la definición pueden ser encontradas. En este trabajo optaremos por introducir una definición pragmática de frase literaria, que servirá para nuestros modelos y experimentos. Definición. Una frase literaria es una frase que se diferencia de las frases en lengua general, porque contiene elementos (nombres, verbos, adjetivos, adverbios) que son percibidos como elegantes o menos coloquiales que sus equivalentes en lengua general. En particular, proponemos crear artificialmente frases literarias utilizando modelos generativos y aproximaciones semánticas basados en corpus de lengua literaria. La combinación de esos modelos da lugar a una homosintaxis, es decir, la producción de texto nuevo a partir de formas de discurso de diversos autores. La homosintaxis no tiene el mismo contenido semántico, ni siquiera las mismas palabras, aunque guarda la misma estructura sintáctica. En este trabajo proponemos estudiar el problema de la generación de texto literario original en forma de frases aisladas, no a nivel de párrafos. La generación de párrafos puede ser objeto de trabajos futuros. Una evaluación de la calidad de las frases generadas por nuestro sistema será presentada. Este artículo está estructurado como sigue. En la Sección SECREF2 presentamos un estado del arte de la creatividad computacional. En la Sección SECREF3 describimos los corpus utilizados. 
Nuestros modelos son descritos en la Sección SECREF4. Los resultados y su interpretación se encuentran en la Sección SECREF5. Finalmente la Sección SECREF6 presenta algunas ideas de trabajos futuros antes de concluir. Trabajos previos La generación de texto es una tarea relativamente clásica, que ha sido estudiada en diversos trabajos. Por ejemplo, BIBREF5 presentan un modelo basado en cadenas de Markov para la generación de texto en idioma polaco. Los autores definen un conjunto de estados actuales y calculan la probabilidad de pasar al estado siguiente. La ecuación (DISPLAY_FORM1) calcula la probabilidad de pasar al estado $X_{i}$ a partir de $X_{j}$, Para ello, se utiliza una matriz de transición, la cual contiene las probabilidades de transición de un estado actual $X_i$ a los posibles estados futuros $X_{i+1}$. Cada estado puede estar definido por $n$-gramas de letras o de palabras. La tarea inicia en un estado $X_i$ dado por el usuario. Posteriormente, usando la matriz de transición, se calcula la probabilidad de pasar al estado siguiente $X_{i+1}$. En ese momento el estado predicho $X_{i+1}$ se convierte en el estado actual $X_i$, repitiendo este proceso hasta satisfacer una condición. Este método tiene un buen comportamiento al generar palabras de 4 o 5 letras. En polaco esta longitud corresponde a la longitud media de la mayor parte de las palabras BIBREF6. También hay trabajos que realizan análisis más profundos para generar no solamente palabras, sino párrafos completos. BIBREF7 presentan un algoritmo que genera automáticamente comentarios descriptivos para bloques de código (métodos) en Java. Para ello, se toma el nombre del método y se usa como la acción o idea central de la descripción a generar. Posteriormente se usan un conjunto de heurísticas, para seleccionar las líneas de código del método que puedan aportar mayor información, y se procesan para generar la descripción. La tarea consiste en construir sintagmas, a partir de la idea central dada por el nombre del método, y enriquecerlos con la información de los elementos extraídos. Por ejemplo, si hay un método removeWall(Wall x) y se encuentra la llamada al método removeWall(oldWall), la descripción generada podría ser: “Remove old Wall”. Obteniéndose la acción (verbo) y el objeto (sustantivo) directamente del nombre del método y el adjetivo a partir de la llamada. Estas ideas permiten a los autores la generación de comentarios extensos sin perder la coherencia y la gramaticalidad. También se encuentran trabajos de generación textual que se proponen como meta resultados con un valor más artístico. BIBREF8 presentan un conjunto de algoritmos para la generación de una guía narrativa basada en la idea de Creatividad Exploratoria BIBREF3. El modelo establece i/ un conjunto universal U de conceptos relevantes relacionados a un dominio; ii/ un modelo generador de texto; iii/ un subconjunto de conceptos S que pertenecen al conjunto universal U; y iv/ algoritmos encargados de establecer las relaciones entre U y S para generar nuevos conceptos. Estos nuevos conceptos serán posteriormente comparados con los conceptos ya existentes en U para verificar la coherencia y relación con la idea principal. Si los resultados son adecuados, estos nuevos conceptos se utilizan para dar continuación a la narrativa. Son diversos los trabajos que están orientados a la generación de una narrativa ficticia como cuentos o historias. BIBREF9 proponen un modelo de generación de texto narrativo a partir del análisis de entidades. 
These entities are words (verbs, nouns, or adjectives) within a text that are used to generate the next sentence. The model retrieves the entities from three main sources: the current sentence, the previous sentence, and the whole document (context), and processes them with a neural network to select the best ones according to various criteria. Using a set of heuristics, the generated sentences were analyzed to separate those expressing the same idea (paraphrases) from those whose entities were related but whose ideas were different. Generating literary text is a very different process from generating random text BIBREF10, BIBREF11, and it is not limited to a single general idea or concept either. A literary text is meant to be an elegant document that is pleasant to read, making use of literary figures and a vocabulary different from that of general language. This gives the work authenticity and defines the author's style. A literary text must also be distinguished from the rigid or stereotyped structures of the journalistic, encyclopedic, or scientific genres. BIBREF12 propose a model for poem generation based on two basic questions: what to say? and how to say it? The proposal starts from the selection of a set of sentences guided by a list of words given by the user. The sentences are processed by a neural network model BIBREF13 to build coherent combinations and formulate a context. This context is analyzed to identify its main elements and generate the lines of the poem, which in turn also become part of the context. The model was evaluated manually by 30 experts on a scale from 1 to 5, analyzing readability, coherence, and meaningfulness in 5-word sentences, obtaining a precision of 0.75. However, the coherence between sentences turned out to be very poor. BIBREF14, BIBREF15 propose a template-based poem generation model. The algorithm starts with a set of sentences related through keywords. The keywords are used to generate a context. The sentences are processed with the PEN system to obtain their grammatical information. This information is used to generate new grammatical templates and finally to build the lines of the poem, trying to maintain coherence and grammaticality. The SentiGAN model BIBREF16 aims to generate text with an emotional context. It is an update of the GAN (Generative Adversarial Net) model BIBREF17, which has produced encouraging results in text generation, although with certain quality and coherence problems. The semantic analysis of an input provided by the user is used to create the context. The main proposal of SentiGAN is to establish a fixed number of text generators, each of which must produce text related to a given emotion. The generators are trained under two schemes: i/ a series of linguistic elements that must be avoided when generating the text; and ii/ a set of elements related to the emotion tied to the generator. Through distance computations, heuristics, and probabilistic models, the generator creates a text as far as possible from the first scheme and as close as possible to the second. There are also works with a shorter scope but greater precision.
BIBREF18 propose the evaluation of a dataset with a neural-network-based model for the generation of multi-word subsets. The same kind of analysis is considered in BIBREF19, which seeks to establish or detect the hypernym-hyponym relation with the help of the Deep Learning model Word2vec BIBREF20. The proposal of BIBREF19 reports a precision of 0.70 when evaluated on a manually labeled corpus. Literature is an artistic activity that demands important creative abilities and has attracted the attention of scientists for some time. BIBREF4 presents an interesting state of the art mentioning some works that took a first, superficial approach to literary works. For example, the “Through the park” model BIBREF21 is able to generate historical narrations using ellipsis. This technique is used to manipulate, among other things, the rhythm of the narration. The works “About So Many Things” BIBREF22 and “Taroko Gorge” BIBREF23 show automatically generated texts. The first generates 4-line stanzas closely related to each other, achieved through a grammatical analysis that establishes connections between entities in different lines. The second shows some short automatically generated poems with a structure more complex than that of the stanzas. The drawback of both approaches is the use of an inflexible structure, which produces repetitive texts with limited grammaticality. The MEXICA project models the collaborative generation of narratives BIBREF4. Its purpose is the generation of complete narratives using works from the pre-Columbian era. MEXICA generates narratives by simulating the E-R (Engaged and Reflexive) creative process BIBREF24. This process is described as the action in which the author brings to mind a set of ideas and contexts and establishes a coherent connection between them (E). The author then reflects on the established connections and evaluates the final result to decide whether it really satisfies what was expected (R). The process iterates until the author considers it finished. Corpora used ::: Corpus 5KL This corpus was built from approximately 5,000 documents (mostly books) in Spanish. The original documents, in heterogeneous formats, were processed to create a single document encoded in utf8. Sentences were segmented automatically, using a PERL 5.0 program and regular expressions, to obtain one sentence per line. The characteristics of the 5KL corpus are shown in Table TABREF4. This corpus is used for training the deep learning models (Deep Learning, Section SECREF4). The 5KL literary corpus has the advantage of being very large and suitable for machine learning. However, it has the disadvantage that not all of its sentences are necessarily “literary sentences”. Many of them are general-language sentences: these sentences often give fluidity to the reading and provide the necessary links to the ideas expressed in the literary sentences. Another disadvantage of this corpus is the noise it contains. The segmentation process can produce errors in sentence boundary detection. Page numbers, chapters, sections, or indexes also produce errors.
No manual verification process was carried out, so undesirable information is sometimes introduced: copyrights, edition data, and so on. These are, however, the conditions presented by a real literary corpus. Corpora used ::: Corpus 8KF A heterogeneous corpus of almost 8,000 literary sentences was built manually from poems, speeches, quotations, tales, and other works. General-language sentences were carefully avoided, as were sentences that are too short ($N \le 3$ words) or too long ($N \ge 30$ words). The vocabulary used is complex and aesthetic, and certain literary figures such as rhyme, anaphora, and metaphor can be observed in these sentences. The characteristics of the 8KF corpus are shown in Table TABREF6. This corpus was used mainly in the two generative models: the model based on Markov chains (Section SECREF13) and the model based on Canned Text generation (Section SECREF15). Proposed models In this work we propose three hybrid models (combinations of classic generative models and semantic approximations) for the production of literary sentences. We adapted two generative models, using shallow parsing and a Deep Learning model BIBREF25, combined with three semantic approximation models that we developed. In a first phase, the generative models retrieve the grammatical information of each word in the 8KF corpus (see Section SECREF3), in the form of POS (Part of Speech) tags, through a morphosyntactic analysis. We use Freeling BIBREF26, which supports linguistic analysis in several languages. For example, for the word “Profesor” Freeling generates the POS tag [NCMS000]. The first letter indicates a noun (Noun), the second a common noun (Common); the third indicates masculine gender (Male) and the fourth gives number information (Singular). The last 3 characters give detailed information about the semantic field, named entities, etc. In our case we use only the first 4 levels of the tags (a short decoding sketch is given below). With the results of the morphosyntactic analysis, an output is generated that we call either an Empty grammatical structure (EGV), composed exclusively of a sequence of POS tags, or a Partially empty grammatical structure (EGP), composed of POS tags and function words (articles, pronouns, conjunctions, etc.). In the second phase, the POS tags (in the EGV and the EGP) are replaced by suitable vocabulary using certain semantic approximations. The production of a sentence $f(Q,N)$ is guided by two parameters given by the user: a context represented by a term $Q$ (or query) and a length $3 \le N \le 15$. The 5KL and 8KF corpora are used in several phases of the production of the sentences $f$. Model 1 is composed of: i/ a stochastic generative model based on Markov chains for selecting the next POS tag using the Viterbi algorithm; and ii/ a deep learning model (Word2vec) to retrieve the vocabulary that replaces the POS tag sequence. Model 2 is a combination of: i/ the Canned Text generative model; and ii/ a Word2vec model, with a distance computation between various manually built vocabularies. Model 3 uses: i/ Canned Text generation; and ii/ a geometric interpretation of deep learning.
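As a minimal sketch of the tag decoding just described: the snippet below keeps only the four levels mentioned in the text (category, type, gender, number) for a Freeling-style EAGLES tag such as [NCMS000]. The label dictionaries are partial and purely illustrative; they are not Freeling output nor part of its API.

```python
# Decode the first four levels of an EAGLES-style POS tag (illustrative mapping only).
CATEGORY = {"N": "noun", "V": "verb", "A": "adjective", "D": "determiner"}
TYPE = {"C": "common", "P": "proper", "M": "main"}
GENDER = {"M": "masculine", "F": "feminine", "C": "common"}
NUMBER = {"S": "singular", "P": "plural", "N": "invariable"}

def decode_tag(tag: str) -> dict:
    """Return the first four levels of a Freeling-like tag; the remaining characters are ignored."""
    return {
        "category": CATEGORY.get(tag[0], tag[0]),
        "type": TYPE.get(tag[1], tag[1]),
        "gender": GENDER.get(tag[2], tag[2]),
        "number": NUMBER.get(tag[3], tag[3]),
    }

print(decode_tag("NCMS000"))
# {'category': 'noun', 'type': 'common', 'gender': 'masculine', 'number': 'singular'}
```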
This interpretation is based on an iterative information retrieval (IR) search, which simultaneously moves away from the original semantics and closer to the user's query $Q$. Proposed models ::: Stochastic generative model using Markov chains This generative model, which we call the Markov Model, is based on the Viterbi algorithm and Markov chains BIBREF27, where the POS tag with the maximum probability of occurrence is selected and appended to the end of the current sequence. We use the 8KF corpus of literary sentences (see Section SECREF5), conveniently filtered to remove undesirable tokens: numbers, acronyms, times, and dates. The filtered corpus was analyzed using Freeling, which receives a text string as input and returns the text with a POS tag for each word. The corpus is analyzed sentence by sentence, replacing each word by its respective POS tag. At the end of the analysis, a new corpus 8KPOS is obtained with $s = 7~679$ POS tag sequences, corresponding to the same number of sentences in the 8KF corpus. The sequences of the 8KPOS corpus serve as the training set for the Viterbi algorithm, which computes the transition probabilities used to generate Markov chains. The $s$ structures of the 8KPOS corpus processed with the Viterbi algorithm are represented in a transition matrix $P_{[s \times s]}$. $P$ is used to create new POS tag sequences that do not exist in the 8KPOS corpus, simulating a creative process. We proposed the Creativo-Markov algorithm, which describes this procedure. In this algorithm, $X_i$ represents the state of one stage of the creation of a sentence at time $i$, corresponding to a sequence of POS tags. Following a Markov procedure, at time $i$ the next tag POS$_{i+1}$ is selected with maximum probability of occurrence, given the last tag POS$_i$ of the sequence $X_{i}$. The tag POS$_{i+1}$ is appended to the end of $X_{i}$ to generate the state $X_{i+1}$. $P(X_{i+1}=Y|X_{i}=Z)$ is the transition probability from one state to another, obtained with the Viterbi algorithm. The transitions are repeated until a desired length is reached. The result is an EGV, where each empty slot represents a POS tag that will be replaced by a word in the final stage of generating the new sentence. The replacement is performed using a deep learning model (Section SECREF19). The general architecture of this model is shown in Figure FIGREF14. Proposed models ::: Generative model based on Canned Text The Creativo-Markov algorithm of the Markov Model manages to reproduce linguistic patterns (POS sequences) detected in the 8KPOS corpus, but only of short length. When we tried to extend the length of the sentences to $N>6$ words, it was not possible to maintain coherence and readability (as will be seen in Section SECREF19). We therefore decided to use text generation methods guided by fixed morphosyntactic structures: Canned Text. BIBREF28 argue that the use of these structures saves parsing time and allows one to focus directly on the vocabulary. The Canned Text technique has also been used in several works with specific objectives. BIBREF29, BIBREF30 developed models for the generation of dialogues and simple sentences.
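The core loop of the Creativo-Markov procedure can be sketched as follows, under the simplifying assumption that the transition probabilities are bigram probabilities between consecutive POS tags estimated from the 8KPOS sequences (the paper stores them in a transition matrix computed with the Viterbi algorithm); the function and variable names are ours.

```python
from collections import Counter, defaultdict

def transition_probabilities(pos_sequences):
    """Estimate P(next_tag | current_tag) from a list of POS-tag sequences (e.g. 8KPOS)."""
    counts = defaultdict(Counter)
    for seq in pos_sequences:
        for current, nxt in zip(seq, seq[1:]):
            counts[current][nxt] += 1
    return {tag: {nxt: c / sum(ctr.values()) for nxt, c in ctr.items()}
            for tag, ctr in counts.items()}

def creativo_markov(P, start_tag, length):
    """Grow an EGV: repeatedly append the most probable next tag given the last one."""
    egv = [start_tag]
    while len(egv) < length and egv[-1] in P:
        nxt = max(P[egv[-1]], key=P[egv[-1]].get)  # max-probability transition
        egv.append(nxt)
    return egv  # sequence of POS tags to be filled with words in a later stage

# Toy usage with made-up tag sequences:
P = transition_probabilities([["DA", "NC", "VM", "DA", "NC"], ["NC", "VM", "NC"]])
print(creativo_markov(P, "DA", 5))  # e.g. ['DA', 'NC', 'VM', 'DA', 'NC']
```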
This technique is called “Template-based Generation” or, more intuitively, Canned Text. We decided to use canned text for text generation with a corpus of templates built from the 8KF corpus (Section SECREF3). This corpus contains flexible grammatical structures that can be manipulated to create new sentences. The templates can be selected randomly or through heuristics, according to a predefined objective. A template is built from the words of a sentence $f$, where only the content words of the verb, noun, or adjective classes $\lbrace V, S, A \rbrace $ are replaced by their respective POS tags. The other words, in particular the function words, are kept. This produces a partially empty grammatical structure, EGP. The tags can later be replaced by words (terms) related to the context defined by the user's query $Q$. The process starts with the random selection of an original sentence $f_{o} \in $ corpus 8KF of length $|f_{o}|=N$. $f_{o}$ is analyzed with Freeling to identify its phrases. The elements $\lbrace V, S, A \rbrace $ of the phrases of $f_{o}$ are replaced by their respective POS tags. These elements are the ones that contribute the most information in any text, regardless of its length or genre BIBREF31. Our hypothesis is that by changing only these elements, we simulate the generation of sentences by homosyntax: different semantics, same structure. The output of this process is a partially empty hybrid structure (EGP), with function words that provide grammatical support and the POS tags. The general architecture of this model is illustrated in Figure FIGREF18: the filled boxes represent function words and the empty boxes POS tags to be replaced. Proposed models ::: Model 1: Markov and deep learning The generative models produce empty (EGV) or partially empty (EGP) grammatical structures that can be manipulated to generate new sentences $f(Q,N)$. The idea is that the sentences $f$ are generated by homosyntax. In this section, we propose a semantic approximation model that uses the Word2vec algorithm (based on deep learning), combined with the Markov generative model described in Section SECREF13. The process is as follows. The 5KL corpus is pre-processed to standardize the text format, removing characters that are not important for the semantic analysis: punctuation, numbers, etc. This stage prepares the training data for the deep learning algorithm, which uses a vector representation of the 5KL corpus. For the deep learning we use the Gensim library, the Python implementation of Word2vec. With this algorithm we obtain a set of word embeddings associated with a context defined by a query $Q$. Word2vec receives a term $Q$ and returns a lexicon $L(Q)=(w_1,w_2,...,w_m)$ representing a set of $m$ words semantically close to $Q$. Formally, Word2vec: $Q \rightarrow L(Q)$. The next step consists of processing the EGV produced by the Markov model. The POS tags are identified and classified as functional POS$_{\Phi }$ (corresponding to punctuation and function words) and content POS$_\lambda \in \lbrace V, S, A \rbrace $ (verbs, nouns, adjectives).
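A minimal sketch of how $L(Q)$ can be obtained with Gensim is shown below. The corpus path, preprocessing, and hyperparameters are placeholders (the paper does not report them), and the parameter names follow recent Gensim releases.

```python
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

# Assumed input: one pre-processed 5KL sentence per line (the file name is a placeholder).
with open("5kl_sentences.txt", encoding="utf8") as fh:
    sentences = [simple_preprocess(line) for line in fh]

# Illustrative hyperparameters only.
model = Word2Vec(sentences, vector_size=100, window=5, min_count=5, workers=4)

def L(query, m=20):
    """Return the m words semantically closest to the query, i.e. L(Q)."""
    return [word for word, _ in model.wv.most_similar(query, topn=m)]

print(L("amor", m=10))
```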
The POS$_\Phi $ tags are replaced by words obtained from linguistic resources (dictionaries) built with the help of Freeling. The dictionaries consist of entries of pairs: a POS$_\Phi $ tag and a list of associated words and signs, formally POS$_\Phi $ $\rightarrow $ $l$(POS$_\Phi )=(l_1,l_2,...,l_j)$. Each POS$_\Phi $ is randomly replaced by a word from $l$ belonging to the same grammatical class. The POS$_\lambda $ tags are replaced by the words produced by Word2vec, $L(Q)$. If none of the words in $L(Q)$ has the syntactic form required by POS$_\lambda $, we use the PATTERN library to perform conjugations or gender and/or number conversions and correctly replace POS$_\lambda $. If the set of words $L(Q)$ does not contain any content word that is suitable, or that can be manipulated with the PATTERN library, to replace the POS$_\lambda $ tags, another word $w_i \in L(Q)$ is taken, the one closest to $Q$ (according to the distance produced by Word2vec). A new query $Q*=w_i$ is defined and used to generate a new set of words $L(Q*)$. This procedure is repeated until $L(Q*)$ contains a word that can replace the POS$_{\lambda }$ in question. The result of this procedure is a new sentence $f$ that does not exist in the 5KL and 8KF corpora. Figure FIGREF23 shows the described process. Proposed models ::: Model 2: Canned Text, deep learning, and morphosyntactic analysis In this model we propose a combination of the Canned Text model (Section SECREF15) and a deep learning algorithm, Word2vec, trained on the 5KL corpus. The goal is to eliminate the iterations of Model 1, which are needed whenever the POS tags cannot be replaced with the lexicon $L(Q)$. A morphosyntactic analysis of the 5KL corpus is performed using Freeling, and the POS tags are used to create sets of words sharing the same grammatical information (identical POS tags). An Associative Table (TA) is generated as a result of this process. The TA consists of $k$ entries of pairs: a POS$_k$ tag and a list of associated words. Formally, POS$_k \rightarrow V_k =\lbrace v_{k,1},v_{k,2},...,v_{k,i}\rbrace $. Model 2 is executed only once for each POS$_k$ tag. The EGP is not replaced completely: the function words and punctuation marks are kept. To generate a new sentence, each tag POS$_k \in $ EGP, $k=1,2,...$, is replaced by a suitable word. For each POS$_k$ tag, the lexicon $V_k$ is retrieved from the TA. The vocabulary is processed by the Word2vec algorithm, which computes the proximity value (distance) between each word of the vocabulary $v_{k,i}$ and the user's query $Q$, $dist(Q,v_{k,i})$. The vocabulary $V_k$ is then sorted in descending order according to the proximity values $dist(Q,v_{k,i})$, and one of the first three elements is chosen at random to replace the tag POS$_k$ of the EGP. The result is a new sentence $f_2(Q,N)$ that does not exist in the 5KL and 8KF corpora. The process is illustrated in Figure FIGREF26. Proposed models ::: Model 3: Canned Text, deep learning, and geometric interpretation Model 3 reuses several of the previous resources: the Word2vec algorithm, the Associative Table TA, and the partially empty grammatical structure (EGP) obtained from the Canned Text model.
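The Model 2 replacement step can be sketched as follows, assuming the Associative Table is a Python dictionary mapping each POS tag to its word list and reusing the Word2vec model from the previous sketch; function and variable names are ours.

```python
import random

def replace_tag(pos_tag, ta, query, model, top_k=3):
    """Pick a word for one POS tag of the EGP: rank the TA vocabulary by
    Word2vec proximity to the query and choose randomly among the top_k."""
    vocab = [v for v in ta[pos_tag] if v in model.wv]  # keep only in-vocabulary words
    ranked = sorted(vocab, key=lambda v: model.wv.similarity(query, v), reverse=True)
    return random.choice(ranked[:top_k])

def fill_egp(egp, ta, query, model):
    """Replace every POS tag of the EGP; function words (plain tokens) are kept as-is."""
    return " ".join(replace_tag(tok, ta, query, model) if tok in ta else tok
                    for tok in egp)

# Illustrative partially empty structure: function words kept, POS tags to be filled.
# egp = ["la", "NCFS", "de", "el", "NCMS", "VMIP"]
# print(fill_egp(egp, ta, "amor", model))
```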
The model uses vector distances to determine the most suitable words to substitute the POS tags of an EGP and thus generate a new sentence. For each tag POS$_k$, $k=1,2,...$ $\in $ EGP that we want to substitute, we use the algorithm described below. A vector is built for each of the following three words: $o$: the $k$-th word of the sentence $f_{o}$ (Section SECREF15), corresponding to the tag POS$_k$. This word recreates a context from which the new sentence must move away, avoiding the production of a paraphrase. $Q$: the word defining the query provided by the user. $w$: a candidate word that could replace POS$_k$, $w \in V_k$. The vocabulary has a size of $|V_k| = m$ words and is retrieved from the TA entry corresponding to POS$_k$. The 10 words $o_i$ closest to $o$, the 10 words $Q_i$ closest to $Q$, and the 10 words $w_i$ closest to $w$ (in this order, obtained with Word2vec) are concatenated and represented in a symbolic vector $\vec{U}$ of 30 dimensions. The number of dimensions was set to 30 empirically, as a reasonable compromise between lexical diversity and processing time. The vector $\vec{U}$ can thus be written as $\vec{U}=(u_1, u_2, ..., u_{30})$, where each element $u_j, j=1,...,10$, represents a word close to $o$; $u_j, j=11,...,20$, represents a word close to $Q$; and $u_j, j=21,...,30$, is a word close to $w$. $\vec{U}$ can be rewritten as in equation (DISPLAY_FORM32). From $o$, $Q$, and $w$ we respectively build three numerical 30-dimensional vectors $\vec{X}$, $\vec{Q}$, and $\vec{W}$, where the values of $\vec{X}$ are obtained by taking the distance between the word $o$ and each word $u_j \in \vec{U}, j=1,...,30$. The distance $x_j=dist(o,u_j)$ is provided by Word2vec, with $x_j \in [0,1]$. Obviously the word $o$ will be closer to the first 10 words $u_j$ than to the remaining ones. A similar process yields the values of $\vec{Q}$ and $\vec{W}$ from $Q$ and $w$, respectively. In these cases, the query $Q$ will be closest to the words $u_j$ at positions $j=11,...,20$, and the candidate word $w$ will be closest to the words $u_j$ at positions $j=21,...,30$. Next, the cosine similarity between $\vec{Q}$ and $\vec{W}$ (equation DISPLAY_FORM34) and between $\vec{X}$ and $\vec{W}$ (equation DISPLAY_FORM35) are computed. These values are also normalized to [0,1]. The process is repeated for all the words $w$ of the lexicon $V_k$. This generates another set of vectors $\vec{X}, \vec{Q}$, and $\vec{W}$ for which the similarities must be computed again. In the end, $m$ similarity values $\theta _i$ and $\beta _i$, $i= 1,..., m$, are obtained, and the averages $\langle \theta \rangle $ and $\langle \beta \rangle $ are computed. The normalized ratio $\left( \frac{\langle \theta \rangle }{\theta _i} \right)$ indicates how large the similarity $\theta _i$ is with respect to the average $\langle \theta \rangle $ (a maximization-type interpretation); that is, how close the candidate word $w$ is to the query $Q$. The normalized ratio $\left( \frac{\beta _i}{\langle \beta \rangle } \right)$ indicates how small the similarity $\beta _i$ is with respect to $\langle \beta \rangle $ (a minimization-type interpretation); that is, how far the candidate word $w$ is from the word $o$ of $f_{o}$.
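The construction above can be sketched as follows for a single candidate word $w$. The helper names are ours, the Word2vec model is assumed to be the one trained on 5KL, and the similarities are taken directly from Gensim (so they live in [-1,1] rather than the [0,1] range assumed in the text).

```python
import numpy as np

def neighbours(model, word, n=10):
    """The n words closest to `word` according to Word2vec."""
    return [x for x, _ in model.wv.most_similar(word, topn=n)]

def theta_beta(model, o, Q, w):
    """Return (theta, beta) for one candidate w: theta = cos(Qvec, Wvec), beta = cos(Xvec, Wvec)."""
    U = neighbours(model, o) + neighbours(model, Q) + neighbours(model, w)  # 30 words
    X = np.array([model.wv.similarity(o, u) for u in U])   # proximity of o to each u_j
    Qv = np.array([model.wv.similarity(Q, u) for u in U])  # proximity of Q to each u_j
    W = np.array([model.wv.similarity(w, u) for u in U])   # proximity of w to each u_j
    cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cos(Qv, W), cos(X, W)

# Repeating theta_beta over every w in V_k yields the m pairs (theta_i, beta_i)
# from which the averages and the per-candidate score are computed.
```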
These ratios are obtained for each pair $(\theta _i, \beta _i)$ and combined (minimization-maximization) to compute a score $S_i$, according to equation (DISPLAY_FORM36). The higher the value of $S_i$, the better it meets our objectives: getting closer to the query and moving away from the original semantics. Finally, we sort the list of $S_i$ values in decreasing order and randomly choose, among the first 3, the candidate word $w$ that will replace the tag POS$_k$ in question. The result is a new sentence $f_3(Q,N)$ that does not exist in the corpora used to build the model. Figure FIGREF37 shows a representation of the described model. Experiments and results Given the specificity of our experiments (language, available corpora, homosyntax), a direct comparison with other methods is not possible. Nor did we consider using a random baseline, because its results would lack homosyntax and it would be extremely easy to outperform them. That said, Model 1 could be considered our own baseline. Experiments and results ::: Results We now present a manual evaluation protocol for the results obtained. The experiment consisted of generating 15 sentences with each of the three proposed models. For each model, three queries were considered, $Q=$ {AMOR, GUERRA, SOL} (love, war, sun), generating 5 sentences with each one. The 15 sentences were shuffled and regrouped by query before being presented to the evaluators. For the evaluation, 7 people were asked to read the 45 sentences carefully (15 sentences per query). All the evaluators have university studies and are native Spanish speakers. They were asked to rate, on a scale of [0,1,2] (where 0=bad, 1=acceptable, and 2=correct), the following criteria: Grammaticality: spelling, correct conjugations, gender and number agreement. Coherence: readability, perception of a general idea. Context: relation of the sentence to the query. The results of the evaluation are presented in Table TABREF42, in the form of averages normalized to [0,1] and their standard deviation $\sigma $. The sentences generated by the proposed models have particular characteristics. Model 1 generally produces sentences whose context is closely related to the user's query, but they often lack coherence and grammaticality. This model has the highest value for context, but also the highest standard deviation. It can be inferred that there is some disagreement among the evaluators. The high context values are explained by the degree of freedom of the EGV generated by the Markov model: the EGV allows all the elements of the structure to be substituted by a lexicon guided solely by the results of the Word2vec algorithm. Model 2 generates reasonably coherent and grammatically correct sentences, but sometimes the context is closer to the original sentence than to the query. This can be interpreted as an elementary paraphrase, which is not what we want. Finally, Model 3 generates sentences that are coherent, grammatically correct, and better related to the query than those of Model 2. This is achieved by following an intuition opposite to paraphrasing: we seek to preserve the syntactic structure of the original sentence while generating a completely different semantics.
On the other hand, the smallest dispersion is observed for Model 1; that is, there is strong agreement among the evaluators' perceptions for this criterion. Conclusion and future work In this article we have presented three models for producing literary sentences. The generation of this textual genre requires specific systems that must take into account style, syntax, and a semantics that does not necessarily respect the logic of documents from factual genres, such as journalistic, encyclopedic, or scientific texts. The results obtained are encouraging for Model 3, which uses Canned Text, deep learning, and an IR-type interpretation. Future work requires the implementation of modules to process multi-term user queries. We also plan to generate rhetorical sentences using the models proposed here, or others with a probabilistic approach BIBREF32. The models presented here can be enriched by integrating other components, such as personality traits and/or emotions BIBREF33, BIBREF34, BIBREF35, BIBREF36. Finally, a semi-automatic (and large-scale) evaluation protocol is also planned. Acknowledgments The authors thank Eric SanJuan for the ideas and the concept of homosyntax.
accuracy with standard deviation
40b9f502f15e955ba8615822e6fa08cb5fd29c81
40b9f502f15e955ba8615822e6fa08cb5fd29c81_0
Q: What datasets are used? Text: Introducción Los investigadores en Procesamiento de Lenguaje Natural (PLN) durante mucho tiempo han utilizado corpus constituidos por documentos enciclopédicos (notablemente Wikipedia), periodísticos (periódicos o revistas) o especializados (documentos legales, científicos o técnicos) para el desarrollo y pruebas de sus modelos BIBREF0, BIBREF1, BIBREF2. La utilización y estudios de corpora literarios sistemáticamente han sido dejados a un lado por varias razones. En primer lugar, el nivel de discurso literario es más complejo que los otros géneros. En segundo lugar, a menudo, los documentos literarios hacen referencia a mundos o situaciones imaginarias o alegóricas, a diferencia de los otros géneros que describen sobre todo situaciones o hechos factuales. Estas y otras características presentes en los textos literarios, vuelven sumamente compleja la tarea de análisis automático de este tipo de textos. En este trabajo nos proponemos utilizar corpora literarios, a fin de generar realizaciones literarias (frases nuevas) no presentes en dichos corpora. La producción de textos literarios es el resultado de un proceso donde una persona hace uso de aptitudes creativas. Este proceso, denominado “proceso creativo”, ha sido analizado por BIBREF3, quien propone tres tipos básicos de creatividad: la primera, Creatividad Combinatoria (CCO), donde se fusionan elementos conocidos para la generación de nuevos elementos. La segunda, Creatividad Exploratoria (CE), donde la generación ocurre a partir de la observación o exploración. La tercera, Creatividad Transformacional (CT), donde los elementos generados son producto de alteraciones o experimentaciones aplicadas al dominio de la CE. Sin embargo, cuando se pretende automatizar el proceso creativo, la tarea debe ser adaptada a métodos formales que puedan ser realizados en un algoritmo. Este proceso automatizado da lugar a un nuevo concepto denominado Creatividad Computacional (CC), introducido por BIBREF4, quien retoma para ello la CT y la CE propuestas por BIBREF3. La definición de literatura no tiene un consenso universal, y muchas variantes de la definición pueden ser encontradas. En este trabajo optaremos por introducir una definición pragmática de frase literaria, que servirá para nuestros modelos y experimentos. Definición. Una frase literaria es una frase que se diferencia de las frases en lengua general, porque contiene elementos (nombres, verbos, adjetivos, adverbios) que son percibidos como elegantes o menos coloquiales que sus equivalentes en lengua general. En particular, proponemos crear artificialmente frases literarias utilizando modelos generativos y aproximaciones semánticas basados en corpus de lengua literaria. La combinación de esos modelos da lugar a una homosintaxis, es decir, la producción de texto nuevo a partir de formas de discurso de diversos autores. La homosintaxis no tiene el mismo contenido semántico, ni siquiera las mismas palabras, aunque guarda la misma estructura sintáctica. En este trabajo proponemos estudiar el problema de la generación de texto literario original en forma de frases aisladas, no a nivel de párrafos. La generación de párrafos puede ser objeto de trabajos futuros. Una evaluación de la calidad de las frases generadas por nuestro sistema será presentada. Este artículo está estructurado como sigue. En la Sección SECREF2 presentamos un estado del arte de la creatividad computacional. En la Sección SECREF3 describimos los corpus utilizados. 
Nuestros modelos son descritos en la Sección SECREF4. Los resultados y su interpretación se encuentran en la Sección SECREF5. Finalmente la Sección SECREF6 presenta algunas ideas de trabajos futuros antes de concluir. Trabajos previos La generación de texto es una tarea relativamente clásica, que ha sido estudiada en diversos trabajos. Por ejemplo, BIBREF5 presentan un modelo basado en cadenas de Markov para la generación de texto en idioma polaco. Los autores definen un conjunto de estados actuales y calculan la probabilidad de pasar al estado siguiente. La ecuación (DISPLAY_FORM1) calcula la probabilidad de pasar al estado $X_{i}$ a partir de $X_{j}$, Para ello, se utiliza una matriz de transición, la cual contiene las probabilidades de transición de un estado actual $X_i$ a los posibles estados futuros $X_{i+1}$. Cada estado puede estar definido por $n$-gramas de letras o de palabras. La tarea inicia en un estado $X_i$ dado por el usuario. Posteriormente, usando la matriz de transición, se calcula la probabilidad de pasar al estado siguiente $X_{i+1}$. En ese momento el estado predicho $X_{i+1}$ se convierte en el estado actual $X_i$, repitiendo este proceso hasta satisfacer una condición. Este método tiene un buen comportamiento al generar palabras de 4 o 5 letras. En polaco esta longitud corresponde a la longitud media de la mayor parte de las palabras BIBREF6. También hay trabajos que realizan análisis más profundos para generar no solamente palabras, sino párrafos completos. BIBREF7 presentan un algoritmo que genera automáticamente comentarios descriptivos para bloques de código (métodos) en Java. Para ello, se toma el nombre del método y se usa como la acción o idea central de la descripción a generar. Posteriormente se usan un conjunto de heurísticas, para seleccionar las líneas de código del método que puedan aportar mayor información, y se procesan para generar la descripción. La tarea consiste en construir sintagmas, a partir de la idea central dada por el nombre del método, y enriquecerlos con la información de los elementos extraídos. Por ejemplo, si hay un método removeWall(Wall x) y se encuentra la llamada al método removeWall(oldWall), la descripción generada podría ser: “Remove old Wall”. Obteniéndose la acción (verbo) y el objeto (sustantivo) directamente del nombre del método y el adjetivo a partir de la llamada. Estas ideas permiten a los autores la generación de comentarios extensos sin perder la coherencia y la gramaticalidad. También se encuentran trabajos de generación textual que se proponen como meta resultados con un valor más artístico. BIBREF8 presentan un conjunto de algoritmos para la generación de una guía narrativa basada en la idea de Creatividad Exploratoria BIBREF3. El modelo establece i/ un conjunto universal U de conceptos relevantes relacionados a un dominio; ii/ un modelo generador de texto; iii/ un subconjunto de conceptos S que pertenecen al conjunto universal U; y iv/ algoritmos encargados de establecer las relaciones entre U y S para generar nuevos conceptos. Estos nuevos conceptos serán posteriormente comparados con los conceptos ya existentes en U para verificar la coherencia y relación con la idea principal. Si los resultados son adecuados, estos nuevos conceptos se utilizan para dar continuación a la narrativa. Son diversos los trabajos que están orientados a la generación de una narrativa ficticia como cuentos o historias. BIBREF9 proponen un modelo de generación de texto narrativo a partir del análisis de entidades. 
Dichas entidades son palabras (verbos, sustantivos o adjetivos) dentro de un texto que serán usados para generar la frase siguiente. El modelo recupera las entidades obtenidas de tres fuentes principales: la frase actual, la frase previa y el documento completo (contexto), y las procesa con una red neuronal para seleccionar las mejores de acuerdo a diversos criterios. A partir de un conjunto de heurísticas, se analizaron las frases generadas para separar aquellas que expresaran una misma idea (paráfrasis), de aquellas que tuvieran una relación entre sus entidades pero con ideas diferentes. La generación de texto literario es un proceso muy diferente a la generación de texto aleatorio BIBREF10, BIBREF11 y tampoco se limita a una idea o concepto general. El texto literario está destinado a ser un documento elegante y agradable a la lectura, haciendo uso de figuras literarias y un vocabulario distinto al empleado en la lengua general. Esto da a la obra una autenticidad y define el estilo del autor. El texto literario también debe diferenciarse de las estructuras rígidas o estereotipadas de los géneros periodístico, enciclopédico o científico. BIBREF12 proponen un modelo para la generación de poemas y se basa en dos premisas básicas: ¿qué decir? y ¿cómo decirlo? La propuesta parte de la selección de un conjunto de frases tomando como guía una lista de palabras dadas por el usuario. Las frases son procesadas por un modelo de red neuronal BIBREF13, para construir combinaciones coherentes y formular un contexto. Este contexto es analizado para identificar sus principales elementos y generar las líneas del poema, que también pasarán a formar parte del contexto. El modelo fue evaluado manualmente por 30 expertos en una escala de 1 a 5, analizando legibilidad, coherencia y significatividad en frases de 5 palabras, obteniendo una precisión de 0.75. Sin embargo, la coherencia entre frases resultó ser muy pobre. BIBREF14, BIBREF15 proponen un modelo de generación de poemas a base de plantillas. El algoritmo inicia con un conjunto de frases relacionadas a partir de palabras clave. Las palabras clave sirven para generar un contexto. Las frases son procesadas usando el sistema PEN para obtener su información gramatical. Esta información es empleada para la generación de nuevas platillas gramaticales y finalmente la construcción de las líneas del poema, tratando de mantener la coherencia y la gramaticalidad. El modelo sentiGAN BIBREF16 pretende generar texto con un contexto emocional. Se trata de una actualización del modelo GAN (Generative Adversarial Net) BIBREF17 que ha producido resultados alentadores en la generación textual, aunque con ciertos problemas de calidad y coherencia. Se utiliza el análisis semántico de una entrada proporcionada por el usuario que sirve para la creación del contexto. La propuesta principal de SentiGAN sugiere establecer un número definido de generadores textuales que deberán producir texto relacionado a una emoción definida. Los generadores son entrenados bajo dos esquemas: i/ una serie de elementos lingüísticos que deben ser evitados para la generación del texto; y ii/ un conjunto de elementos relacionados con la emoción ligada al generador. A través de cálculos de distancia, heurísticas y modelos probabilísticos, el generador crea un texto lo más alejado del primer esquema y lo más cercano al segundo. También existen trabajos con un alcance más corto pero de mayor precisión. 
BIBREF18 proponen la evaluación de un conjunto de datos con un modelo basado en redes neuronales para la generación de subconjuntos de multi-palabras. Este mismo análisis, se considera en BIBREF19, en donde se busca establecer o detectar la relación hiperónimo-hipónimo con la ayuda del modelo de Deep Learning Word2vec BIBREF20. La propuesta de BIBREF19 reporta una precisión de 0.70 al ser evaluado sobre un corpus manualmente etiquetado. La literatura es una actividad artística que exige capacidades creativas importantes y que ha llamado la atención de científicos desde hace cierto tiempo. BIBREF4 realiza un estado del arte interesante donde menciona algunos trabajos que tuvieron un primer acercamiento a la obra literaria desde una perspectiva superficial. Por ejemplo, el modelo “Through the park” BIBREF21, es capaz de generar narraciones históricas empleando la elipsis. Esta técnica es empleada para manipular, entre otras cosas, el ritmo de la narración. En los trabajos “About So Many Things” BIBREF22 y “Taroko Gorge” BIBREF23 se muestran textos generados automáticamente. El primero de ellos genera estrofas de 4 líneas estrechamente relacionadas entre ellas. Eso se logra a través de un análisis gramatical que establece conexiones entre entidades de distintas líneas. El segundo trabajo muestra algunos poemas cortos generados automáticamente con una estructura más compleja que la de las estrofas. El inconveniente de ambos enfoques es el uso de una estructura inflexible, lo que genera textos repetitivos con una gramaticalidad limitada. El proyecto MEXICA modela la generación colaborativa de narraciones BIBREF4. El propósito es la generación de narraciones completas utilizando obras de la época Precolombina. MEXICA genera narraciones simulando el proceso creativo de E-R (Engaged y Reflexive) BIBREF24. Este proceso se describe como la acción, donde el autor trae a su mente un conjunto de ideas y contextos y establece una conexión coherente entre estas (E). Posteriormente se reflexiona sobre las conexiones establecidas y se evalúa el resultado final para considerar si este realmente satisface lo esperado (R). El proceso itera hasta que el autor lo considera concluido. Corpus utilizados ::: Corpus 5KL Este corpus fue constituido con aproximadamente 5 000 documentos (en su mayor parte libros) en español. Los documentos originales, en formatos heterogéneos, fueron procesados para crear un único documento codificado en utf8. Las frases fueron segmentadas automáticamente, usando un programa en PERL 5.0 y expresiones regulares, para obtener una frase por línea. Las características del corpus 5KL se encuentran en la Tabla TABREF4. Este corpus es empleado para el entrenamiento de los modelos de aprendizaje profundo (Deep Learning, Sección SECREF4). El corpus literario 5KL posee la ventaja de ser muy extenso y adecuado para el aprendizaje automático. Tiene sin embargo, la desventaja de que no todas las frases son necesariamente “frases literarias”. Muchas de ellas son frases de lengua general: estas frases a menudo otorgan una fluidez a la lectura y proporcionan los enlaces necesarios a las ideas expresadas en las frases literarias. Otra desventaja de este corpus es el ruido que contiene. El proceso de segmentación puede producir errores en la detección de fronteras de frases. También los números de página, capítulos, secciones o índices producen errores. 
No se realizó ningún proceso manual de verificación, por lo que a veces se introducen informaciones indeseables: copyrights, datos de la edición u otros. Estas son, sin embargo, las condiciones que presenta un corpus literario real. Corpus utilizados ::: Corpus 8KF Un corpus heterogéneo de casi 8 000 frases literarias fue constituido manualmente a partir de poemas, discursos, citas, cuentos y otras obras. Se evitaron cuidadosamente las frases de lengua general, y también aquellas demasiado cortas ($N \le 3$ palabras) o demasiado largas ($N \ge 30$ palabras). El vocabulario empleado es complejo y estético, además que el uso de ciertas figuras literarias como la rima, la anáfora, la metáfora y otras pueden ser observadas en estas frases. Las características del corpus 8KF se muestran en la Tabla TABREF6. Este corpus fue utilizado principalmente en los dos modelos generativos: modelo basado en cadenas de Markov (Sección SECREF13) y modelo basado en la generación de Texto enlatado (Canned Text, Sección SECREF15). Modelos propuestos En este trabajo proponemos tres modelos híbridos (combinaciones de modelos generativos clásicos y aproximaciones semánticas) para la producción de frases literarias. Hemos adaptado dos modelos generativos, usando análisis sintáctico superficial (shallow parsing) y un modelo de aprendizaje profundo (Deep Learning) BIBREF25, combinados con tres modelos desarrollados de aproximación semántica. En una primera fase, los modelos generativos recuperan la información gramatical de cada palabra del corpus 8KF (ver Sección SECREF3), en forma de etiquetas POS (Part of Speech), a través de un análisis morfosintáctico. Utilizamos Freeling BIBREF26 que permite análisis lingüísticos en varios idiomas. Por ejemplo, para la palabra “Profesor” Freeling genera la etiqueta POS [NCMS000]. La primera letra indica un sustantivo (Noun), la segunda un sustantivo común (Common); la tercera indica el género masculino (Male) y la cuarta da información de número (Singular). Los 3 últimos caracteres dan información detallada del campo semántico, entidades nombradas, etc. En nuestro caso usaremos solamente los 4 primeros niveles de las etiquetas. Con los resultados del análisis morfosintáctico, se genera una salida que llamaremos Estructura gramatical vacía (EGV): compuesta exclusivamente de una secuencia de etiquetas POS; o Estructura gramatical parcialmente vacía (EGP), compuesta de etiquetas POS y de palabras funcionales (artículos, pronombres, conjunciones, etc.). En la segunda fase, las etiquetas POS (en la EGV y la EGP) serán reemplazadas por un vocabulario adecuado usando ciertas aproximaciones semánticas. La producción de una frase $f(Q,N)$ es guiada por dos parámetros: un contexto representado por un término $Q$ (o query) y una longitud $3 \le N \le 15$, dados por el usuario. Los corpus 5KL y 8KF son utilizados en varias fases de la producción de las frases $f$. El Modelo 1 está compuesto por: i/ un modelo generativo estocástico basado en cadenas de Markov para la selección de la próxima etiqueta POS usando el algoritmo de Viterbi; y ii/ un modelo de aprendizaje profundo (Word2vec), para recuperar el vocabulario que reemplazará la secuencia de etiquetas POS. El Modelo 2 es una combinación de: i/ el modelo generativo de Texto enlatado; y ii/ un modelo Word2vec, con un cálculo de distancias entre diversos vocabularios que han sido constituidos manualmente. El Modelo 3 utiliza: i/ la generación de Texto enlatado; y ii/ una interpretación geométrica del aprendizaje profundo. 
Esta interpretación está basada en una búsqueda de información iterativa (Information Retrieval, IR), que realiza simultáneamente un alejamiento de la semántica original y un acercamiento al query $Q$ del usuario. Modelos propuestos ::: Modelo generativo estocástico usando cadenas de Markov Este modelo generativo, que llamaremos Modelo de Markov, está basado en el algoritmo de Viterbi y las cadenas de Markov BIBREF27, donde se selecciona una etiqueta POS con la máxima probabilidad de ocurrencia, para ser agregada al final de la secuencia actual. Utilizamos el corpus de frases literarias 8KF (ver Sección SECREF5), que fue convenientemente filtrado para eliminar tokens indeseables: números, siglas, horas y fechas. El corpus filtrado se analizó usando Freeling, que recibe en entrada una cadena de texto y entrega el texto con una etiqueta POS para cada palabra. El corpus es analizado frase a frase, reemplazando cada palabra por su respectiva etiqueta POS. Al final del análisis, se obtiene un nuevo corpus 8KPOS con $s = 7~679$ secuencias de etiquetas POS, correspondientes al mismo número de frases del corpus 8KF. Las secuencias del corpus 8KPOS sirven como conjunto de entrenamiento para el algoritmo de Viterbi, que calcula las probabilidades de transición, que serán usadas para generar cadenas de Markov. Las $s$ estructuras del corpus 8KPOS procesadas con el algoritmo de Viterbi son representadas en una matriz de transición $P_{[s \times s]}$. $P$ será utilizada para crear nuevas secuencias de etiquetas POS no existentes en el corpus 8KPOS, simulando un proceso creativo. Nosotros hemos propuesto el algoritmo Creativo-Markov que describe este procedimiento. En este algoritmo, $X_i$ representa el estado de una etapa de la creación de una frase, en el instante $i$, que corresponde a una secuencia de etiquetas POS. Siguiendo un procedimiento de Markov, en un instante $i$ se selecciona la próxima etiqueta POS$_{i+1}$, con máxima probabilidad de ocurrencia, dada la última etiqueta POS$_i$ de la secuencia $X_{i}$. La etiqueta POS$_{i+1}$ será agregada al final de $X_{i}$ para generar el estado $X_{i+1}$. $P(X_{i+1}=Y|X_{i}=Z)$ es la probabilidad de transición de un estado a otro, obtenido con el algoritmo de Viterbi. Se repiten las transiciones, hasta alcanzar una longitud deseada. El resultado es una EGV, donde cada cuadro vacío representa una etiqueta POS que será remplazada por una palabra en la etapa final de generación de la nueva frase. El remplazo se realiza usando un modelo de aprendizaje profundo (Sección SECREF19). La arquitectura general de este modelo se muestra en la Figura FIGREF14. Modelos propuestos ::: Modelo generativo basado en Texto enlatado El algoritmo creativo-Markov del Modelo de Markov logra reproducir patrones lingüísticos (secuencias POS) detectados en el corpus 8KPOS, pero de corta longitud. Cuando se intentó extender la longitud de las frases a $N>6$ palabras, no fue posible mantener la coherencia y legibilidad (como se verá en la Sección SECREF19). Decidimos entonces utilizar métodos de generación textual guiados por estructuras morfosintácticas fijas: el Texto enlatado. BIBREF28 argumentan que el uso de estas estructuras ahorran tiempo de análisis sintáctico y permite concentrarse directamente en el vocabulario. La técnica de Texto enlatado ha sido empleada también en varios trabajos, con objetivos específicos. BIBREF29, BIBREF30 desarrollaron modelos para la generación de diálogos y frases simples. 
Esta técnica es llamada “Generación basada en plantillas” (Template-based Generation) o de manera intuitiva, Texto enlatado. Decidimos emplear texto enlatado para la generación textual usando un corpus de plantillas (templates), construido a partir del corpus 8KF (Sección SECREF3). Este corpus contiene estructuras gramaticales flexibles que pueden ser manipuladas para crear nuevas frases. Estas plantillas pueden ser seleccionadas aleatoriamente o a través de heurísticas, según un objetivo predefinido. Una plantilla es construida a partir de las palabras de una frase $f$, donde se reemplazan únicamente las palabras llenas de las clases verbo, sustantivo o adjetivo $\lbrace V, S, A \rbrace $, por sus respectivas etiquetas POS. Las otras palabras, en particular las palabras funcionales, son conservadas. Esto producirá una estructura gramatical parcialmente vacía, EGP. Posteriormente las etiquetas podrán ser reemplazadas por palabras (términos), relacionadas con el contexto definido por el query $Q$ del usuario. El proceso inicia con la selección aleatoria de una frase original $f_{o} \in $ corpus 8KF de longitud $|f_{o}|=N$. $f_{o}$ será analizada con Freeling para identificar los sintagmas. Los elementos $\lbrace V, S, A \rbrace $ de los sintagmas de $f_{o}$ serán reemplazados por sus respectivas etiquetas POS. Estos elementos son los que mayor información aportan en cualquier texto, independientemente de su longitud o género BIBREF31. Nuestra hipótesis es que al cambiar solamente estos elementos, simulamos la generación de frases por homosintaxis: semántica diferente, misma estructura. La salida de este proceso es una estructura híbrida parcialmente vacía (EGP) con palabras funcionales que dan un soporte gramatical y las etiquetas POS. La arquitectura general de este modelo se ilustra en la Figura FIGREF18. Los cuadros llenos representan palabras funcionales y los cuadros vacíos etiquetas POS a ser reemplazadas. Modelos propuestos ::: Modelo 1: Markov y aprendizaje profundo Los modelos generativos generan estructuras gramaticales vacías (EGV) o parcialmente vacías (EGP) que pueden ser manipuladas para generar nuevas frases $f(Q,N)$. La idea es que las frases $f$ sean generadas por homosintaxis. En esta sección, proponemos un modelo de aproximación semántica que utiliza el algoritmo Word2vec (basado en aprendizaje profundo), combinado con el modelo generativo de Markov descrito en la Sección SECREF13. El proceso se describe a continuación. El corpus 5KL es pre-procesado para uniformizar el formato del texto, eliminando caracteres que no son importantes para el análisis semántico: puntuación, números, etc. Esta etapa prepara los datos de entrenamiento del algoritmo de aprendizaje profundo que utiliza una representación vectorial del corpus 5KL. Para el aprendizaje profundo utilizamos la biblioteca Gensim, la versión en Python de Word2vec. Con este algoritmo se obtiene un conjunto de palabras asociadas (embeddings) a un contexto definido por un query $Q$. Word2vec recibe un término $Q$ y devuelve un léxico $L(Q)=(w_1,w_2,...,w_m)$ que representa un conjunto de $m$ palabras semánticamente próximas a $Q$. Formalmente, Word2vec: $Q \rightarrow L(Q)$. El próximo paso consiste en procesar la EGV producida por Markov. Las etiquetas POS serán identificadas y clasificadas como POS$_{\Phi }$ funcionales (correspondientes a puntuación y palabras funcionales) y POS$_\lambda $ llenas $\in \lbrace V, S, A \rbrace $ (verbos, sustantivos, adjetivos). 
Las etiquetas POS$_\Phi $ serán reemplazadas por palabras obtenidas de recursos lingüísticos (diccionarios) construídos con la ayuda de Freeling. Los diccionarios consisten en entradas de pares: POS$_\Phi $ y una lista de palabras y signos asociados, formalmente POS$_\Phi $ $\rightarrow $ $l$(POS$_\Phi )=(l_1,l_2,...,l_j)$. Se reemplaza aleatoriamente cada POS$_\Phi $ por una palabra de $l$ que corresponda a la misma clase gramatical. Las etiquetas POS$_\lambda $ serán reemplazadas por las palabras producidas por Word2vec $L(Q)$. Si ninguna de las palabras de $L(Q)$ tiene la forma sintáctica exigida por POS$_\lambda $, empleamos la biblioteca PATTERN para realizar conjugaciones o conversiones de género y/o número y reemplazar correctamente POS$_\lambda $. Si el conjunto de palabras $L(Q)$, no contiene ningún tipo de palabra llena, que sea adecuada o que pueda manipularse con la biblioteca PATTERN, para reemplazar las etiquetas POS$_\lambda $, se toma otra palabra, $w_i \in L(Q)$, lo más cercana a $Q$ (en función de la distancia producida por Word2vec). Se define un nuevo $Q*=w_i$ que será utilizado para generar un nuevo conjunto de palabras $L(Q*)$. Este procedimiento se repite hasta que $L(Q*)$ contenga una palabra que pueda reemplazar la POS$_{\lambda }$ en cuestión. El resultado de este procedimiento es una nueva frase $f$ que no existe en los corpora 5KL y 8KF. La Figura FIGREF23 muestra el proceso descrito. Modelos propuestos ::: Modelo 2: Texto enlatado, aprendizaje profundo y análisis morfosintáctico En este modelo proponemos una combinación entre el modelo de Texto enlatado (Sección SECREF15) y un algoritmo de aprendizaje profundo con Word2vec entrenado sobre el corpus 5KL. El objetivo es eliminar las iteraciones del Modelo 1, que son necesarias cuando las etiquetas POS no pueden ser reemplazadas con el léxico $L(Q)$. Se efectúa un análisis morfosintáctico del corpus 5KL usando Freeling y se usan las etiquetas POS para crear conjuntos de palabras que posean la misma información gramatical (etiquetas POS idénticas). Una Tabla Asociativa (TA) es generada como resultado de este proceso. La TA consiste en $k$ entradas de pares POS$_k$ y una lista de palabras asociadas. Formalmente POS$_k \rightarrow V_k =\lbrace v_{k,1},v_{k,2},...,v_{k,i}\rbrace $. El Modelo 2 es ejecutado una sola vez para cada etiqueta POS$_k$. La EGP no será reemplazada completamente: las palabras funcionales y los signos de puntuación son conservados. Para generar una nueva frase se reemplaza cada etiqueta POS$_k \in $ EGP, $k=1,2,...$, por una palabra adecuada. Para cada etiqueta POS$_k$, se recupera el léxico $V_k$ a partir de TA. El vocabulario es procesado por el algoritmo Word2vec, que calcula el valor de proximidad (distancia) entre cada palabra del vocabulario $v_{k,i}$ y el query $Q$ del usuario, $dist(Q,v_{k,i})$. Después se ordena el vocabulario $V_k$ en forma descendente según los valores de proximidad $dist(Q,v_{k,i})$ y se escoge aleatoriamente uno de los primeros tres elementos para reemplazar la etiqueta POS$_k$ de la EGP. El resultado es una nueva frase $f_2(Q,N)$ que no existe en los corpora 5KL y 8KF. El proceso se ilustra en la figura FIGREF26. Modelos propuestos ::: Modelo 3: Texto enlatado, aprendizaje profundo e interpretación geométrica El Modelo 3 reutiliza varios de los recursos anteriores: el algoritmo Word2vec, la Tabla Asociativa TA y la estructura gramatical parcialmente vacía (EGP) obtenida del modelo de Texto enlatado. 
The model uses vector distances to determine the most suitable words to substitute the POS tags of an EGP and thus generate a new sentence. For each tag POS$_k$, $k=1,2,...$ $\in $ EGP, that we wish to substitute, we use the algorithm described below. A vector is built for each of the following three words: $o$: the $k$-th word of the sentence $f_{o}$ (Section SECREF15), corresponding to the tag POS$_k$. This word recreates a context from which the new sentence must move away, so as to avoid producing a paraphrase. $Q$: the word defining the query provided by the user. $w$: a candidate word that could replace POS$_k$, $w \in V_k$. The vocabulary has a size of $|V_k| = m$ words and is retrieved from the TA entry corresponding to POS$_k$. The 10 words $o_i$ closest to $o$, the 10 words $Q_i$ closest to $Q$, and the 10 words $w_i$ closest to $w$ (in this order, obtained with Word2vec) are concatenated and represented in a symbolic vector $\vec{U}$ of 30 dimensions. The number of dimensions was fixed at 30 empirically, as a reasonable compromise between lexical diversity and processing time. The vector $\vec{U}$ can be written as: where each element $u_j, j=1,...,10$, represents a word close to $o$; $u_j, j=11,...,20$, represents a word close to $Q$; and $u_j, j=21,...,30$, is a word close to $w$. $\vec{U}$ can be rewritten as follows (equation DISPLAY_FORM32): $o$, $Q$, and $w$ respectively generate three 30-dimensional numerical vectors: where the values of $\vec{X}$ are obtained by taking the distance between the word $o$ and each word $u_j \in \vec{U}, j=1,...,30$. The distance $x_j=dist(o,u_j)$ is provided by Word2vec, with $x_j \in [0,1]$. Evidently, the word $o$ will be closer to the first 10 words $u_j$ than to the rest. A similar process yields the values of $\vec{Q}$ and $\vec{W}$ from $Q$ and $w$, respectively. In these cases, the query $Q$ will be closest to the words $u_j$ at positions $j=11,...,20$, and the candidate word $w$ will be closest to the words $u_j$ at positions $j=21,...,30$. Next, the cosine similarities between $\vec{Q}$ and $\vec{W}$ (equation DISPLAY_FORM34) and between $\vec{X}$ and $\vec{W}$ (equation DISPLAY_FORM35) are computed. These values are also normalized within [0,1]. The process is repeated for all words $w$ of the lexicon $V_k$. This generates another set of vectors $\vec{X}, \vec{Q}$, and $\vec{W}$ for which the similarities must be computed again. In the end, $m$ similarity values $\theta _i$ and $\beta _i$, $ i= 1,..., m$, are obtained, and the averages $\langle \theta \rangle $ and $\langle \beta \rangle $ are computed. The normalized ratio $\left( \frac{\langle \theta \rangle }{\theta _i} \right)$ indicates how large the similarity $\theta _i$ is with respect to the average $\langle \theta \rangle $ (a maximization-type interpretation); that is, how close the candidate word $w$ is to the query $Q$. The normalized ratio $\left( \frac{\beta _i}{\langle \beta \rangle } \right)$ indicates how small the similarity $\beta _i$ is with respect to $\langle \beta \rangle $ (a minimization-type interpretation); that is, how far the candidate word $w$ is from the word $o$ of $f_{o}$.
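A sketch of the Model 3 geometric step for a single candidate $w$, assuming a trained Gensim Word2vec model: the symbolic vector $\vec{U}$ is built from the 10 nearest neighbours of $o$, $Q$, and $w$, the numeric vectors are filled with Word2vec distances, and the two cosine similarities play the roles of $\theta $ (query vs. candidate) and $\beta $ (original word vs. candidate). Variable and function names are illustrative.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two numeric vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def theta_beta(w2v, o, q, w, k=10):
    """Similarities theta and beta for one candidate word w (Model 3)."""
    # Symbolic vector U: 10 neighbours of o, then 10 of q, then 10 of w (30 elements).
    U = [t for t, _ in w2v.wv.most_similar(o, topn=k)] \
      + [t for t, _ in w2v.wv.most_similar(q, topn=k)] \
      + [t for t, _ in w2v.wv.most_similar(w, topn=k)]
    # Numeric vectors: Word2vec distance between each anchor word and every u_j in U.
    X  = np.array([w2v.wv.distance(o, u) for u in U])
    Qv = np.array([w2v.wv.distance(q, u) for u in U])
    Wv = np.array([w2v.wv.distance(w, u) for u in U])
    theta = cosine(Qv, Wv)   # how the candidate relates to the query context
    beta  = cosine(X, Wv)    # how the candidate relates to the original word's context
    return theta, beta
```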
These fractions are obtained for each pair $(\theta _i, \beta _i)$ and combined (minimization-maximization) to compute a score $S_i$, according to equation (DISPLAY_FORM36): The higher the value of $S_i$, the better it serves our objectives: getting closer to the $query$ and moving away from the original semantics. Finally, we sort the list of $S_i$ values in decreasing order and randomly choose, among the first 3, the candidate word $w$ that will replace the POS$_k$ tag in question. The result is a new sentence $f_3(Q,N)$ that does not exist in the corpora used to build the model. Figure FIGREF37 shows a representation of the described model. Experiments and results Given the specificity of our experiments (language, available corpora, homosyntax), a direct comparison with other methods is not possible. Nor did we consider using a random baseline, because its results would lack homosyntax and it would be exceedingly easy to outperform it. That said, Model 1 could be considered our own baseline. Experiments and results ::: Results Below we present a manual evaluation protocol for the results obtained. The experiment consisted of generating 15 sentences with each of the three proposed models. For each model, three queries were considered, $Q=$ {AMOR, GUERRA, SOL}, generating 5 sentences with each. The 15 sentences were shuffled and regrouped by query before being presented to the evaluators. For the evaluation, 7 people were asked to read the 45 sentences carefully (15 sentences per query). All evaluators have university-level education and are native Spanish speakers. They were asked to rate, on a scale of [0,1,2] (where 0=bad, 1=acceptable and 2=correct), the following criteria: Grammaticality: spelling, correct conjugations, gender and number agreement. Coherence: readability, perception of an overall idea. Context: relation of the sentence to the query. The evaluation results are presented in Table TABREF42, in the form of averages normalized to [0,1] and their standard deviation $\sigma $. The sentences generated by the proposed models exhibit particular characteristics. Model 1 generally produces sentences whose context is closely related to the user's query, but they often lack coherence and grammaticality. This model shows the highest value for context, but also the highest standard deviation. One can infer a certain disagreement among the evaluators. The high context values are explained by the degree of freedom of the EGV generated by the Markov model. The EGV allows every element of the structure to be substituted with a lexicon guided solely by the results of the Word2vec algorithm. Model 2 generates reasonably coherent and grammatically correct sentences, but at times the context is closer to the original sentence than to the query. This can be interpreted as an elementary paraphrase, which is not what we want. Finally, Model 3 generates sentences that are coherent, grammatically correct, and better related to the query than those of Model 2. This is achieved by following an intuition opposite to paraphrasing: we seek to preserve the syntactic structure of the original sentence while generating a completely different semantics.
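A sketch of the final scoring and selection step; since equation DISPLAY_FORM36 is not reproduced in the text, the combination of the two normalized ratios is assumed here to be a simple product of the ratios as they are written above, which is one plausible reading of the minimization-maximization description (non-zero similarities are also assumed).

```python
import random
import numpy as np

def select_candidate(thetas, betas, candidates, top_n=3):
    """Combine the two normalized ratios into a score S_i and pick among the top 3."""
    thetas = np.asarray(thetas, dtype=float)
    betas = np.asarray(betas, dtype=float)
    t_mean, b_mean = thetas.mean(), betas.mean()
    # Assumed combination: product of the two ratios described in the text.
    scores = (t_mean / thetas) * (betas / b_mean)
    order = np.argsort(scores)[::-1]               # sort S_i in decreasing order
    best = [candidates[i] for i in order[:top_n]]
    return random.choice(best)                     # random choice among the first three
```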
On the other hand, the smallest dispersion is observed for Model 1; that is, there is strong agreement among the evaluators' perceptions for this criterion. Conclusion and future work In this article we have presented three models for producing literary sentences. Generating this textual genre requires specific systems that must consider style, syntax, and a semantics that does not necessarily respect the logic of documents from factual genres, such as journalistic, encyclopedic, or scientific texts. The results obtained are encouraging for Model 3, which uses canned text, deep learning, and an IR-type interpretation. Future work requires the implementation of modules to process the user's multi-term $queries$. We also plan to generate rhetorical sentences using the models proposed here or others with a probabilistic approach BIBREF32. The models presented here can be enriched through the integration of other components, such as personality traits and/or emotions BIBREF33, BIBREF34, BIBREF35, BIBREF36. Finally, a semi-automatic (and large-scale) evaluation protocol is also planned. Acknowledgments The authors thank Eric SanJuan for the ideas and the concept of homosyntax.
Corpus 5KL, Corpus 8KF
ba56afe426906c4cfc414bca4c66ceb4a0a68121
ba56afe426906c4cfc414bca4c66ceb4a0a68121_0
Q: What are the datasets used for the task? Text: Introduction Words can be considered compositions of syllables, which in turn are compositions of phones. Phones are units of sound producible by the human vocal apparatus. Syllables play an important role in prosody and are influential components of natural language understanding, speech production, and speech recognition systems. Text-to-speech (TTS) systems can rely heavily on automatically syllabified phone sequences BIBREF0. One prominent example is Festival, an open source TTS system that relies on a syllabification algorithm to organize speech production BIBREF1. Linguists have recognized since the late 1940s that the syllable is a hierarchical structure, present in most, if not all, languages (though there is some disagreement on this score. See, for example, BIBREF2). An optional consonant onset is followed by a rime, which may be further decomposed into a high sonority vowel nucleus followed by an optional consonant coda. All languages appear to have at least the single syllable vowel ($V$) and the two syllable vowel-consonant ($VC$) forms in their syllable inventories. For example, oh and so in English. Most languages supplement these with codas to form the $\lbrace V, CV, VC, CVC\rbrace $ syllable inventory. Sonority rises from the consonant onset to the vowel nucleus and falls toward the consonant coda, as in the English pig. The components of the syllable obey the phonotactic constraints of the language in which they occur, and therein lies the question that motivates this research. Phonologists agree that the human vocal apparatus produces speech sounds that form a sonority hierarchy, from highest to lowest: vowels, glides, liquids, nasals, and obstruents. Examples are, come, twist, lack, ring, and cat, respectively. English, and other languages with complex syllable inventories, supplement the basic forms in ways that are usually consistent with the sonority hierarchy, where usually is the operative word. Thus, English permits double consonant onsets, as in twist with a consonant lower in the hierarchy (t, an obstruent) followed by a consonant one higher in the hierarchy (w, a glide). So sonority rises to the vowel, i, falls to the fricative, s, an obstruent, and falls further to another obstruent, t, still lower in the hierarchy. Yet p and w do not form a double consonant onset in English, probably because English avoids grouping sounds that use the same articulators, the lips, in this instance. Constructing an automatic syllabifier could be the process of encoding all rules such as these in the language under investigation. Another approach, one more congenial to the rising tide of so-called usage-based linguists (e.g, BIBREF3), is to recognize that the regularities of language formulated as rules can be usefully expressed as probabilities BIBREF4, BIBREF5, BIBREF6. An automatic syllabifier is a computer program that, given a word as a sequence of phones, divides the word into its component syllables, where the syllables are legal in the language under investigation. Approaches take the form of dictionary-based look-up procedures, rule-based systems, data-driven systems, and hybrids thereof BIBREF7. Dictionary look-ups are limited to phone sequences previously seen and thus cannot handle new vocabulary BIBREF8. Rule-based approaches can process previously unseen phone sequences by encoding linguistic knowledge. 
Formalized language-specific rules are developed by hand, necessarily accompanied by many exceptions, such as the one noted in the previous paragraph. An important example is the syllabification package tsylb, developed at the National Institute of Standards and Technology (NIST), which is based on Daniel Kahn's 1979 MIT dissertation BIBREF9, BIBREF10. Language particularity is a stumbling block for rule-based and other formal approaches to language such as Optimality Theory (OT), however much they strive for universality. Thus, T.A. Hall argues that the OT approach to syllabification found in BIBREF11 is superior to previous OT research as well as to Kahn's rule-based work, because both postulate language-specific structures without cross-linguistic motivation. From Hall's perspective, previous systems do not capture important cross-linguistic features of the syllable. In a word, the earlier systems require kludges, an issue for both builders of automatic, language-agnostic syllabifiers and theoretical linguists like Hall. Data-driven syllabification methods, like the one to be presented in this paper, have the potential to function across languages and to process new, out of dictionary words. For languages that have transcribed syllable data, data-driven approaches often outperform rule-based ones. BIBREF12 used a combined support vector machine (SVM) and hidden Markov model (HMM) to maximize the classification margin between a correct and incorrect syllable boundary. BIBREF13 used segmental conditional random fields (SCRF). The SCRF hybrid method statistically leveraged general principles of syllabification such as legality, sonority and maximal onset. Many other HMM-based labeling structures exist, such as evolved phonetic categorization and high order n-gram models with back-off BIBREF14, BIBREF15. Data-driven models are evaluated by word accuracy against transcribed datasets. Commonly, only one language or languages of the same family are used. The CELEX lexical database from BIBREF16 contains syllabifications of phone sequences for English, Dutch, and German. These three languages fall into the West Germanic language family, so the phonologies of each are closely related. Evaluating a model solely on these three languages, the approach taken in BIBREF13 and others, does not adequately test a model's generalized ability to learn diverse syllable structures. In this paper, we present a neural network that can syllabify phone sequences without introducing any fixed principles or rules of syllabification. We show that this novel approach to syllabification is language-agnostic by evaluating it on datasets of six languages, five from two major language families, and one that appears to be unrelated to any existing language. Method Syllabification can be considered a sequence labeling task where each label delineates the existence or absence of a syllable boundary. As such, syllabification has much in common with well-researched topics such as part-of-speech tagging, named-entity recognition, and chunking BIBREF17. Neural networks have recently outpaced more traditional methods in sequence labeling tasks. These neural-based approaches are taking the place of HMMs, maximum entropy Markov models (MEMM), and conditional random fields (CRF) BIBREF18. In the following section and in Fig. FIGREF1, we present a neural network architecture that leverages both recurrence and one-dimensional convolutions. 
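To make the sequence-labeling framing concrete, here is a small sketch of how a syllabified word can be encoded as a phone sequence plus boundary labels; the phone symbols and the toy syllabification are illustrative and not taken from any of the datasets discussed later.

```python
def encode(syllables):
    """Turn a syllabified word into (phones, labels): label 1 means
    'a syllable boundary follows this phone', 0 means no boundary."""
    phones, labels = [], []
    for i, syl in enumerate(syllables):
        phones.extend(syl)
        labels.extend([0] * (len(syl) - 1))
        labels.append(1 if i < len(syllables) - 1 else 0)  # no boundary after the final phone
    return phones, labels

# Toy two-syllable example:
print(encode([["m", "ih", "s"], ["t", "er"]]))
# (['m', 'ih', 's', 't', 'er'], [0, 0, 1, 0, 0])
```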
Recurrence enables our model to read a sequence much like a human would; a sequence with elements $abcd$ would be read one element at a time, updating a latent understanding after reading each $a$, $b$, $c$, and finally $d$. One-dimensional convolutions extract a spatial relationship between sequential elements. The $abcd$ example sequence may then be read as $ab$, $bc$, $cd$. Explicitly recognizing this spatial relationship is beneficial in syllabification because a syllable is a local sub-sequence of phones within a word. The input to the model is a sequence of phones that together represent a word. We pad each phone sequence to a length of $n$ where $n$ is the length of the longest phone sequence. All inputs then take the form Each phone $p_i$ is mapped to a $d$-dimensional embedding vector $x_i$ resulting in where $x$ has a dimension of $d\times n$. Taken together, the phone embeddings represent the relationships between phones in a real-valued vector space. The embedding dimension $d$ is optimized as a model hyperparameter and has a large impact on overall model performance BIBREF19. As such, we carefully tune $d$ for the proposed Base model and reduce it for our Small model as described in Section SECREF24. The vector values of the phone embeddings are learned during each model training. Using learned embeddings enables the model to have a custom embedding space for each language that it is trained on. This is desirable because phonetic patterns differ from language to language. Also, learned embeddings allow the model to be trained using the input of any phonetic transcription. For example, one training of the model can use IPA and one can use SAMPA without needing to specify a mapping of one alphabet to another. Method ::: Bidirectional LSTM Recurrent neural networks (RNNs) differ from standard feed-forward neural networks in their treatment of input order; each element is processed given the context of the input that came before. RNNs operate on sequential data and can take many forms. Our network leverages the long short-term memory (LSTM) cell which is a prominent RNN variant capable of capturing long-term sequential dependencies BIBREF20. The gated memory cells of LSTM are an improvement over the standard RNN because the standard RNN is often biased toward short-term dependencies BIBREF21, BIBREF22. At each time step, the LSTM cell determines what information is important to introduce, to keep, and to output. This is done using an input gate, a forget gate, and an output gate shown in Fig. FIGREF5. LSTM operates in a single direction through time. This can be a limitation when a time step has both past dependency and future dependency. For example, a consonant sound may be the coda of a syllable earlier in the sequence or the onset of a syllable later in the sequence. Thus, processing a phonetic sequence in both the forward and backwards directions provides an improved context for assigning syllable boundaries. A bidirectional LSTM (BiLSTM) is formed when an LSTM moving forward through time is concatenated with an LSTM moving backward through time BIBREF23. We use the LSTM network as follows. The $x$ vector is fed through the LSTM network which outputs a vector $\overrightarrow{h_i}$ for each time step $i$ from 0 to $n-1$. This is the forward LSTM. As we have access to the complete vector $x$, we can process a backward LSTM as well. This is done by computing a vector $\overleftarrow{h_i}$ for each time step $i$ from $n-1$ to 0. 
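A minimal Keras sketch of the phone-embedding layer followed by the BiLSTM described above; the vocabulary size and padded length $n$ are placeholders, the dimensions follow the symbols $d$ and $l$ used in the text, and this is an illustration rather than the authors' released code (which is linked later in the paper).

```python
from tensorflow.keras import Input, layers

n_phones = 50      # padded sequence length n (placeholder)
vocab_size = 60    # number of distinct phone symbols plus padding (placeholder)
d, l = 300, 300    # embedding and LSTM dimensions (tuned hyperparameters)

inp = Input(shape=(n_phones,), dtype="int32")            # phone indices p_0 ... p_{n-1}
x = layers.Embedding(vocab_size, d)(inp)                 # learned phone embeddings, shape (n, d)
# Bidirectional concatenates the forward and backward LSTM outputs by default,
# giving 2l features per time step.
h = layers.Bidirectional(layers.LSTM(l, return_sequences=True))(x)
```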
Finally, we concatenate the backward LSTM with the forward LSTM: Both $\overrightarrow{h_i}$ and $\overleftarrow{h_i}$ have a dimension of $l$, which is an optimized hyperparameter. The BiLSTM output $h$ thus has dimension $2l\times n$. Method ::: CNN Convolutional neural networks (CNNs) are traditionally used in computer vision, but perform well in many text processing tasks that benefit from position-invariant abstractions BIBREF24, BIBREF25. These abstractions depend exclusively on local neighboring features rather than the position of features in a global structure. According to a comparative study by BIBREF26, BiLSTMs tend to outperform CNNs in sequential tasks such as POS tagging, but CNNs tend to outperform BiLSTMs in global relation detection tasks such as keyphrase matching for question answering. We use both the BiLSTM and the CNN in our network so that the strengths of each are incorporated. CNNs have been combined with BiLSTMs to perform state-of-the-art sequence tagging in both POS tagging and NER. BIBREF27 used BiLSTMs to process the word sequence while each word's character sequence was processed with CNNs to provide a second representation. In textual syllabification, the only input is the phone sequence. Both our BiLSTM and CNN components process the same input: the $x$ vector. We pad $x$ with $w-1$ $d$-dimensional zero vectors before $x_0$. A 1-dimensional convolutional filter of width $w$ processes a window $x_{i-w+1},...,x_i$ for all $i$ from 0 to $n-1$. To determine the output vector $c$, the convolutional filter performs a nonlinear weight and bias computation. Due to the padding of $x$, the resulting dimension of $c$ is $f\times n$ where $f$ is the number of filters used. A 1-dimensional max pooling is performed over $c$ with a stride of 1 which keeps the dimensionality unaltered. The pool size is an optimized hyperparameter that determines how many adjacent elements are used in the $max$ operation. The convolutional and max pooling components can be repeated to compute higher-level abstractions. As the convolutional and max pooling output is conformant to the BiLSTM output, we can concatenate them to create a combined vector with dimension $(2l+f)\times n$: Method ::: Output: Conditional Random Field We introduce a time-distributed fully connected layer over vector $o$, taking $o$ from a dimension of $(2l+f)\times n$ down to a dimension of $2\times n$. We do this because there are two class labels: either a syllable boundary or no syllable boundary. The output of the model is a sequence When $y_i\equiv 0$, there is no syllable boundary predicted to follow the phone $p_i$. When $y_i\equiv 1$, there is a syllable boundary predicted to follow $p_i$. Intuitively, we seek an output sequence $y$ that gives the highest $p(y|o)$. One approach calculates the softmax for each $o_i$: The softmax normalizes each $o_i$ to a probability distribution over the two discrete class labels. We can then model $p(y|o)$ by multiplying the maximum of each $s_i$ together: When using the softmax, $p(y|o)$ is calculated under the limiting assumption that each $o_i$ is independent. To more accurately model $p(y|o)$, we replace the softmax classifier with a conditional random field (CRF) BIBREF28. Specifically, we use a linear-chain CRF which is a sequential model that leverages both past and future output tags to model the output probability. 
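A sketch of the convolutional branch and the concatenation with the BiLSTM output, continuing from the previous sketch but written to stand alone; the number of repeated conv/pool blocks and the pool size are placeholders here, and causal padding is used to reproduce the $w-1$ zero-vector left padding so that each filter sees the window $x_{i-w+1},...,x_i$.

```python
from tensorflow.keras import Input, layers

n_phones, d, l, f, w = 50, 300, 300, 200, 3   # placeholders; f filters of width w

x = Input(shape=(n_phones, d))       # phone embeddings (as produced in the previous sketch)
h = Input(shape=(n_phones, 2 * l))   # BiLSTM output for the same time steps

c = layers.Conv1D(filters=f, kernel_size=w, padding="causal", activation="relu")(x)
c = layers.MaxPooling1D(pool_size=2, strides=1, padding="same")(c)   # length n preserved
c = layers.Conv1D(filters=f, kernel_size=w, padding="causal", activation="relu")(c)
c = layers.MaxPooling1D(pool_size=2, strides=1, padding="same")(c)   # repeated conv/pool block

o = layers.Concatenate()([h, c])                 # (2l + f) features per phone
y = layers.TimeDistributed(layers.Dense(2))(o)   # scores for {no boundary, boundary}
```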
The linear-chain CRF can be considered a sequential generalization of logistic regression classifiers as well as a discriminative analogue of hidden Markov models because it models $p(y|o)$ directly instead of modeling $p(o|y)$ BIBREF29. Using sequence-level tag information with a CRF has been shown to improve tag accuracy in the related tasks of POS tagging, chunking, and NER BIBREF30, BIBREF31. We use a linear-chain CRF to model the conditional distribution directly: where $Z(o)$ is the normalization function and $\theta $ is a learned parameter vector scaled by the set of transition feature functions $f$. Method ::: Training Training of the network parameters is performed using backpropagation. Using Keras, the backpropagation is automatically defined given the forward definition of the network. The defined loss function is sparse categorical cross entropy, in accordance with the real-valued probabilities given by the CRF output layer. Loss optimization is performed with the Adam optimizer BIBREF32. Adam was chosen because it adapts the learning rate on a parameter-to-parameter basis; strong convergence occurs at the end of optimization. Training is performed to a set number of epochs. Early stopping allows the network to conclude training if convergence is reached prior to reaching the epoch training limit BIBREF33. Materials The materials for this research comprises the software described above and several syllabified datasets. Materials ::: Software The implementation of our model was adapted from an open source code library designed for general-purpose sequence tagging and made available by BIBREF37. The modifications to this code include adding data preparation scripts and changing the model architecture to reflect the network architecture described above. Our code is made publicly available for future research at https://github.com/jacobkrantz/lstm-syllabify. Materials ::: Datasets To produce a language-agnostic syllabifier, it is crucial to test syllabification accuracy across different language families and language groupings within families. We selected six evaluation languages: English, Dutch, Italian, French, Basque, and Manipuri. These represent two language families (Indo-European, Sino-Tibetan), a language isolate thought to be unrelated to any existing language (Basque), and two different subfamilies within the Indo-European family (West Germanic, Romance). The primary constraint was the availability of syllabified datasets for training and testing. Table TABREF17 presents details of each dataset. Among the six languages we evaluate with, both English and Dutch are notable for the availability of rich datasets of phonetic and syllabic transcriptions. These are found in the CELEX (Dutch Centre for Lexical Information) database BIBREF16. CELEX was built jointly by the University of Nijmegen, the Institute for Dutch Lexicology in Leiden, the Max Planck Institute for Psycholinguistics in Nijmegen, and the Institute for Perception Research in Eindhoven. CELEX is maintained by the Max Planck Institute for Psycholinguistics. The CELEX database contains information on orthography, phonology, morphology, syntax and word frequency. It also contains syllabified words in Dutch and English transcribed using SAM-PA, CELEX, CPA, and DISC notations. The first three are variations of the International Phonetic Alphabet (IPA), in that each uses a standard ASCII character to represent each IPA character. 
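The paper uses a CRF output layer inside Keras; as a library-independent illustration of what decoding such a layer involves, here is a small numpy Viterbi decoder for the two-label case. The unary scores would come from the time-distributed dense layer and the transition matrix is a learned CRF parameter; the numbers below are invented for the example.

```python
import numpy as np

def viterbi_decode(unary, transition):
    """Best label sequence for a linear-chain CRF.
    unary: (n, 2) per-phone scores for {no boundary, boundary};
    transition: (2, 2) scores for moving from label i to label j."""
    n, k = unary.shape
    score = unary[0].copy()              # best score ending in each label at t = 0
    back = np.zeros((n, k), dtype=int)   # backpointers
    for t in range(1, n):
        total = score[:, None] + transition + unary[t][None, :]
        back[t] = total.argmax(axis=0)   # best previous label for each current label
        score = total.max(axis=0)
    labels = [int(score.argmax())]
    for t in range(n - 1, 0, -1):        # follow backpointers from the end
        labels.append(int(back[t][labels[-1]]))
    return labels[::-1]

# Toy usage: 5 phones, slight preference for a boundary after the 3rd phone.
unary = np.array([[2, 0], [2, 0], [0, 2], [2, 0], [2, 0]], dtype=float)
transition = np.array([[0.5, 0.0], [0.0, -0.5]])
print(viterbi_decode(unary, transition))  # [0, 0, 1, 0, 0]
```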
DISC is different than the other three in that it maps a distinct ASCII character to each phone in the sound systems of Dutch, English, and German BIBREF38. Different phonetic transcriptions are used in different datasets. Part of the strength of our proposed syllabifier is that every transcription can be used as-is without any additional modification to the syllabifier or the input sequences. The other datasets were hand-syllabified by linguists with the exception of the IIT-Guwahat dataset and the Festival dataset. Both IIT-Guwahat and Festival were initially syllabified with a naive algorithm and then each entry was confirmed or corrected by hand. For each dataset used to evaluate the proposed model, we compare our results with published accuracies of existing syllabification systems. Table TABREF21 shows the performance of well known and state of the art syllabifiers for each dataset. Liang's hyphenation algorithm is commonly known for its usage in . The patgen program was used to learn the rules of syllable boundaries BIBREF39. What we call Entropy CRF is a method particular to Manipuri; a rule-based component estimates the entropy of phones and phone clusters while a data-driven CRF component treats syllabification as a sequence modeling task BIBREF35. Experiments Each dataset used to evaluate the model was split into three groups: training, development, and test. Each training epoch iterated over the training set to optimize the model parameters. The development set was used to tune the hyperparameters of the model, such as the batch size and the phone embedding dimension. The test set was exclusively used for reporting the model accuracy. The datasets were split randomly by percentages 80 (training), 10 (development), and 10 (test). For the English CELEX dataset of $89,402$ words, this resulted in $71,522$ words for training and $8,940$ words for each development and training. For each experiment, models were initialized with a random set of parameter weights. BIBREF37 showed that differences in random number generation produce statistically significant variances in the accuracy of LSTM-based models. Due to the stochastic nature of neural network training, we performed each experiment 20 times. We report model accuracy as a mean and standard deviation of these experiment repetitions. Experiments ::: Data Cleaning Prior to splitting each dataset, a simple cleaning process had to be performed to remove unwanted entries. This cleaning involved removing all entries that had at least one other entry with the same word. It is important to note that two words being different does not necessitate a different pronunciation or syllabification. These entries with different words but same pronunciations were kept in the dataset. No other cleaning was needed for the datasets other than mapping the syllabified phone sequence to an input-target pair usable by our model for training and evaluation. This cleaning process contributes to the language-agnostic nature of this research. The simplicity of the cleaning process is enabled by the fact that the model is end to end; no external phonetic features are gathered, and any phonetic transcription can be accommodated in the training process. Experiments ::: Hyperparameter Specification For all experiments, models were trained with a batch size of 64. A limit of 120 epochs was imposed with early stopping after 10 unimproved epochs. Dropout was used for the input connection to the BiLSTM layer at $25\%$ BIBREF41. 
The learned embeddings layer had dimension $d=300$. The LSTM outputs, $\overrightarrow{h_i}$ and $\overleftarrow{h_i}$, both had dimension $l=300$. The convolutional to max pooling component was repeated twice before concatenation with the BiLSTM output. 200 convolutional filters were used and each had a dimension of 3. Finally, when using the Adam optimizer, we scaled the gradient norm when it exceeded $1.0$ using the Keras clipnorm parameter. All training was performed on single GPU machines on Amazon Web Services (AWS) servers which provided more than enough compute power. The average training of a model on the English CELEX dataset took approximately 45 minutes to reach convergence. Experiments ::: Results We tested three model versions against all datasets. The model we call Base is the BiLSTM-CNN-CRF model described in Section SECREF2 with the associated hyperparameters. Another model, Small, uses the same architecture as Base but reduces the number of convolutional layers to 1, the convolutional filters to 40, the LSTM dimension $l$ to 50, and the phone embedding size $d$ to 100. We also tested a Base-Softmax model, which replaces the CRF output of the Base model with a softmax. A comparison of the results of these three models can be seen in Table TABREF25. This comparison empirically motivates the CRF output because Base almost always outperforms Base-Softmax. Of these three models, the Base model performed the best with the exception of the French and Manipuri datasets. The differences in the French results can be considered negligible because the accuracies are all near $100\%$. The Small model performed best on Manipuri, which may suggest that reducing the number of parameters of the Base model leads to better accuracy on smaller datasets. When comparing our model with previous syllabifiers, we consider the Base model exclusively. In Table TABREF26, a side-by-side comparison of our Base model to a selection of published syllabifiers shows that Base is near state-of-the art performance on English CELEX. For the Dutch dataset, we report an accuracy of $99.47 \pm 0.04\%$, which improves on the previously best-known accuracy of $99.16\%$ from the HMM-SVM of BIBREF12. Best-known results are also obtained on the Italian, French, and Basque datasets. Our reported accuracy of $94.9 \pm 0.3\%$ on the Manipuri dataset is furthest from state of the art. We suspect this to be due to having limited amounts of training data; the $97.5\%$ accurate system from BIBREF35 supplemented their data-driven approach with rules of syllabification. Discussion Examples from the outputs of the Base model can give us insight into what the model does well and what types of words it struggles with. The total number of sounds across languages is vast, but not infinite, as Ladefoged and Maddieson's The Sounds of the the World's Languages demonstrates BIBREF42. Different languages choose different inventories from the total producible by the human vocal apparatus. Within a language, sounds and patterns of sound vary widely in frequency, though with considerable regularity. This regularity has led a generation of linguists to attempt to uncover rules that describe not only syntax, but sound as well. Chomsky and Halle's The Sound Pattern of English is the classic effort, first appearing in 1968 BIBREF43. It is not surprising that the earliest attempts to produce automatic syllabifiers were based on just such rule collections. 
Nor is it surprising that the best-known rule-based syllabifier was inspired by a doctoral dissertation at MIT, Noam Chomsky's home institution for five decades. An alternative approach is to recognize that 1) rules can be reconceptualized as probabilities and 2) native speakers of a language have internalized those very probabilities. Nevertheless, where there is probability, there is ambiguity. With all of these caveats in mind, a few examples have been selected from our results to showcase the model as shown in Table TABREF27. The syllabification of misinterpretation illustrates the model's ability to process longer words. Containing 14 phones and 5 syllables, this word demonstrates that the model's pattern finding technique works well regardless of the location of phonetic and syllabic patterns in the word. The model can accurately handle prefixes, correctly syllabifying mis- as Table TABREF27 shows. Another word is achieved. Inflected languages, such as English, use morphemes to distinguish mood, tense, case, and number, among others. Thus, the verb achieve has several forms, or conjugates. The syllabifier correctly detected the stem and the past tense morpheme, ed. An odd aspect of the English CELEX dataset is the occurrence of entries, $22,393$ of which, that either have hyphens or are multiple entirely separate words, such as public-address systems. Because the phonetic representation does not denote hyphens or whitespace, the model has difficulties processing these words. Conclusion We proposed a sequential neural network model that is capable of syllabifying phonetic sequences. This model is independent of any hand-crafted linguistic knowledge. We showed that this model performs at or near state of the art levels on a variety of datasets sampled from two Indo-European, one Sino-Tibetan, and an apparently family-less language. Specifically, the proposed model achieved accuracies higher than any other we could find on datasets from Dutch, Italian, French, and Basque languages and close to the best-reported accuracy for English and Manipuri. Evaluating the performance of the syllabifier across diverse languages provides strong evidence that the proposed model is language-agnostic. Conclusion ::: Future Work With a language-agnostic syllabification system, any language can be syllabified given enough labeled training data. A problem is that many languages do not have large, labeled syllabification datasets. For example, we failed to find available and sufficient datasets in the Slavic languages of Russian and Serbian. This problem can be addressed either in a concentrated effort to create more labeled data or in the development of systems that require limited data. Acknowledgment This research was supported in part by a Gonzaga University McDonald Work Award by Robert and Claire McDonald and an Amazon Web Services (AWS) grant through the Cloud Credits for Research program.
The datasets used are CELEX (English, Dutch), Festival (Italian), OpenLexique (French), IIT-Guwahati (Manipuri), and E-Hitz (Basque).
14634943d96ea036725898ab2e652c2948bd33eb
14634943d96ea036725898ab2e652c2948bd33eb_0
Q: What is the accuracy of the model for the six languages tested? Text: Introduction Words can be considered compositions of syllables, which in turn are compositions of phones. Phones are units of sound producible by the human vocal apparatus. Syllables play an important role in prosody and are influential components of natural language understanding, speech production, and speech recognition systems. Text-to-speech (TTS) systems can rely heavily on automatically syllabified phone sequences BIBREF0. One prominent example is Festival, an open source TTS system that relies on a syllabification algorithm to organize speech production BIBREF1. Linguists have recognized since the late 1940s that the syllable is a hierarchical structure, present in most, if not all, languages (though there is some disagreement on this score. See, for example, BIBREF2). An optional consonant onset is followed by a rime, which may be further decomposed into a high sonority vowel nucleus followed by an optional consonant coda. All languages appear to have at least the single syllable vowel ($V$) and the two syllable vowel-consonant ($VC$) forms in their syllable inventories. For example, oh and so in English. Most languages supplement these with codas to form the $\lbrace V, CV, VC, CVC\rbrace $ syllable inventory. Sonority rises from the consonant onset to the vowel nucleus and falls toward the consonant coda, as in the English pig. The components of the syllable obey the phonotactic constraints of the language in which they occur, and therein lies the question that motivates this research. Phonologists agree that the human vocal apparatus produces speech sounds that form a sonority hierarchy, from highest to lowest: vowels, glides, liquids, nasals, and obstruents. Examples are, come, twist, lack, ring, and cat, respectively. English, and other languages with complex syllable inventories, supplement the basic forms in ways that are usually consistent with the sonority hierarchy, where usually is the operative word. Thus, English permits double consonant onsets, as in twist with a consonant lower in the hierarchy (t, an obstruent) followed by a consonant one higher in the hierarchy (w, a glide). So sonority rises to the vowel, i, falls to the fricative, s, an obstruent, and falls further to another obstruent, t, still lower in the hierarchy. Yet p and w do not form a double consonant onset in English, probably because English avoids grouping sounds that use the same articulators, the lips, in this instance. Constructing an automatic syllabifier could be the process of encoding all rules such as these in the language under investigation. Another approach, one more congenial to the rising tide of so-called usage-based linguists (e.g, BIBREF3), is to recognize that the regularities of language formulated as rules can be usefully expressed as probabilities BIBREF4, BIBREF5, BIBREF6. An automatic syllabifier is a computer program that, given a word as a sequence of phones, divides the word into its component syllables, where the syllables are legal in the language under investigation. Approaches take the form of dictionary-based look-up procedures, rule-based systems, data-driven systems, and hybrids thereof BIBREF7. Dictionary look-ups are limited to phone sequences previously seen and thus cannot handle new vocabulary BIBREF8. Rule-based approaches can process previously unseen phone sequences by encoding linguistic knowledge. 
Formalized language-specific rules are developed by hand, necessarily accompanied by many exceptions, such as the one noted in the previous paragraph. An important example is the syllabification package tsylb, developed at the National Institute of Standards and Technology (NIST), which is based on Daniel Kahn's 1979 MIT dissertation BIBREF9, BIBREF10. Language particularity is a stumbling block for rule-based and other formal approaches to language such as Optimality Theory (OT), however much they strive for universality. Thus, T.A. Hall argues that the OT approach to syllabification found in BIBREF11 is superior to previous OT research as well as to Kahn's rule-based work, because both postulate language-specific structures without cross-linguistic motivation. From Hall's perspective, previous systems do not capture important cross-linguistic features of the syllable. In a word, the earlier systems require kludges, an issue for both builders of automatic, language-agnostic syllabifiers and theoretical linguists like Hall. Data-driven syllabification methods, like the one to be presented in this paper, have the potential to function across languages and to process new, out of dictionary words. For languages that have transcribed syllable data, data-driven approaches often outperform rule-based ones. BIBREF12 used a combined support vector machine (SVM) and hidden Markov model (HMM) to maximize the classification margin between a correct and incorrect syllable boundary. BIBREF13 used segmental conditional random fields (SCRF). The SCRF hybrid method statistically leveraged general principles of syllabification such as legality, sonority and maximal onset. Many other HMM-based labeling structures exist, such as evolved phonetic categorization and high order n-gram models with back-off BIBREF14, BIBREF15. Data-driven models are evaluated by word accuracy against transcribed datasets. Commonly, only one language or languages of the same family are used. The CELEX lexical database from BIBREF16 contains syllabifications of phone sequences for English, Dutch, and German. These three languages fall into the West Germanic language family, so the phonologies of each are closely related. Evaluating a model solely on these three languages, the approach taken in BIBREF13 and others, does not adequately test a model's generalized ability to learn diverse syllable structures. In this paper, we present a neural network that can syllabify phone sequences without introducing any fixed principles or rules of syllabification. We show that this novel approach to syllabification is language-agnostic by evaluating it on datasets of six languages, five from two major language families, and one that appears to be unrelated to any existing language. Method Syllabification can be considered a sequence labeling task where each label delineates the existence or absence of a syllable boundary. As such, syllabification has much in common with well-researched topics such as part-of-speech tagging, named-entity recognition, and chunking BIBREF17. Neural networks have recently outpaced more traditional methods in sequence labeling tasks. These neural-based approaches are taking the place of HMMs, maximum entropy Markov models (MEMM), and conditional random fields (CRF) BIBREF18. In the following section and in Fig. FIGREF1, we present a neural network architecture that leverages both recurrence and one-dimensional convolutions. 
Recurrence enables our model to read a sequence much like a human would; a sequence with elements $abcd$ would be read one element at a time, updating a latent understanding after reading each $a$, $b$, $c$, and finally $d$. One-dimensional convolutions extract a spatial relationship between sequential elements. The $abcd$ example sequence may then be read as $ab$, $bc$, $cd$. Explicitly recognizing this spatial relationship is beneficial in syllabification because a syllable is a local sub-sequence of phones within a word. The input to the model is a sequence of phones that together represent a word. We pad each phone sequence to a length of $n$ where $n$ is the length of the longest phone sequence. All inputs then take the form Each phone $p_i$ is mapped to a $d$-dimensional embedding vector $x_i$ resulting in where $x$ has a dimension of $d\times n$. Taken together, the phone embeddings represent the relationships between phones in a real-valued vector space. The embedding dimension $d$ is optimized as a model hyperparameter and has a large impact on overall model performance BIBREF19. As such, we carefully tune $d$ for the proposed Base model and reduce it for our Small model as described in Section SECREF24. The vector values of the phone embeddings are learned during each model training. Using learned embeddings enables the model to have a custom embedding space for each language that it is trained on. This is desirable because phonetic patterns differ from language to language. Also, learned embeddings allow the model to be trained using the input of any phonetic transcription. For example, one training of the model can use IPA and one can use SAMPA without needing to specify a mapping of one alphabet to another. Method ::: Bidirectional LSTM Recurrent neural networks (RNNs) differ from standard feed-forward neural networks in their treatment of input order; each element is processed given the context of the input that came before. RNNs operate on sequential data and can take many forms. Our network leverages the long short-term memory (LSTM) cell which is a prominent RNN variant capable of capturing long-term sequential dependencies BIBREF20. The gated memory cells of LSTM are an improvement over the standard RNN because the standard RNN is often biased toward short-term dependencies BIBREF21, BIBREF22. At each time step, the LSTM cell determines what information is important to introduce, to keep, and to output. This is done using an input gate, a forget gate, and an output gate shown in Fig. FIGREF5. LSTM operates in a single direction through time. This can be a limitation when a time step has both past dependency and future dependency. For example, a consonant sound may be the coda of a syllable earlier in the sequence or the onset of a syllable later in the sequence. Thus, processing a phonetic sequence in both the forward and backwards directions provides an improved context for assigning syllable boundaries. A bidirectional LSTM (BiLSTM) is formed when an LSTM moving forward through time is concatenated with an LSTM moving backward through time BIBREF23. We use the LSTM network as follows. The $x$ vector is fed through the LSTM network which outputs a vector $\overrightarrow{h_i}$ for each time step $i$ from 0 to $n-1$. This is the forward LSTM. As we have access to the complete vector $x$, we can process a backward LSTM as well. This is done by computing a vector $\overleftarrow{h_i}$ for each time step $i$ from $n-1$ to 0. 
Finally, we concatenate the backward LSTM with the forward LSTM: Both $\overrightarrow{h_i}$ and $\overleftarrow{h_i}$ have a dimension of $l$, which is an optimized hyperparameter. The BiLSTM output $h$ thus has dimension $2l\times n$. Method ::: CNN Convolutional neural networks (CNNs) are traditionally used in computer vision, but perform well in many text processing tasks that benefit from position-invariant abstractions BIBREF24, BIBREF25. These abstractions depend exclusively on local neighboring features rather than the position of features in a global structure. According to a comparative study by BIBREF26, BiLSTMs tend to outperform CNNs in sequential tasks such as POS tagging, but CNNs tend to outperform BiLSTMs in global relation detection tasks such as keyphrase matching for question answering. We use both the BiLSTM and the CNN in our network so that the strengths of each are incorporated. CNNs have been combined with BiLSTMs to perform state-of-the-art sequence tagging in both POS tagging and NER. BIBREF27 used BiLSTMs to process the word sequence while each word's character sequence was processed with CNNs to provide a second representation. In textual syllabification, the only input is the phone sequence. Both our BiLSTM and CNN components process the same input: the $x$ vector. We pad $x$ with $w-1$ $d$-dimensional zero vectors before $x_0$. A 1-dimensional convolutional filter of width $w$ processes a window $x_{i-w+1},...,x_i$ for all $i$ from 0 to $n-1$. To determine the output vector $c$, the convolutional filter performs a nonlinear weight and bias computation. Due to the padding of $x$, the resulting dimension of $c$ is $f\times n$ where $f$ is the number of filters used. A 1-dimensional max pooling is performed over $c$ with a stride of 1 which keeps the dimensionality unaltered. The pool size is an optimized hyperparameter that determines how many adjacent elements are used in the $max$ operation. The convolutional and max pooling components can be repeated to compute higher-level abstractions. As the convolutional and max pooling output is conformant to the BiLSTM output, we can concatenate them to create a combined vector with dimension $(2l+f)\times n$: Method ::: Output: Conditional Random Field We introduce a time-distributed fully connected layer over vector $o$, taking $o$ from a dimension of $(2l+f)\times n$ down to a dimension of $2\times n$. We do this because there are two class labels: either a syllable boundary or no syllable boundary. The output of the model is a sequence When $y_i\equiv 0$, there is no syllable boundary predicted to follow the phone $p_i$. When $y_i\equiv 1$, there is a syllable boundary predicted to follow $p_i$. Intuitively, we seek an output sequence $y$ that gives the highest $p(y|o)$. One approach calculates the softmax for each $o_i$: The softmax normalizes each $o_i$ to a probability distribution over the two discrete class labels. We can then model $p(y|o)$ by multiplying the maximum of each $s_i$ together: When using the softmax, $p(y|o)$ is calculated under the limiting assumption that each $o_i$ is independent. To more accurately model $p(y|o)$, we replace the softmax classifier with a conditional random field (CRF) BIBREF28. Specifically, we use a linear-chain CRF which is a sequential model that leverages both past and future output tags to model the output probability. 
The linear-chain CRF can be considered a sequential generalization of logistic regression classifiers as well as a discriminative analogue of hidden Markov models because it models $p(y|o)$ directly instead of modeling $p(o|y)$ BIBREF29. Using sequence-level tag information with a CRF has been shown to improve tag accuracy in the related tasks of POS tagging, chunking, and NER BIBREF30, BIBREF31. We use a linear-chain CRF to model the conditional distribution directly: where $Z(o)$ is the normalization function and $\theta $ is a learned parameter vector scaled by the set of transition feature functions $f$. Method ::: Training Training of the network parameters is performed using backpropagation. Using Keras, the backpropagation is automatically defined given the forward definition of the network. The defined loss function is sparse categorical cross entropy, in accordance with the real-valued probabilities given by the CRF output layer. Loss optimization is performed with the Adam optimizer BIBREF32. Adam was chosen because it adapts the learning rate on a parameter-to-parameter basis; strong convergence occurs at the end of optimization. Training is performed to a set number of epochs. Early stopping allows the network to conclude training if convergence is reached prior to reaching the epoch training limit BIBREF33. Materials The materials for this research comprises the software described above and several syllabified datasets. Materials ::: Software The implementation of our model was adapted from an open source code library designed for general-purpose sequence tagging and made available by BIBREF37. The modifications to this code include adding data preparation scripts and changing the model architecture to reflect the network architecture described above. Our code is made publicly available for future research at https://github.com/jacobkrantz/lstm-syllabify. Materials ::: Datasets To produce a language-agnostic syllabifier, it is crucial to test syllabification accuracy across different language families and language groupings within families. We selected six evaluation languages: English, Dutch, Italian, French, Basque, and Manipuri. These represent two language families (Indo-European, Sino-Tibetan), a language isolate thought to be unrelated to any existing language (Basque), and two different subfamilies within the Indo-European family (West Germanic, Romance). The primary constraint was the availability of syllabified datasets for training and testing. Table TABREF17 presents details of each dataset. Among the six languages we evaluate with, both English and Dutch are notable for the availability of rich datasets of phonetic and syllabic transcriptions. These are found in the CELEX (Dutch Centre for Lexical Information) database BIBREF16. CELEX was built jointly by the University of Nijmegen, the Institute for Dutch Lexicology in Leiden, the Max Planck Institute for Psycholinguistics in Nijmegen, and the Institute for Perception Research in Eindhoven. CELEX is maintained by the Max Planck Institute for Psycholinguistics. The CELEX database contains information on orthography, phonology, morphology, syntax and word frequency. It also contains syllabified words in Dutch and English transcribed using SAM-PA, CELEX, CPA, and DISC notations. The first three are variations of the International Phonetic Alphabet (IPA), in that each uses a standard ASCII character to represent each IPA character. 
DISC is different than the other three in that it maps a distinct ASCII character to each phone in the sound systems of Dutch, English, and German BIBREF38. Different phonetic transcriptions are used in different datasets. Part of the strength of our proposed syllabifier is that every transcription can be used as-is without any additional modification to the syllabifier or the input sequences. The other datasets were hand-syllabified by linguists with the exception of the IIT-Guwahat dataset and the Festival dataset. Both IIT-Guwahat and Festival were initially syllabified with a naive algorithm and then each entry was confirmed or corrected by hand. For each dataset used to evaluate the proposed model, we compare our results with published accuracies of existing syllabification systems. Table TABREF21 shows the performance of well known and state of the art syllabifiers for each dataset. Liang's hyphenation algorithm is commonly known for its usage in . The patgen program was used to learn the rules of syllable boundaries BIBREF39. What we call Entropy CRF is a method particular to Manipuri; a rule-based component estimates the entropy of phones and phone clusters while a data-driven CRF component treats syllabification as a sequence modeling task BIBREF35. Experiments Each dataset used to evaluate the model was split into three groups: training, development, and test. Each training epoch iterated over the training set to optimize the model parameters. The development set was used to tune the hyperparameters of the model, such as the batch size and the phone embedding dimension. The test set was exclusively used for reporting the model accuracy. The datasets were split randomly by percentages 80 (training), 10 (development), and 10 (test). For the English CELEX dataset of $89,402$ words, this resulted in $71,522$ words for training and $8,940$ words for each development and training. For each experiment, models were initialized with a random set of parameter weights. BIBREF37 showed that differences in random number generation produce statistically significant variances in the accuracy of LSTM-based models. Due to the stochastic nature of neural network training, we performed each experiment 20 times. We report model accuracy as a mean and standard deviation of these experiment repetitions. Experiments ::: Data Cleaning Prior to splitting each dataset, a simple cleaning process had to be performed to remove unwanted entries. This cleaning involved removing all entries that had at least one other entry with the same word. It is important to note that two words being different does not necessitate a different pronunciation or syllabification. These entries with different words but same pronunciations were kept in the dataset. No other cleaning was needed for the datasets other than mapping the syllabified phone sequence to an input-target pair usable by our model for training and evaluation. This cleaning process contributes to the language-agnostic nature of this research. The simplicity of the cleaning process is enabled by the fact that the model is end to end; no external phonetic features are gathered, and any phonetic transcription can be accommodated in the training process. Experiments ::: Hyperparameter Specification For all experiments, models were trained with a batch size of 64. A limit of 120 epochs was imposed with early stopping after 10 unimproved epochs. Dropout was used for the input connection to the BiLSTM layer at $25\%$ BIBREF41. 
The learned embeddings layer had dimension $d=300$. The LSTM outputs, $\overrightarrow{h_i}$ and $\overleftarrow{h_i}$, both had dimension $l=300$. The convolutional to max pooling component was repeated twice before concatenation with the BiLSTM output. 200 convolutional filters were used and each had a dimension of 3. Finally, when using the Adam optimizer, we scaled the gradient norm when it exceeded $1.0$ using the Keras clipnorm parameter. All training was performed on single GPU machines on Amazon Web Services (AWS) servers which provided more than enough compute power. The average training of a model on the English CELEX dataset took approximately 45 minutes to reach convergence. Experiments ::: Results We tested three model versions against all datasets. The model we call Base is the BiLSTM-CNN-CRF model described in Section SECREF2 with the associated hyperparameters. Another model, Small, uses the same architecture as Base but reduces the number of convolutional layers to 1, the convolutional filters to 40, the LSTM dimension $l$ to 50, and the phone embedding size $d$ to 100. We also tested a Base-Softmax model, which replaces the CRF output of the Base model with a softmax. A comparison of the results of these three models can be seen in Table TABREF25. This comparison empirically motivates the CRF output because Base almost always outperforms Base-Softmax. Of these three models, the Base model performed the best with the exception of the French and Manipuri datasets. The differences in the French results can be considered negligible because the accuracies are all near $100\%$. The Small model performed best on Manipuri, which may suggest that reducing the number of parameters of the Base model leads to better accuracy on smaller datasets. When comparing our model with previous syllabifiers, we consider the Base model exclusively. In Table TABREF26, a side-by-side comparison of our Base model to a selection of published syllabifiers shows that Base is near state-of-the art performance on English CELEX. For the Dutch dataset, we report an accuracy of $99.47 \pm 0.04\%$, which improves on the previously best-known accuracy of $99.16\%$ from the HMM-SVM of BIBREF12. Best-known results are also obtained on the Italian, French, and Basque datasets. Our reported accuracy of $94.9 \pm 0.3\%$ on the Manipuri dataset is furthest from state of the art. We suspect this to be due to having limited amounts of training data; the $97.5\%$ accurate system from BIBREF35 supplemented their data-driven approach with rules of syllabification. Discussion Examples from the outputs of the Base model can give us insight into what the model does well and what types of words it struggles with. The total number of sounds across languages is vast, but not infinite, as Ladefoged and Maddieson's The Sounds of the the World's Languages demonstrates BIBREF42. Different languages choose different inventories from the total producible by the human vocal apparatus. Within a language, sounds and patterns of sound vary widely in frequency, though with considerable regularity. This regularity has led a generation of linguists to attempt to uncover rules that describe not only syntax, but sound as well. Chomsky and Halle's The Sound Pattern of English is the classic effort, first appearing in 1968 BIBREF43. It is not surprising that the earliest attempts to produce automatic syllabifiers were based on just such rule collections. 
Nor is it surprising that the best-known rule-based syllabifier was inspired by a doctoral dissertation at MIT, Noam Chomsky's home institution for five decades. An alternative approach is to recognize that 1) rules can be reconceptualized as probabilities and 2) native speakers of a language have internalized those very probabilities. Nevertheless, where there is probability, there is ambiguity. With all of these caveats in mind, a few examples have been selected from our results to showcase the model as shown in Table TABREF27. The syllabification of misinterpretation illustrates the model's ability to process longer words. Containing 14 phones and 5 syllables, this word demonstrates that the model's pattern finding technique works well regardless of the location of phonetic and syllabic patterns in the word. The model can accurately handle prefixes, correctly syllabifying mis- as Table TABREF27 shows. Another word is achieved. Inflected languages, such as English, use morphemes to distinguish mood, tense, case, and number, among others. Thus, the verb achieve has several forms, or conjugates. The syllabifier correctly detected the stem and the past tense morpheme, ed. An odd aspect of the English CELEX dataset is the occurrence of entries, $22,393$ of which, that either have hyphens or are multiple entirely separate words, such as public-address systems. Because the phonetic representation does not denote hyphens or whitespace, the model has difficulties processing these words. Conclusion We proposed a sequential neural network model that is capable of syllabifying phonetic sequences. This model is independent of any hand-crafted linguistic knowledge. We showed that this model performs at or near state of the art levels on a variety of datasets sampled from two Indo-European, one Sino-Tibetan, and an apparently family-less language. Specifically, the proposed model achieved accuracies higher than any other we could find on datasets from Dutch, Italian, French, and Basque languages and close to the best-reported accuracy for English and Manipuri. Evaluating the performance of the syllabifier across diverse languages provides strong evidence that the proposed model is language-agnostic. Conclusion ::: Future Work With a language-agnostic syllabification system, any language can be syllabified given enough labeled training data. A problem is that many languages do not have large, labeled syllabification datasets. For example, we failed to find available and sufficient datasets in the Slavic languages of Russian and Serbian. This problem can be addressed either in a concentrated effort to create more labeled data or in the development of systems that require limited data. Acknowledgment This research was supported in part by a Gonzaga University McDonald Work Award by Robert and Claire McDonald and an Amazon Web Services (AWS) grant through the Cloud Credits for Research program.
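The architecture and hyperparameters laid out in the Method and Hyperparameter Specification sections above can be made concrete with a short Keras sketch. This is not the authors' implementation (their released code, linked in the Materials section, is the authoritative version): the phone-vocabulary size, padded sequence length, pool size, and exact placement of dropout are illustrative assumptions, and the CRF output of the full Base model is replaced by the softmax classifier that the Results section also evaluates as Base-Softmax.

import tensorflow as tf

NUM_PHONES = 64   # assumption: size of the phone vocabulary (dataset dependent)
MAX_LEN = 32      # assumption: length n that all phone sequences are padded to

inputs = tf.keras.Input(shape=(MAX_LEN,), dtype="int32")
emb = tf.keras.layers.Embedding(NUM_PHONES, 300)(inputs)            # learned phone embeddings, d = 300
h = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(300, return_sequences=True))(
    tf.keras.layers.Dropout(0.25)(emb))                             # BiLSTM with l = 300, 25% input dropout
c = emb
for _ in range(2):                                                   # conv/max-pool block repeated twice
    c = tf.keras.layers.Conv1D(200, 3, padding="causal",
                               activation="relu")(c)                 # 200 filters of width 3, left-padded
    c = tf.keras.layers.MaxPooling1D(pool_size=2, strides=1,
                                     padding="same")(c)              # stride 1 keeps the length; pool size assumed
o = tf.keras.layers.Concatenate()([h, c])                            # per-step dimension 2l + f
outputs = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Dense(2, activation="softmax"))(o)               # boundary / no boundary per phone

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(clipnorm=1.0),      # gradient norm scaled at 1.0
              loss="sparse_categorical_crossentropy")
model.summary()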
The authors report that their best models achieve the following accuracies: English CELEX (98.5%), Dutch CELEX (99.47%), Festival (99.990%), OpenLexique (100%), IIT-Guwahat (95.4%), E-Hitz (99.83%).
d71cb7f3aa585e256ca14eebdc358edfc3a9539c
d71cb7f3aa585e256ca14eebdc358edfc3a9539c_0
Q: Which models achieve state-of-the-art performances? Text: Introduction Words can be considered compositions of syllables, which in turn are compositions of phones. Phones are units of sound producible by the human vocal apparatus. Syllables play an important role in prosody and are influential components of natural language understanding, speech production, and speech recognition systems. Text-to-speech (TTS) systems can rely heavily on automatically syllabified phone sequences BIBREF0. One prominent example is Festival, an open source TTS system that relies on a syllabification algorithm to organize speech production BIBREF1. Linguists have recognized since the late 1940s that the syllable is a hierarchical structure, present in most, if not all, languages (though there is some disagreement on this score. See, for example, BIBREF2). An optional consonant onset is followed by a rime, which may be further decomposed into a high sonority vowel nucleus followed by an optional consonant coda. All languages appear to have at least the single syllable vowel ($V$) and the two syllable vowel-consonant ($VC$) forms in their syllable inventories. For example, oh and so in English. Most languages supplement these with codas to form the $\lbrace V, CV, VC, CVC\rbrace $ syllable inventory. Sonority rises from the consonant onset to the vowel nucleus and falls toward the consonant coda, as in the English pig. The components of the syllable obey the phonotactic constraints of the language in which they occur, and therein lies the question that motivates this research. Phonologists agree that the human vocal apparatus produces speech sounds that form a sonority hierarchy, from highest to lowest: vowels, glides, liquids, nasals, and obstruents. Examples are, come, twist, lack, ring, and cat, respectively. English, and other languages with complex syllable inventories, supplement the basic forms in ways that are usually consistent with the sonority hierarchy, where usually is the operative word. Thus, English permits double consonant onsets, as in twist with a consonant lower in the hierarchy (t, an obstruent) followed by a consonant one higher in the hierarchy (w, a glide). So sonority rises to the vowel, i, falls to the fricative, s, an obstruent, and falls further to another obstruent, t, still lower in the hierarchy. Yet p and w do not form a double consonant onset in English, probably because English avoids grouping sounds that use the same articulators, the lips, in this instance. Constructing an automatic syllabifier could be the process of encoding all rules such as these in the language under investigation. Another approach, one more congenial to the rising tide of so-called usage-based linguists (e.g, BIBREF3), is to recognize that the regularities of language formulated as rules can be usefully expressed as probabilities BIBREF4, BIBREF5, BIBREF6. An automatic syllabifier is a computer program that, given a word as a sequence of phones, divides the word into its component syllables, where the syllables are legal in the language under investigation. Approaches take the form of dictionary-based look-up procedures, rule-based systems, data-driven systems, and hybrids thereof BIBREF7. Dictionary look-ups are limited to phone sequences previously seen and thus cannot handle new vocabulary BIBREF8. Rule-based approaches can process previously unseen phone sequences by encoding linguistic knowledge. 
Formalized language-specific rules are developed by hand, necessarily accompanied by many exceptions, such as the one noted in the previous paragraph. An important example is the syllabification package tsylb, developed at the National Institute of Standards and Technology (NIST), which is based on Daniel Kahn's 1979 MIT dissertation BIBREF9, BIBREF10. Language particularity is a stumbling block for rule-based and other formal approaches to language such as Optimality Theory (OT), however much they strive for universality. Thus, T.A. Hall argues that the OT approach to syllabification found in BIBREF11 is superior to previous OT research as well as to Kahn's rule-based work, because both postulate language-specific structures without cross-linguistic motivation. From Hall's perspective, previous systems do not capture important cross-linguistic features of the syllable. In a word, the earlier systems require kludges, an issue for both builders of automatic, language-agnostic syllabifiers and theoretical linguists like Hall. Data-driven syllabification methods, like the one to be presented in this paper, have the potential to function across languages and to process new, out of dictionary words. For languages that have transcribed syllable data, data-driven approaches often outperform rule-based ones. BIBREF12 used a combined support vector machine (SVM) and hidden Markov model (HMM) to maximize the classification margin between a correct and incorrect syllable boundary. BIBREF13 used segmental conditional random fields (SCRF). The SCRF hybrid method statistically leveraged general principles of syllabification such as legality, sonority and maximal onset. Many other HMM-based labeling structures exist, such as evolved phonetic categorization and high order n-gram models with back-off BIBREF14, BIBREF15. Data-driven models are evaluated by word accuracy against transcribed datasets. Commonly, only one language or languages of the same family are used. The CELEX lexical database from BIBREF16 contains syllabifications of phone sequences for English, Dutch, and German. These three languages fall into the West Germanic language family, so the phonologies of each are closely related. Evaluating a model solely on these three languages, the approach taken in BIBREF13 and others, does not adequately test a model's generalized ability to learn diverse syllable structures. In this paper, we present a neural network that can syllabify phone sequences without introducing any fixed principles or rules of syllabification. We show that this novel approach to syllabification is language-agnostic by evaluating it on datasets of six languages, five from two major language families, and one that appears to be unrelated to any existing language. Method Syllabification can be considered a sequence labeling task where each label delineates the existence or absence of a syllable boundary. As such, syllabification has much in common with well-researched topics such as part-of-speech tagging, named-entity recognition, and chunking BIBREF17. Neural networks have recently outpaced more traditional methods in sequence labeling tasks. These neural-based approaches are taking the place of HMMs, maximum entropy Markov models (MEMM), and conditional random fields (CRF) BIBREF18. In the following section and in Fig. FIGREF1, we present a neural network architecture that leverages both recurrence and one-dimensional convolutions. 
Recurrence enables our model to read a sequence much like a human would; a sequence with elements $abcd$ would be read one element at a time, updating a latent understanding after reading each $a$, $b$, $c$, and finally $d$. One-dimensional convolutions extract a spatial relationship between sequential elements. The $abcd$ example sequence may then be read as $ab$, $bc$, $cd$. Explicitly recognizing this spatial relationship is beneficial in syllabification because a syllable is a local sub-sequence of phones within a word. The input to the model is a sequence of phones that together represent a word. We pad each phone sequence to a length of $n$ where $n$ is the length of the longest phone sequence. All inputs then take the form Each phone $p_i$ is mapped to a $d$-dimensional embedding vector $x_i$ resulting in where $x$ has a dimension of $d\times n$. Taken together, the phone embeddings represent the relationships between phones in a real-valued vector space. The embedding dimension $d$ is optimized as a model hyperparameter and has a large impact on overall model performance BIBREF19. As such, we carefully tune $d$ for the proposed Base model and reduce it for our Small model as described in Section SECREF24. The vector values of the phone embeddings are learned during each model training. Using learned embeddings enables the model to have a custom embedding space for each language that it is trained on. This is desirable because phonetic patterns differ from language to language. Also, learned embeddings allow the model to be trained using the input of any phonetic transcription. For example, one training of the model can use IPA and one can use SAMPA without needing to specify a mapping of one alphabet to another. Method ::: Bidirectional LSTM Recurrent neural networks (RNNs) differ from standard feed-forward neural networks in their treatment of input order; each element is processed given the context of the input that came before. RNNs operate on sequential data and can take many forms. Our network leverages the long short-term memory (LSTM) cell which is a prominent RNN variant capable of capturing long-term sequential dependencies BIBREF20. The gated memory cells of LSTM are an improvement over the standard RNN because the standard RNN is often biased toward short-term dependencies BIBREF21, BIBREF22. At each time step, the LSTM cell determines what information is important to introduce, to keep, and to output. This is done using an input gate, a forget gate, and an output gate shown in Fig. FIGREF5. LSTM operates in a single direction through time. This can be a limitation when a time step has both past dependency and future dependency. For example, a consonant sound may be the coda of a syllable earlier in the sequence or the onset of a syllable later in the sequence. Thus, processing a phonetic sequence in both the forward and backwards directions provides an improved context for assigning syllable boundaries. A bidirectional LSTM (BiLSTM) is formed when an LSTM moving forward through time is concatenated with an LSTM moving backward through time BIBREF23. We use the LSTM network as follows. The $x$ vector is fed through the LSTM network which outputs a vector $\overrightarrow{h_i}$ for each time step $i$ from 0 to $n-1$. This is the forward LSTM. As we have access to the complete vector $x$, we can process a backward LSTM as well. This is done by computing a vector $\overleftarrow{h_i}$ for each time step $i$ from $n-1$ to 0. 
Finally, we concatenate the backward LSTM with the forward LSTM: Both $\overrightarrow{h_i}$ and $\overleftarrow{h_i}$ have a dimension of $l$, which is an optimized hyperparameter. The BiLSTM output $h$ thus has dimension $2l\times n$. Method ::: CNN Convolutional neural networks (CNNs) are traditionally used in computer vision, but perform well in many text processing tasks that benefit from position-invariant abstractions BIBREF24, BIBREF25. These abstractions depend exclusively on local neighboring features rather than the position of features in a global structure. According to a comparative study by BIBREF26, BiLSTMs tend to outperform CNNs in sequential tasks such as POS tagging, but CNNs tend to outperform BiLSTMs in global relation detection tasks such as keyphrase matching for question answering. We use both the BiLSTM and the CNN in our network so that the strengths of each are incorporated. CNNs have been combined with BiLSTMs to perform state-of-the-art sequence tagging in both POS tagging and NER. BIBREF27 used BiLSTMs to process the word sequence while each word's character sequence was processed with CNNs to provide a second representation. In textual syllabification, the only input is the phone sequence. Both our BiLSTM and CNN components process the same input: the $x$ vector. We pad $x$ with $w-1$ $d$-dimensional zero vectors before $x_0$. A 1-dimensional convolutional filter of width $w$ processes a window $x_{i-w+1},...,x_i$ for all $i$ from 0 to $n-1$. To determine the output vector $c$, the convolutional filter performs a nonlinear weight and bias computation. Due to the padding of $x$, the resulting dimension of $c$ is $f\times n$ where $f$ is the number of filters used. A 1-dimensional max pooling is performed over $c$ with a stride of 1 which keeps the dimensionality unaltered. The pool size is an optimized hyperparameter that determines how many adjacent elements are used in the $max$ operation. The convolutional and max pooling components can be repeated to compute higher-level abstractions. As the convolutional and max pooling output is conformant to the BiLSTM output, we can concatenate them to create a combined vector with dimension $(2l+f)\times n$: Method ::: Output: Conditional Random Field We introduce a time-distributed fully connected layer over vector $o$, taking $o$ from a dimension of $(2l+f)\times n$ down to a dimension of $2\times n$. We do this because there are two class labels: either a syllable boundary or no syllable boundary. The output of the model is a sequence When $y_i\equiv 0$, there is no syllable boundary predicted to follow the phone $p_i$. When $y_i\equiv 1$, there is a syllable boundary predicted to follow $p_i$. Intuitively, we seek an output sequence $y$ that gives the highest $p(y|o)$. One approach calculates the softmax for each $o_i$: The softmax normalizes each $o_i$ to a probability distribution over the two discrete class labels. We can then model $p(y|o)$ by multiplying the maximum of each $s_i$ together: When using the softmax, $p(y|o)$ is calculated under the limiting assumption that each $o_i$ is independent. To more accurately model $p(y|o)$, we replace the softmax classifier with a conditional random field (CRF) BIBREF28. Specifically, we use a linear-chain CRF which is a sequential model that leverages both past and future output tags to model the output probability. 
The linear-chain CRF can be considered a sequential generalization of logistic regression classifiers as well as a discriminative analogue of hidden Markov models because it models $p(y|o)$ directly instead of modeling $p(o|y)$ BIBREF29. Using sequence-level tag information with a CRF has been shown to improve tag accuracy in the related tasks of POS tagging, chunking, and NER BIBREF30, BIBREF31. We use a linear-chain CRF to model the conditional distribution directly: where $Z(o)$ is the normalization function and $\theta $ is a learned parameter vector scaled by the set of transition feature functions $f$. Method ::: Training Training of the network parameters is performed using backpropagation. Using Keras, the backpropagation is automatically defined given the forward definition of the network. The defined loss function is sparse categorical cross entropy, in accordance with the real-valued probabilities given by the CRF output layer. Loss optimization is performed with the Adam optimizer BIBREF32. Adam was chosen because it adapts the learning rate on a parameter-to-parameter basis; strong convergence occurs at the end of optimization. Training is performed to a set number of epochs. Early stopping allows the network to conclude training if convergence is reached prior to reaching the epoch training limit BIBREF33. Materials The materials for this research comprises the software described above and several syllabified datasets. Materials ::: Software The implementation of our model was adapted from an open source code library designed for general-purpose sequence tagging and made available by BIBREF37. The modifications to this code include adding data preparation scripts and changing the model architecture to reflect the network architecture described above. Our code is made publicly available for future research at https://github.com/jacobkrantz/lstm-syllabify. Materials ::: Datasets To produce a language-agnostic syllabifier, it is crucial to test syllabification accuracy across different language families and language groupings within families. We selected six evaluation languages: English, Dutch, Italian, French, Basque, and Manipuri. These represent two language families (Indo-European, Sino-Tibetan), a language isolate thought to be unrelated to any existing language (Basque), and two different subfamilies within the Indo-European family (West Germanic, Romance). The primary constraint was the availability of syllabified datasets for training and testing. Table TABREF17 presents details of each dataset. Among the six languages we evaluate with, both English and Dutch are notable for the availability of rich datasets of phonetic and syllabic transcriptions. These are found in the CELEX (Dutch Centre for Lexical Information) database BIBREF16. CELEX was built jointly by the University of Nijmegen, the Institute for Dutch Lexicology in Leiden, the Max Planck Institute for Psycholinguistics in Nijmegen, and the Institute for Perception Research in Eindhoven. CELEX is maintained by the Max Planck Institute for Psycholinguistics. The CELEX database contains information on orthography, phonology, morphology, syntax and word frequency. It also contains syllabified words in Dutch and English transcribed using SAM-PA, CELEX, CPA, and DISC notations. The first three are variations of the International Phonetic Alphabet (IPA), in that each uses a standard ASCII character to represent each IPA character. 
DISC is different from the other three in that it maps a distinct ASCII character to each phone in the sound systems of Dutch, English, and German BIBREF38. Different phonetic transcriptions are used in different datasets. Part of the strength of our proposed syllabifier is that every transcription can be used as-is without any additional modification to the syllabifier or the input sequences. The other datasets were hand-syllabified by linguists with the exception of the IIT-Guwahat dataset and the Festival dataset. Both IIT-Guwahat and Festival were initially syllabified with a naive algorithm and then each entry was confirmed or corrected by hand. For each dataset used to evaluate the proposed model, we compare our results with published accuracies of existing syllabification systems. Table TABREF21 shows the performance of well-known and state-of-the-art syllabifiers for each dataset. Liang's hyphenation algorithm is commonly known for its usage in TeX. The patgen program was used to learn the rules of syllable boundaries BIBREF39. What we call Entropy CRF is a method particular to Manipuri; a rule-based component estimates the entropy of phones and phone clusters while a data-driven CRF component treats syllabification as a sequence modeling task BIBREF35. Experiments Each dataset used to evaluate the model was split into three groups: training, development, and test. Each training epoch iterated over the training set to optimize the model parameters. The development set was used to tune the hyperparameters of the model, such as the batch size and the phone embedding dimension. The test set was exclusively used for reporting the model accuracy. The datasets were split randomly by percentages 80 (training), 10 (development), and 10 (test). For the English CELEX dataset of $89,402$ words, this resulted in $71,522$ words for training and $8,940$ words each for the development and test sets. For each experiment, models were initialized with a random set of parameter weights. BIBREF37 showed that differences in random number generation produce statistically significant variances in the accuracy of LSTM-based models. Due to the stochastic nature of neural network training, we performed each experiment 20 times. We report model accuracy as a mean and standard deviation of these experiment repetitions. Experiments ::: Data Cleaning Prior to splitting each dataset, a simple cleaning process had to be performed to remove unwanted entries. This cleaning involved removing all entries that had at least one other entry with the same word. It is important to note that two words being different does not necessitate a different pronunciation or syllabification. These entries with different words but the same pronunciations were kept in the dataset. No other cleaning was needed for the datasets other than mapping the syllabified phone sequence to an input-target pair usable by our model for training and evaluation. This cleaning process contributes to the language-agnostic nature of this research. The simplicity of the cleaning process is enabled by the fact that the model is end-to-end; no external phonetic features are gathered, and any phonetic transcription can be accommodated in the training process. Experiments ::: Hyperparameter Specification For all experiments, models were trained with a batch size of 64. A limit of 120 epochs was imposed with early stopping after 10 unimproved epochs. Dropout was used for the input connection to the BiLSTM layer at $25\%$ BIBREF41.
The learned embeddings layer had dimension $d=300$. The LSTM outputs, $\overrightarrow{h_i}$ and $\overleftarrow{h_i}$, both had dimension $l=300$. The convolutional to max pooling component was repeated twice before concatenation with the BiLSTM output. 200 convolutional filters were used and each had a dimension of 3. Finally, when using the Adam optimizer, we scaled the gradient norm when it exceeded $1.0$ using the Keras clipnorm parameter. All training was performed on single GPU machines on Amazon Web Services (AWS) servers which provided more than enough compute power. The average training of a model on the English CELEX dataset took approximately 45 minutes to reach convergence. Experiments ::: Results We tested three model versions against all datasets. The model we call Base is the BiLSTM-CNN-CRF model described in Section SECREF2 with the associated hyperparameters. Another model, Small, uses the same architecture as Base but reduces the number of convolutional layers to 1, the convolutional filters to 40, the LSTM dimension $l$ to 50, and the phone embedding size $d$ to 100. We also tested a Base-Softmax model, which replaces the CRF output of the Base model with a softmax. A comparison of the results of these three models can be seen in Table TABREF25. This comparison empirically motivates the CRF output because Base almost always outperforms Base-Softmax. Of these three models, the Base model performed the best with the exception of the French and Manipuri datasets. The differences in the French results can be considered negligible because the accuracies are all near $100\%$. The Small model performed best on Manipuri, which may suggest that reducing the number of parameters of the Base model leads to better accuracy on smaller datasets. When comparing our model with previous syllabifiers, we consider the Base model exclusively. In Table TABREF26, a side-by-side comparison of our Base model to a selection of published syllabifiers shows that Base is near state-of-the-art performance on English CELEX. For the Dutch dataset, we report an accuracy of $99.47 \pm 0.04\%$, which improves on the previously best-known accuracy of $99.16\%$ from the HMM-SVM of BIBREF12. Best-known results are also obtained on the Italian, French, and Basque datasets. Our reported accuracy of $94.9 \pm 0.3\%$ on the Manipuri dataset is furthest from state of the art. We suspect this to be due to having limited amounts of training data; the $97.5\%$ accurate system from BIBREF35 supplemented their data-driven approach with rules of syllabification. Discussion Examples from the outputs of the Base model can give us insight into what the model does well and what types of words it struggles with. The total number of sounds across languages is vast, but not infinite, as Ladefoged and Maddieson's The Sounds of the World's Languages demonstrates BIBREF42. Different languages choose different inventories from the total producible by the human vocal apparatus. Within a language, sounds and patterns of sound vary widely in frequency, though with considerable regularity. This regularity has led a generation of linguists to attempt to uncover rules that describe not only syntax, but sound as well. Chomsky and Halle's The Sound Pattern of English is the classic effort, first appearing in 1968 BIBREF43. It is not surprising that the earliest attempts to produce automatic syllabifiers were based on just such rule collections.
Nor is it surprising that the best-known rule-based syllabifier was inspired by a doctoral dissertation at MIT, Noam Chomsky's home institution for five decades. An alternative approach is to recognize that 1) rules can be reconceptualized as probabilities and 2) native speakers of a language have internalized those very probabilities. Nevertheless, where there is probability, there is ambiguity. With all of these caveats in mind, a few examples have been selected from our results to showcase the model as shown in Table TABREF27. The syllabification of misinterpretation illustrates the model's ability to process longer words. Containing 14 phones and 5 syllables, this word demonstrates that the model's pattern finding technique works well regardless of the location of phonetic and syllabic patterns in the word. The model can accurately handle prefixes, correctly syllabifying mis- as Table TABREF27 shows. Another word is achieved. Inflected languages, such as English, use morphemes to distinguish mood, tense, case, and number, among others. Thus, the verb achieve has several forms, or conjugates. The syllabifier correctly detected the stem and the past tense morpheme, ed. An odd aspect of the English CELEX dataset is the occurrence of entries, $22,393$ of which, that either have hyphens or are multiple entirely separate words, such as public-address systems. Because the phonetic representation does not denote hyphens or whitespace, the model has difficulties processing these words. Conclusion We proposed a sequential neural network model that is capable of syllabifying phonetic sequences. This model is independent of any hand-crafted linguistic knowledge. We showed that this model performs at or near state of the art levels on a variety of datasets sampled from two Indo-European, one Sino-Tibetan, and an apparently family-less language. Specifically, the proposed model achieved accuracies higher than any other we could find on datasets from Dutch, Italian, French, and Basque languages and close to the best-reported accuracy for English and Manipuri. Evaluating the performance of the syllabifier across diverse languages provides strong evidence that the proposed model is language-agnostic. Conclusion ::: Future Work With a language-agnostic syllabification system, any language can be syllabified given enough labeled training data. A problem is that many languages do not have large, labeled syllabification datasets. For example, we failed to find available and sufficient datasets in the Slavic languages of Russian and Serbian. This problem can be addressed either in a concentrated effort to create more labeled data or in the development of systems that require limited data. Acknowledgment This research was supported in part by a Gonzaga University McDonald Work Award by Robert and Claire McDonald and an Amazon Web Services (AWS) grant through the Cloud Credits for Research program.
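The experimental protocol described in the Experiments sections above (duplicate-word cleaning, a random 80/10/10 split, and accuracy reported as the mean and standard deviation of 20 repeated runs) can be sketched in a few lines of Python. This is an illustration only, not the authors' code; the dictionary key "word" and the seed handling are assumptions.

import random
import statistics
from collections import Counter

def drop_ambiguous_words(entries):
    # Cleaning step: remove every entry whose word appears in more than one entry.
    counts = Counter(e["word"] for e in entries)
    return [e for e in entries if counts[e["word"]] == 1]

def split_80_10_10(entries, seed=0):
    # Random split into 80% training, 10% development, 10% test.
    shuffled = list(entries)
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train, n_dev = int(0.8 * n), int(0.1 * n)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_dev],
            shuffled[n_train + n_dev:])

def report(accuracies):
    # Word accuracy over 20 repetitions, reported as mean and standard deviation.
    return f"{statistics.mean(accuracies):.2f} +/- {statistics.stdev(accuracies):.2f}"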
CELEX (Dutch and English) - SVM-HMM; Festival, E-Hitz and OpenLexique - Liang hyphenation; IIT-Guwahat - Entropy CRF
f6556d2a8b42b133eaa361f562745edbe56c0b51
f6556d2a8b42b133eaa361f562745edbe56c0b51_0
Q: Is the LSTM bidirectional? Text: Introduction Words can be considered compositions of syllables, which in turn are compositions of phones. Phones are units of sound producible by the human vocal apparatus. Syllables play an important role in prosody and are influential components of natural language understanding, speech production, and speech recognition systems. Text-to-speech (TTS) systems can rely heavily on automatically syllabified phone sequences BIBREF0. One prominent example is Festival, an open source TTS system that relies on a syllabification algorithm to organize speech production BIBREF1. Linguists have recognized since the late 1940s that the syllable is a hierarchical structure, present in most, if not all, languages (though there is some disagreement on this score. See, for example, BIBREF2). An optional consonant onset is followed by a rime, which may be further decomposed into a high sonority vowel nucleus followed by an optional consonant coda. All languages appear to have at least the single syllable vowel ($V$) and the two syllable vowel-consonant ($VC$) forms in their syllable inventories. For example, oh and so in English. Most languages supplement these with codas to form the $\lbrace V, CV, VC, CVC\rbrace $ syllable inventory. Sonority rises from the consonant onset to the vowel nucleus and falls toward the consonant coda, as in the English pig. The components of the syllable obey the phonotactic constraints of the language in which they occur, and therein lies the question that motivates this research. Phonologists agree that the human vocal apparatus produces speech sounds that form a sonority hierarchy, from highest to lowest: vowels, glides, liquids, nasals, and obstruents. Examples are, come, twist, lack, ring, and cat, respectively. English, and other languages with complex syllable inventories, supplement the basic forms in ways that are usually consistent with the sonority hierarchy, where usually is the operative word. Thus, English permits double consonant onsets, as in twist with a consonant lower in the hierarchy (t, an obstruent) followed by a consonant one higher in the hierarchy (w, a glide). So sonority rises to the vowel, i, falls to the fricative, s, an obstruent, and falls further to another obstruent, t, still lower in the hierarchy. Yet p and w do not form a double consonant onset in English, probably because English avoids grouping sounds that use the same articulators, the lips, in this instance. Constructing an automatic syllabifier could be the process of encoding all rules such as these in the language under investigation. Another approach, one more congenial to the rising tide of so-called usage-based linguists (e.g, BIBREF3), is to recognize that the regularities of language formulated as rules can be usefully expressed as probabilities BIBREF4, BIBREF5, BIBREF6. An automatic syllabifier is a computer program that, given a word as a sequence of phones, divides the word into its component syllables, where the syllables are legal in the language under investigation. Approaches take the form of dictionary-based look-up procedures, rule-based systems, data-driven systems, and hybrids thereof BIBREF7. Dictionary look-ups are limited to phone sequences previously seen and thus cannot handle new vocabulary BIBREF8. Rule-based approaches can process previously unseen phone sequences by encoding linguistic knowledge. 
Formalized language-specific rules are developed by hand, necessarily accompanied by many exceptions, such as the one noted in the previous paragraph. An important example is the syllabification package tsylb, developed at the National Institute of Standards and Technology (NIST), which is based on Daniel Kahn's 1979 MIT dissertation BIBREF9, BIBREF10. Language particularity is a stumbling block for rule-based and other formal approaches to language such as Optimality Theory (OT), however much they strive for universality. Thus, T.A. Hall argues that the OT approach to syllabification found in BIBREF11 is superior to previous OT research as well as to Kahn's rule-based work, because both postulate language-specific structures without cross-linguistic motivation. From Hall's perspective, previous systems do not capture important cross-linguistic features of the syllable. In a word, the earlier systems require kludges, an issue for both builders of automatic, language-agnostic syllabifiers and theoretical linguists like Hall. Data-driven syllabification methods, like the one to be presented in this paper, have the potential to function across languages and to process new, out of dictionary words. For languages that have transcribed syllable data, data-driven approaches often outperform rule-based ones. BIBREF12 used a combined support vector machine (SVM) and hidden Markov model (HMM) to maximize the classification margin between a correct and incorrect syllable boundary. BIBREF13 used segmental conditional random fields (SCRF). The SCRF hybrid method statistically leveraged general principles of syllabification such as legality, sonority and maximal onset. Many other HMM-based labeling structures exist, such as evolved phonetic categorization and high order n-gram models with back-off BIBREF14, BIBREF15. Data-driven models are evaluated by word accuracy against transcribed datasets. Commonly, only one language or languages of the same family are used. The CELEX lexical database from BIBREF16 contains syllabifications of phone sequences for English, Dutch, and German. These three languages fall into the West Germanic language family, so the phonologies of each are closely related. Evaluating a model solely on these three languages, the approach taken in BIBREF13 and others, does not adequately test a model's generalized ability to learn diverse syllable structures. In this paper, we present a neural network that can syllabify phone sequences without introducing any fixed principles or rules of syllabification. We show that this novel approach to syllabification is language-agnostic by evaluating it on datasets of six languages, five from two major language families, and one that appears to be unrelated to any existing language. Method Syllabification can be considered a sequence labeling task where each label delineates the existence or absence of a syllable boundary. As such, syllabification has much in common with well-researched topics such as part-of-speech tagging, named-entity recognition, and chunking BIBREF17. Neural networks have recently outpaced more traditional methods in sequence labeling tasks. These neural-based approaches are taking the place of HMMs, maximum entropy Markov models (MEMM), and conditional random fields (CRF) BIBREF18. In the following section and in Fig. FIGREF1, we present a neural network architecture that leverages both recurrence and one-dimensional convolutions. 
Recurrence enables our model to read a sequence much like a human would; a sequence with elements $abcd$ would be read one element at a time, updating a latent understanding after reading each $a$, $b$, $c$, and finally $d$. One-dimensional convolutions extract a spatial relationship between sequential elements. The $abcd$ example sequence may then be read as $ab$, $bc$, $cd$. Explicitly recognizing this spatial relationship is beneficial in syllabification because a syllable is a local sub-sequence of phones within a word. The input to the model is a sequence of phones that together represent a word. We pad each phone sequence to a length of $n$ where $n$ is the length of the longest phone sequence. All inputs then take the form Each phone $p_i$ is mapped to a $d$-dimensional embedding vector $x_i$ resulting in where $x$ has a dimension of $d\times n$. Taken together, the phone embeddings represent the relationships between phones in a real-valued vector space. The embedding dimension $d$ is optimized as a model hyperparameter and has a large impact on overall model performance BIBREF19. As such, we carefully tune $d$ for the proposed Base model and reduce it for our Small model as described in Section SECREF24. The vector values of the phone embeddings are learned during each model training. Using learned embeddings enables the model to have a custom embedding space for each language that it is trained on. This is desirable because phonetic patterns differ from language to language. Also, learned embeddings allow the model to be trained using the input of any phonetic transcription. For example, one training of the model can use IPA and one can use SAMPA without needing to specify a mapping of one alphabet to another. Method ::: Bidirectional LSTM Recurrent neural networks (RNNs) differ from standard feed-forward neural networks in their treatment of input order; each element is processed given the context of the input that came before. RNNs operate on sequential data and can take many forms. Our network leverages the long short-term memory (LSTM) cell which is a prominent RNN variant capable of capturing long-term sequential dependencies BIBREF20. The gated memory cells of LSTM are an improvement over the standard RNN because the standard RNN is often biased toward short-term dependencies BIBREF21, BIBREF22. At each time step, the LSTM cell determines what information is important to introduce, to keep, and to output. This is done using an input gate, a forget gate, and an output gate shown in Fig. FIGREF5. LSTM operates in a single direction through time. This can be a limitation when a time step has both past dependency and future dependency. For example, a consonant sound may be the coda of a syllable earlier in the sequence or the onset of a syllable later in the sequence. Thus, processing a phonetic sequence in both the forward and backwards directions provides an improved context for assigning syllable boundaries. A bidirectional LSTM (BiLSTM) is formed when an LSTM moving forward through time is concatenated with an LSTM moving backward through time BIBREF23. We use the LSTM network as follows. The $x$ vector is fed through the LSTM network which outputs a vector $\overrightarrow{h_i}$ for each time step $i$ from 0 to $n-1$. This is the forward LSTM. As we have access to the complete vector $x$, we can process a backward LSTM as well. This is done by computing a vector $\overleftarrow{h_i}$ for each time step $i$ from $n-1$ to 0. 
Finally, we concatenate the backward LSTM with the forward LSTM: Both $\overrightarrow{h_i}$ and $\overleftarrow{h_i}$ have a dimension of $l$, which is an optimized hyperparameter. The BiLSTM output $h$ thus has dimension $2l\times n$. Method ::: CNN Convolutional neural networks (CNNs) are traditionally used in computer vision, but perform well in many text processing tasks that benefit from position-invariant abstractions BIBREF24, BIBREF25. These abstractions depend exclusively on local neighboring features rather than the position of features in a global structure. According to a comparative study by BIBREF26, BiLSTMs tend to outperform CNNs in sequential tasks such as POS tagging, but CNNs tend to outperform BiLSTMs in global relation detection tasks such as keyphrase matching for question answering. We use both the BiLSTM and the CNN in our network so that the strengths of each are incorporated. CNNs have been combined with BiLSTMs to perform state-of-the-art sequence tagging in both POS tagging and NER. BIBREF27 used BiLSTMs to process the word sequence while each word's character sequence was processed with CNNs to provide a second representation. In textual syllabification, the only input is the phone sequence. Both our BiLSTM and CNN components process the same input: the $x$ vector. We pad $x$ with $w-1$ $d$-dimensional zero vectors before $x_0$. A 1-dimensional convolutional filter of width $w$ processes a window $x_{i-w+1},...,x_i$ for all $i$ from 0 to $n-1$. To determine the output vector $c$, the convolutional filter performs a nonlinear weight and bias computation. Due to the padding of $x$, the resulting dimension of $c$ is $f\times n$ where $f$ is the number of filters used. A 1-dimensional max pooling is performed over $c$ with a stride of 1 which keeps the dimensionality unaltered. The pool size is an optimized hyperparameter that determines how many adjacent elements are used in the $max$ operation. The convolutional and max pooling components can be repeated to compute higher-level abstractions. As the convolutional and max pooling output is conformant to the BiLSTM output, we can concatenate them to create a combined vector with dimension $(2l+f)\times n$: Method ::: Output: Conditional Random Field We introduce a time-distributed fully connected layer over vector $o$, taking $o$ from a dimension of $(2l+f)\times n$ down to a dimension of $2\times n$. We do this because there are two class labels: either a syllable boundary or no syllable boundary. The output of the model is a sequence When $y_i\equiv 0$, there is no syllable boundary predicted to follow the phone $p_i$. When $y_i\equiv 1$, there is a syllable boundary predicted to follow $p_i$. Intuitively, we seek an output sequence $y$ that gives the highest $p(y|o)$. One approach calculates the softmax for each $o_i$: The softmax normalizes each $o_i$ to a probability distribution over the two discrete class labels. We can then model $p(y|o)$ by multiplying the maximum of each $s_i$ together: When using the softmax, $p(y|o)$ is calculated under the limiting assumption that each $o_i$ is independent. To more accurately model $p(y|o)$, we replace the softmax classifier with a conditional random field (CRF) BIBREF28. Specifically, we use a linear-chain CRF which is a sequential model that leverages both past and future output tags to model the output probability. 
The linear-chain CRF can be considered a sequential generalization of logistic regression classifiers as well as a discriminative analogue of hidden Markov models because it models $p(y|o)$ directly instead of modeling $p(o|y)$ BIBREF29. Using sequence-level tag information with a CRF has been shown to improve tag accuracy in the related tasks of POS tagging, chunking, and NER BIBREF30, BIBREF31. We use a linear-chain CRF to model the conditional distribution directly: where $Z(o)$ is the normalization function and $\theta $ is a learned parameter vector scaled by the set of transition feature functions $f$. Method ::: Training Training of the network parameters is performed using backpropagation. Using Keras, the backpropagation is automatically defined given the forward definition of the network. The defined loss function is sparse categorical cross entropy, in accordance with the real-valued probabilities given by the CRF output layer. Loss optimization is performed with the Adam optimizer BIBREF32. Adam was chosen because it adapts the learning rate on a parameter-to-parameter basis; strong convergence occurs at the end of optimization. Training is performed to a set number of epochs. Early stopping allows the network to conclude training if convergence is reached prior to reaching the epoch training limit BIBREF33. Materials The materials for this research comprises the software described above and several syllabified datasets. Materials ::: Software The implementation of our model was adapted from an open source code library designed for general-purpose sequence tagging and made available by BIBREF37. The modifications to this code include adding data preparation scripts and changing the model architecture to reflect the network architecture described above. Our code is made publicly available for future research at https://github.com/jacobkrantz/lstm-syllabify. Materials ::: Datasets To produce a language-agnostic syllabifier, it is crucial to test syllabification accuracy across different language families and language groupings within families. We selected six evaluation languages: English, Dutch, Italian, French, Basque, and Manipuri. These represent two language families (Indo-European, Sino-Tibetan), a language isolate thought to be unrelated to any existing language (Basque), and two different subfamilies within the Indo-European family (West Germanic, Romance). The primary constraint was the availability of syllabified datasets for training and testing. Table TABREF17 presents details of each dataset. Among the six languages we evaluate with, both English and Dutch are notable for the availability of rich datasets of phonetic and syllabic transcriptions. These are found in the CELEX (Dutch Centre for Lexical Information) database BIBREF16. CELEX was built jointly by the University of Nijmegen, the Institute for Dutch Lexicology in Leiden, the Max Planck Institute for Psycholinguistics in Nijmegen, and the Institute for Perception Research in Eindhoven. CELEX is maintained by the Max Planck Institute for Psycholinguistics. The CELEX database contains information on orthography, phonology, morphology, syntax and word frequency. It also contains syllabified words in Dutch and English transcribed using SAM-PA, CELEX, CPA, and DISC notations. The first three are variations of the International Phonetic Alphabet (IPA), in that each uses a standard ASCII character to represent each IPA character. 
DISC is different from the other three in that it maps a distinct ASCII character to each phone in the sound systems of Dutch, English, and German BIBREF38. Different phonetic transcriptions are used in different datasets. Part of the strength of our proposed syllabifier is that every transcription can be used as-is without any additional modification to the syllabifier or the input sequences. The other datasets were hand-syllabified by linguists with the exception of the IIT-Guwahat dataset and the Festival dataset. Both IIT-Guwahat and Festival were initially syllabified with a naive algorithm and then each entry was confirmed or corrected by hand. For each dataset used to evaluate the proposed model, we compare our results with published accuracies of existing syllabification systems. Table TABREF21 shows the performance of well-known and state-of-the-art syllabifiers for each dataset. Liang's hyphenation algorithm is commonly known for its usage in TeX. The patgen program was used to learn the rules of syllable boundaries BIBREF39. What we call Entropy CRF is a method particular to Manipuri; a rule-based component estimates the entropy of phones and phone clusters while a data-driven CRF component treats syllabification as a sequence modeling task BIBREF35. Experiments Each dataset used to evaluate the model was split into three groups: training, development, and test. Each training epoch iterated over the training set to optimize the model parameters. The development set was used to tune the hyperparameters of the model, such as the batch size and the phone embedding dimension. The test set was exclusively used for reporting the model accuracy. The datasets were split randomly by percentages 80 (training), 10 (development), and 10 (test). For the English CELEX dataset of $89,402$ words, this resulted in $71,522$ words for training and $8,940$ words each for the development and test sets. For each experiment, models were initialized with a random set of parameter weights. BIBREF37 showed that differences in random number generation produce statistically significant variances in the accuracy of LSTM-based models. Due to the stochastic nature of neural network training, we performed each experiment 20 times. We report model accuracy as a mean and standard deviation of these experiment repetitions. Experiments ::: Data Cleaning Prior to splitting each dataset, a simple cleaning process had to be performed to remove unwanted entries. This cleaning involved removing all entries that had at least one other entry with the same word. It is important to note that two words being different does not necessitate a different pronunciation or syllabification. These entries with different words but the same pronunciations were kept in the dataset. No other cleaning was needed for the datasets other than mapping the syllabified phone sequence to an input-target pair usable by our model for training and evaluation. This cleaning process contributes to the language-agnostic nature of this research. The simplicity of the cleaning process is enabled by the fact that the model is end-to-end; no external phonetic features are gathered, and any phonetic transcription can be accommodated in the training process. Experiments ::: Hyperparameter Specification For all experiments, models were trained with a batch size of 64. A limit of 120 epochs was imposed with early stopping after 10 unimproved epochs. Dropout was used for the input connection to the BiLSTM layer at $25\%$ BIBREF41.
The learned embeddings layer had dimension $d=300$. The LSTM outputs, $\overrightarrow{h_i}$ and $\overleftarrow{h_i}$, both had dimension $l=300$. The convolutional to max pooling component was repeated twice before concatenation with the BiLSTM output. 200 convolutional filters were used and each had a dimension of 3. Finally, when using the Adam optimizer, we scaled the gradient norm when it exceeded $1.0$ using the Keras clipnorm parameter. All training was performed on single GPU machines on Amazon Web Services (AWS) servers which provided more than enough compute power. The average training of a model on the English CELEX dataset took approximately 45 minutes to reach convergence. Experiments ::: Results We tested three model versions against all datasets. The model we call Base is the BiLSTM-CNN-CRF model described in Section SECREF2 with the associated hyperparameters. Another model, Small, uses the same architecture as Base but reduces the number of convolutional layers to 1, the convolutional filters to 40, the LSTM dimension $l$ to 50, and the phone embedding size $d$ to 100. We also tested a Base-Softmax model, which replaces the CRF output of the Base model with a softmax. A comparison of the results of these three models can be seen in Table TABREF25. This comparison empirically motivates the CRF output because Base almost always outperforms Base-Softmax. Of these three models, the Base model performed the best with the exception of the French and Manipuri datasets. The differences in the French results can be considered negligible because the accuracies are all near $100\%$. The Small model performed best on Manipuri, which may suggest that reducing the number of parameters of the Base model leads to better accuracy on smaller datasets. When comparing our model with previous syllabifiers, we consider the Base model exclusively. In Table TABREF26, a side-by-side comparison of our Base model to a selection of published syllabifiers shows that Base is near state-of-the-art performance on English CELEX. For the Dutch dataset, we report an accuracy of $99.47 \pm 0.04\%$, which improves on the previously best-known accuracy of $99.16\%$ from the HMM-SVM of BIBREF12. Best-known results are also obtained on the Italian, French, and Basque datasets. Our reported accuracy of $94.9 \pm 0.3\%$ on the Manipuri dataset is furthest from state of the art. We suspect this to be due to having limited amounts of training data; the $97.5\%$ accurate system from BIBREF35 supplemented their data-driven approach with rules of syllabification. Discussion Examples from the outputs of the Base model can give us insight into what the model does well and what types of words it struggles with. The total number of sounds across languages is vast, but not infinite, as Ladefoged and Maddieson's The Sounds of the World's Languages demonstrates BIBREF42. Different languages choose different inventories from the total producible by the human vocal apparatus. Within a language, sounds and patterns of sound vary widely in frequency, though with considerable regularity. This regularity has led a generation of linguists to attempt to uncover rules that describe not only syntax, but sound as well. Chomsky and Halle's The Sound Pattern of English is the classic effort, first appearing in 1968 BIBREF43. It is not surprising that the earliest attempts to produce automatic syllabifiers were based on just such rule collections.
Nor is it surprising that the best-known rule-based syllabifier was inspired by a doctoral dissertation at MIT, Noam Chomsky's home institution for five decades. An alternative approach is to recognize that 1) rules can be reconceptualized as probabilities and 2) native speakers of a language have internalized those very probabilities. Nevertheless, where there is probability, there is ambiguity. With all of these caveats in mind, a few examples have been selected from our results to showcase the model as shown in Table TABREF27. The syllabification of misinterpretation illustrates the model's ability to process longer words. Containing 14 phones and 5 syllables, this word demonstrates that the model's pattern finding technique works well regardless of the location of phonetic and syllabic patterns in the word. The model can accurately handle prefixes, correctly syllabifying mis- as Table TABREF27 shows. Another word is achieved. Inflected languages, such as English, use morphemes to distinguish mood, tense, case, and number, among others. Thus, the verb achieve has several forms, or conjugates. The syllabifier correctly detected the stem and the past tense morpheme, ed. An odd aspect of the English CELEX dataset is the occurrence of entries, $22,393$ of which, that either have hyphens or are multiple entirely separate words, such as public-address systems. Because the phonetic representation does not denote hyphens or whitespace, the model has difficulties processing these words. Conclusion We proposed a sequential neural network model that is capable of syllabifying phonetic sequences. This model is independent of any hand-crafted linguistic knowledge. We showed that this model performs at or near state of the art levels on a variety of datasets sampled from two Indo-European, one Sino-Tibetan, and an apparently family-less language. Specifically, the proposed model achieved accuracies higher than any other we could find on datasets from Dutch, Italian, French, and Basque languages and close to the best-reported accuracy for English and Manipuri. Evaluating the performance of the syllabifier across diverse languages provides strong evidence that the proposed model is language-agnostic. Conclusion ::: Future Work With a language-agnostic syllabification system, any language can be syllabified given enough labeled training data. A problem is that many languages do not have large, labeled syllabification datasets. For example, we failed to find available and sufficient datasets in the Slavic languages of Russian and Serbian. This problem can be addressed either in a concentrated effort to create more labeled data or in the development of systems that require limited data. Acknowledgment This research was supported in part by a Gonzaga University McDonald Work Award by Robert and Claire McDonald and an Amazon Web Services (AWS) grant through the Cloud Credits for Research program.
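The Method section above defines the model output as one label per phone, with $y_i \equiv 1$ marking a syllable boundary after phone $p_i$. The small helper below makes that mapping concrete by turning a predicted label sequence back into syllables; the example phones and labels are invented purely for illustration.

def apply_boundaries(phones, labels):
    # labels[i] == 1 means a syllable boundary follows phones[i].
    syllables, current = [], []
    for phone, label in zip(phones, labels):
        current.append(phone)
        if label == 1:
            syllables.append(current)
            current = []
    if current:
        syllables.append(current)
    return syllables

# Hypothetical example:
# apply_boundaries(["s", "o", "f", "a"], [0, 1, 0, 0]) -> [["s", "o"], ["f", "a"]]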
Yes
def3d623578bf84139d920886aa3bd6cdaaa7c41
def3d623578bf84139d920886aa3bd6cdaaa7c41_0
Q: What are the three languages studied in the paper? Text: Introduction Neural machine translation (NMT) systems are conventionally trained based on the approach of maximizing the log-likelihood on a training corpus in order to learn distributed representations of words according to their sentence context, which is highly demanding in terms of training data as well as the network capacity. Under conditions of lexical sparsity, which may include the cases when the amount of training examples is insufficient to observe words in different context, and particularly in translation of morphologically-rich languages, where the same word can have exponentially many different surface realizations due to syntactic conditions, which are often rarely or ever observed in any set of collected examples, the model may suffer in learning accurate representations of words. The standard approach to overcome this limitation is to replace the word representations in the model with subword units that are shared among words, which are, in principle, more reliable as they are observed more frequently in varying context BIBREF0, BIBREF1. One drawback related to this approach, however, is that the estimation of the subword vocabulary relies on word segmentation methods optimized using corpus-dependent statistics, disregarding any linguistic notion and the translation objective, which may result in morphological errors during splitting, resulting in subword units that are semantically ambiguous as they might be used in far too many lexical contexts BIBREF2. Moreover, the words are generated predicting multiple subword units, which makes generalizing to unseen word forms more difficult, where some of the subword units that could be used to reconstruct a given word may be unlikely in the given context. To alleviate the sub-optimal effects of using explicit segmentation and generalize better to new morphological forms, recent studies explored the idea of extending the same approach to model translation directly at the level of characters BIBREF3, BIBREF4, which, in turn, have demonstrated the requirement of using comparably deeper networks, as the network would then need to learn longer distance grammatical dependencies BIBREF5. In this paper, we explore the benefit of explicitly modeling variations in the surface forms of words using methods from deep latent variable modeling in order to improve the translation accuracy in low-resource and morphologically-rich languages. Latent variable models allow us to inject inductive biases relevant to the task, which, in our case, is word formation, and we believe that follows a certain hierarchical procedure. Our model translates words one character at a time based on word representations learned compositionally from sub-lexical components, which are parameterized by a hierarchical latent variable model mimicking the process of morphological inflection, consisting of a continuous-space dense vector capturing the lexical semantics, and a set of (approximately) discrete features, representing the morphosyntactic role of the word in a given sentence. Each word representation during decoding is reformulated based on the shared latent morphological features, aiding in learning more reliable representations of words under sparse settings by generalizing across their different surface forms. 
We evaluate our method in translating English into three morphologically-rich languages each with a distinct morphological typology: Arabic, Czech and Turkish, and show that our model is able to obtain better translation accuracy and generalization capacity than conventional approaches to open-vocabulary NMT. Evaluation ::: Models We evaluate our model by comparing it in machine translation against three baselines which constitute the conventional open-vocabulary NMT methods, including architectures using atomic parameterization either with subword units segmented with BPE BIBREF0 or characters, and the hierarchical parameterization method employed for generating all words in the output. We implement all architectures using Pytorch BIBREF6 within the OpenNMT-py framework BIBREF7. Evaluation ::: Data and Languages In order to evaluate our model we design two sets of experiments. The experiments in §SECREF8 aim to evaluate different methods under low-resource settings, for languages with different morphological typology. We model the machine translation task from English into three languages with distinct morphological characteristics: Arabic (templatic), Czech (fusional), and Turkish (agglutinative). We use the TED Talks corpora BIBREF8 for training the NMT models for these experiments. In §SECREF10, we conduct more experiments in Turkish to demonstrate the case of increased data sparsity using multi-domain training corpora, where we extend the training set using corpora from EU Bookshop BIBREF9, Global Voices, Gnome, Tatoeba, Ubuntu BIBREF10, KDE4 BIBREF11, Open Subtitles BIBREF12 and SETIMES BIBREF13. The statistical characteristics of the training sets are given in Tables TABREF16 and TABREF17. We use the official evaluation sets of the IWSLT for validating and testing the accuracy of the models. In order to increase the number of unknown and rare words in the evaluation sets we measure accuracy on large test sets combining evaluation sets from many years (Table TABREF18 presents the evaluation sets used for development and testing). The accuracy of each model output is measured using BLEU BIBREF15 and chrF3 BIBREF16 metrics, whereas the significance of the improvements are computed using bootstrap hypothesis testing BIBREF17. Evaluation ::: Training Settings All models are implemented using gated recurrent units (GRU) BIBREF18, and have a single-layer bi-RNN encoder. The source sides of the data used for training all NMT models, and the target sides of the data used in training the subword-level NMT models are segmented using BPE with 16,000 merge rules. We implement all decoders using a comparable number of GRU parameters, including 3-layer stacked-GRU subword and character-level decoders, where the attention is computed after the 1st layer BIBREF19 and a 3-layer hierarchical decoder which implements the attention mechanism after the 2nd layer. All models use an embedding dimension and GRU size of 512. The latent morphology model uses the same hierarchical GRU architecture, where the middle layer is augmented using 4 multi-layer perceptrons with 256 hidden units. We use a lemma vector dimension of 150, 10 inflectional features (See §SECREF21 for experiments conducted to tune the feature dimensions) and set the regularization constant to $\rho =0.4$. All models are trained using the Adam optimizer BIBREF20 with a batch size of 100, dropout rate of 0.2, learning rate of 0.0004 and learning rate decay of 0.8, applied when the perplexity does not decrease at a given epoch. 
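As an illustration of the optimization settings just described, here is a minimal PyTorch sketch (PyTorch is the framework the models are implemented in, via OpenNMT-py) of the Adam optimizer with the stated learning rate and a decay factor of 0.8 applied when validation perplexity stops improving. The use of ReduceLROnPlateau, the patience value, and the number of epochs are assumptions about how such a schedule might be wired up, not the authors' exact training loop.

```python
import torch

# Placeholder parameters standing in for any of the decoders described above.
params = [torch.nn.Parameter(torch.zeros(10))]

optimizer = torch.optim.Adam(params, lr=0.0004)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.8, patience=0)   # lr *= 0.8 when perplexity stalls

def validation_perplexity():
    # Stub standing in for a full validation pass; returns a constant here.
    return 12.3

for epoch in range(5):
    # ... one training epoch with batch size 100 and dropout 0.2 would go here ...
    scheduler.step(validation_perplexity())          # decay applied if perplexity did not decrease
```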
Translations are generated with beam search with a beam size of 5, where the hierarchical models implement the hierarchical beam search BIBREF21. Evaluation ::: Results ::: The Effect of Morphological Typology The experiment results given in Table TABREF9 shows the performance of each model in translating English into Arabic, Czech and Turkish. In Turkish, the most sparse target language in our benchmark, using character-based decoding shows to be more advantageous compared to the subword-level and hierarchical models, due to the fact that reduced granularity in the vocabulary units might aid in better predicting words under conditions of high data sparsity. In Arabic, on the other hand, using a hierarchical decoding model shows to be advantageous compared to the character-level decoder, as it might be useful in better learning syntactic dependencies, whereas it also outperforms the subword-level decoder. Using the latent morphology model provides improvements of 0.51 and 0.30 BLEU points in Arabic and Turkish over the best performing baselines, respectively. The fact that our model can efficiently work in both Arabic and Turkish suggests that it can handle the generation of both concatenative and non-concatenative morphological transformations. The results in the English-to-Czech translation direction do not indicate a specific advantage of using either method for generating fusional morphology, where morphemes are already optimized at the surface level, although our model is still able to achieve translation accuracy comparable to the character-level model. Evaluation ::: Results ::: The Effect of Data Size The experiment conducted in the English-to-Turkish translation direction by increasing the amount of training data with multi-domain corpora demonstrates a more challenging case, where there is a greater possibility of observing rare words, either in the form of morphological inflections due to the complex agglutinative morphology of Turkish, or ambiguous terminology raising from the multi-domain characteristics. In this experiment, the character-level model experiences a drop in performance and its accuracy is much lower than the subword-level one, suggesting that its capacity cannot cope with the increased amount of sparsity. Empirical results suggest that with increased capacity, character-level models carry the potential to reach comparable performance to subword-level models BIBREF4. Our model reaches a much larger improvement of 0.82 BLEU points over the subword-level and 2.54 BLEU points over the character-level decoders, suggesting that it could make use of the increased sparsity in learning more accurate representations. Evaluation ::: Results ::: Predicting Unseen Words In addition to general evaluation using automatic metrics, we perform a more focused analysis to illustrate the performance of different methods in predicting unseen words. We sample the sentences from the development sets which contain out-of-vocabulary words, and compute the average perplexity per character on these sentences using different NMT models, as suggested by BIBREF22. In general, the highest perplexities are obtained using the subword-based model, suggesting that generating unseen words using subword units is indeed increasing the difficulty of prediction, compared to the character-level which obtains the lowest perplexity. This result indicates that increased granularity aids in reducing the uncertainty during prediction. 
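The per-character perplexity used in this analysis follows the usual definition; the sketch below assumes we already have, for each selected sentence, the total negative log-likelihood assigned by a model to its character sequence, and exponentiates the average per character. How the per-sentence values are aggregated (corpus-level here) is an assumption.

```python
import math

def perplexity_per_character(sentence_nlls, sentence_lengths):
    """Average perplexity per character over a set of sentences.

    sentence_nlls: total negative log-likelihood (natural log) of each sentence
    sentence_lengths: number of characters in each sentence
    """
    total_nll = sum(sentence_nlls)
    total_chars = sum(sentence_lengths)
    return math.exp(total_nll / total_chars)

# Example with made-up numbers: two sentences scored by some decoder.
print(perplexity_per_character([35.2, 41.7], [28, 33]))
```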
Similar to the results in §SECREF8, in Czech the values are almost comparable. Due to its stochastic nature, our model yields higher perplexity values compared to the hierarchical model, whereas the values range between subword and character-based models, possibly finding an optimal level of granularity between the two solutions. Evaluation ::: Results ::: Feature Variations In order to understand whether the latent inflectional features in fact capture information about variations related to morphological transformations, we try generating different surface forms of the same lemma by assigning different values to the inflectional features. We use the decoder based on the latent morphology model to translate the English word `go', and after sampling the lemma, we fix its value and vary the values of the inflectional features at random positions to generate different outputs. Table TABREF14 presents different sets of feature values and the corresponding outputs generated by the decoder. The model generates different surface forms for different sets of features, confirming that latent variables encode information related to the infinitive form of the verb, as well as its formality conditions, prepositions, person, number and tense. We also observe that many trials based on different feature combinations may result in the same outputs, although some feature values may not be set in a single-word context. Varying the features individually does not necessarily yield distinct changes in the output, suggesting that some features may act jointly in determining the word form. Conclusion In this paper we presented a novel decoding architecture for NMT employing a hierarchical latent variable model to promote sparsity in lexical representations, which demonstrated promising application for morphologically-rich and low-resource languages. Our model generates words one character at a time by composing two latent features representing their lemmas and inflectional features. We evaluate our model against conventional open-vocabulary NMT solutions such as subword and character-level decoding methods in translating English into three morphologically-rich languages with different morphological typologies under low to mid-resource settings. Our results show that our model can significantly outperform subword-level NMT models, while demonstrating better capacity than character-level models in coping with increased amounts of data sparsity. We also conduct ablation studies on the effect of feature variations on the predictions, which show that despite being completely unsupervised, our model can in fact capture morphosyntactic information and generalize to different surface forms of words. Acknowledgments This project received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreements 825299 (GoURMET) and 688139 (SUMMA). Appendix ::: The Effect of Feature Dimensions We investigate the optimal lemma and inflectional feature sizes by measuring the accuracy in English-to-Turkish translation using different feature vector dimensions. The results given in Figure FIGREF22 show that gradually compressing the word representations computed by recurrent hidden states, with an original dimension of 512, from 500 to 100, leads to increased output accuracy, suggesting that encoding more compact representations might provide the model with a better generalization capability. Our results also show that using a feature dimension of 10 is sufficient to reach the best accuracy.
Arabic, Czech and Turkish
d51069595f67a3a53c044c8a37bae23facbfa45d
d51069595f67a3a53c044c8a37bae23facbfa45d_0
Q: Do they use pretrained models as part of their parser? Text: Introduction Semantic parsing aims to solve the problem of canonicalizing language and representing its meaning: given an input sentence, it aims to extract a semantic representation of that sentence. Abstract meaning representation BIBREF0 , or AMR for short, allows us to do that with the inclusion of most of the shallow-semantic natural language processing (NLP) tasks that are usually addressed separately, such as named entity recognition, semantic role labeling and co-reference resolution. AMR is partially motivated by the need to provide the NLP community with a single dataset that includes basic disambiguation information, instead of having to rely on different datasets for each disambiguation problem. The annotation process is straightforward, enabling the development of large datasets. Alternative semantic representations have been developed and studied, such as CCG BIBREF1 , BIBREF2 and UCCA BIBREF3 . Several parsers for AMR have been recently developed BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . This line of research is new and current results suggest a large room for improvement. Greedy transition-based methods BIBREF14 are one of the most popular choices for dependency parsing, because of their good balance between efficiency and accuracy. These methods seem promising also for AMR, due to the similarity between dependency trees and AMR structures, i.e., both representations use graphs with nodes that have lexical content and edges that represent linguistic relations. A transition system is an abstract machine characterized by a set of configurations and transitions between them. The basic components of a configuration are a stack of partially processed words and a buffer of unseen input words. Starting from an initial configuration, the system applies transitions until a terminal configuration is reached. The sentence is scanned left to right, with linear time complexity for dependency parsing. This is made possible by the use of a greedy classifier that chooses the transition to be applied at each step. In this paper we introduce a parser for AMR that is inspired by the ArcEager dependency transition system of nivre2004. The main difference between our system and ArcEager is that we need to account for the mapping from word tokens to AMR nodes, non-projectivity of AMR structures and reentrant nodes (multiple incoming edges). Our AMR parser brings closer dependency parsing and AMR parsing by showing that dependency parsing algorithms, with some modifications, can be used for AMR. Key properties such as working left-to-right, incrementality and linear complexity further strengthen its relevance. The AMR parser of wang2boosting, called CAMR, also defines a transition system. It differs from ours because we process the sentence left-to-right while they first acquire the entire dependency tree and then process it bottom-up. More recently emnlp2016 presented a non-greedy transition system for AMR parsing, based on ArcStandard BIBREF15 . Our transition system is also related to an adaptation of ArcEager for directed acyclic graphs (DAGs), introduced by sagae2008shift. This is also the basis for ribeyre2015because, a transition system used to parse dependency graphs. Similarly, du2014peking also address dependency graph parsing by means of transition systems. 
Analogously to dependency trees, dependency graphs have the property that their nodes consist of the word tokens, which is not true for AMR. As such, these transition systems are more closely related to traditional transition systems for dependency parsing. Our contributions in this paper are as follows: Transition-Based AMR Parsing Similarly to dependency parsing, AMR parsing is partially based on the identification of predicate-argument structures. Much of the dependency parsing literature focuses on transition-based dependency parsing—an approach to parsing that scans the sentence from left to right in linear time and updates an intermediate structure that eventually ends up being a dependency tree. The two most common transition systems for greedy dependency parsing are ArcStandard and ArcEager. With ArcStandard, a stack is maintained along with a buffer on which the left-to-right scan is performed. At each step, the parser chooses to scan a word in the buffer and shift it onto the stack, or else to create an arc between the two top-most elements in the stack and pop the dependent. ArcStandard parses a sentence in a pure bottom-up, left-to-right fashion (similarly to shift-reduce context-free grammar parsers), and must delay the construction of right arcs until all the dependent node has been completed. This imposes strong limitations on the degree of incrementality of the parser. The ArcEager system was designed to improve on ArcStandard by mixing bottom up and top-down strategies. More precisely, in the ArcEager parser left arcs are constructed bottom-up and right arcs are constructed top-down, so that right dependents can be attached to their heads even if some of their own dependents are not identified yet. In this way arcs are constructed as soon as the head and the dependent are available in the stack. Because of the similarity of AMR structures to dependency structures, transition systems are also helpful for AMR parsing. Starting from the ArcEager system, we develop here a novel transition system, called AmrEager that parses sentences into AMR structures. There are three key differences between AMRs and dependency trees that require further adjustments for dependency parsers to be used with AMRs. A key difference between English dependency trees and AMR structures is projectivity. Dependency trees in English are usually projective, roughly meaning that there are no crossing arcs if the edges are drawn in the semi-plane above the words. While this restriction is empirically motivated in syntactic theories for English, it is no longer motivated for AMR structures. The notion of projectivity can be generalized to AMR graphs as follows. The intuition is that we can use the alignment INLINEFORM0 to map AMR edges back to the sentence INLINEFORM1 , and test whether there exist pairs of crossing edges. Figure FIGREF13 shows this mapping for the AMR of Figure FIGREF7 , where the edge connecting excuse to I crosses another edge. More formally, consider an AMR edge INLINEFORM2 . Let INLINEFORM3 and INLINEFORM4 , so that INLINEFORM5 is aligned with INLINEFORM6 and INLINEFORM7 is aligned with INLINEFORM8 . The spanning set for INLINEFORM9 , written INLINEFORM10 , is the set of all nodes INLINEFORM11 such that INLINEFORM12 and INLINEFORM13 if INLINEFORM14 or INLINEFORM15 if INLINEFORM16 . We say that INLINEFORM17 is projective if, for every node INLINEFORM18 , all of its parent and child nodes are in INLINEFORM19 ; otherwise, we say that INLINEFORM20 is non-projective. 
An AMR is projective if all of its edges are projective, and is non-projective otherwise. This corresponds to the intuitive definition of projectivity for DAGs introduced in sagae2008shift and is closely related to the definition of non-crossing graphs of kuhlmann2015parsing. Table TABREF15 demonstrates that a relatively small percentage of all AMR edges are non-projective. Yet, 35% of the sentences contain at least one non-projective edge. https://github.com/jflanigan/jamr/blob/master/docs/Hand_Alignments.md AMRs are graphs rather than trees because they can have nodes with multiple parents, called reentrant nodes, as in the node you for the AMR of Figure FIGREF7 . There are two phenomena that cause reentrancies in AMR: control, where a reentrant edge appears between siblings of a control verb, and co-reference, where multiple mentions correspond to the same concept. In contrast, dependency trees do not have nodes with multiple parents. Therefore, when creating a new arc, transition systems for dependency parsing check that the dependent does not already have a head node, preventing the node from having additional parents. To handle reentrancy, which is not uncommon in AMR structures as shown in Table TABREF15 , we drop this constraint. Another main difference with dependency parsing is that in AMR there is no straightforward mapping between a word in the sentence and a node in the graph: words may generate no nodes, one node or multiple nodes. In addition, the labels at the nodes are often not easily determined by the word in the sentence. For instance expectation translates to expect-01 and teacher translates to the two nodes teach-01 and person, connected through an :ARG0 edge, expressing that a teacher is a person who teaches. A mechanism of concept identification is therefore required to map each token INLINEFORM0 to a subgraph with the correct labels at its nodes and edges: if INLINEFORM1 is the gold alignment, this should be the subgraph INLINEFORM2 defined in Equation ( EQREF11 ). To obtain alignments between the tokens in the sentence and the nodes in the AMR graph of our training data, we run the JAMR aligner. Transition system for AMR Parsing A stack INLINEFORM0 is a list of nodes of the partially constructed AMR graph, with the top element INLINEFORM1 at the right. We use the symbol ` INLINEFORM2 ' as the concatenation operator. A buffer INLINEFORM3 is a list of indices from INLINEFORM4 , with the first element INLINEFORM5 at the left, representing the word tokens from the input still to be processed. A configuration of our parser is a triple INLINEFORM6 , where INLINEFORM7 is the set of AMR edges that have been constructed up to this point. In order to introduce the transition actions of our parser we need some additional notation. We use a function INLINEFORM0 that maps indices from INLINEFORM1 to AMR graph fragments. For each INLINEFORM2 , INLINEFORM3 is a graph INLINEFORM4 , with single root INLINEFORM5 , representing the semantic contribution of word INLINEFORM6 to the AMR for INLINEFORM7 . As already mentioned, INLINEFORM8 can have a single node representing the concept associated with INLINEFORM9 , or it can have several nodes in case INLINEFORM10 denotes a complex concept, or it can be empty. The transition Shift is used to decide if and what to push on the stack after consuming a token from the buffer. Intuitively, the graph fragment INLINEFORM0 obtained from the token INLINEFORM1 , if not empty, is “merged” with the graph we have constructed so far. 
We then push onto the stack the node INLINEFORM2 for further processing. LArc INLINEFORM3 creates an edge with label INLINEFORM4 between the top-most node and the second top-most node in the stack, and pops the latter. RArc INLINEFORM5 is the symmetric operation, but does not pop any node from the stack. Finally, Reduce pops the top-most node from the stack, and it also recovers reentrant edges between its sibling nodes, capturing for instance several control verb patterns. To accomplish this, Reduce decides whether to create an additional edge between the node being removed and the previously created sibling in the partial graph. This way of handling control verbs is similar to the REENTRANCE transition of wang2boosting. The choice of popping the dependent in the LArc transition is inspired by ArcEager, where left-arcs are constructed bottom-up to increase the incrementality of the transition system BIBREF15 . This affects our ability to recover some reentrant edges: consider a node INLINEFORM0 with two parents INLINEFORM1 and INLINEFORM2 , where the arc INLINEFORM3 is a left-arc and INLINEFORM4 is any arc. If the first arc to be processed is INLINEFORM5 , we use LArc, which pops INLINEFORM6 , hence making it impossible to create the second arc INLINEFORM7 . Nevertheless, we discovered that this approach works better than a completely unrestricted allowance of reentrancy. The reason is that if we do not remove dependents at all when first attached to a node, the stack becomes larger, and nodes which should be connected end up being distant from each other, and as such, are never connected. The initial configuration of the system has a INLINEFORM0 node (representing the root) in the stack and the entire sentence in the buffer. The terminal configuration consists of an empty buffer and a stack with only the INLINEFORM1 node. The transitions required to parse the sentence The boy and the girl are shown in Table TABREF20 , where the first line shows the initial configuration and the last line shows the terminal configuration. Similarly to the transitions of ArcEager, the above transitions construct edges as soon as the head and the dependent are available in the stack, with the aim of maximizing the parser incrementality. We now show that our greedy transition-based AMR parser is linear-time in INLINEFORM0 , the length of the input sentence INLINEFORM1 . We first claim that the output graph has size INLINEFORM2 . Each token in INLINEFORM3 is mapped to a constant number of nodes in the graph by Shift. Thus the number of nodes is INLINEFORM4 . Furthermore, each node can have at most three parent nodes, created by transitions RArc, LArc and Reduce, respectively. Thus the number of edges is also INLINEFORM5 . It is possible to bound the maximum number of transitions required to parse INLINEFORM6 : the number of Shift is bounded by INLINEFORM7 , and the number of Reduce, LArc and RArc is bounded by the size of the graph, which is INLINEFORM8 . Since each transition can be carried out in constant time, we conclude that our parser runs in linear time. Training the System Several components have to be learned: (1) a transition classifier that predicts the next transition given the current configuration, (2) a binary classifier that decides whether or not to create a reentrancy after a Reduce, (3) a concept identification step for each Shift to compute INLINEFORM0 , and (4) another classifier to label edges after each LArc or RArc.
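Before turning to how these components are learned, the following Python sketch makes the four transitions described above concrete. It simplifies heavily: AMR subgraphs are reduced to a single root node, labels are plain strings, the direction of the reentrant edge recovered by Reduce is chosen arbitrarily, and no claim is made that this mirrors the authors' implementation.

```python
class Configuration:
    """(stack, buffer, edges) triple; the stack holds AMR nodes, the buffer token indices."""
    def __init__(self, tokens):
        self.stack = ["ROOT"]
        self.buffer = list(range(len(tokens)))
        self.edges = set()          # (head, label, dependent) triples

def shift(c, subgraph_root):
    """Consume the next token; push the root of its (possibly empty) graph fragment."""
    c.buffer.pop(0)
    if subgraph_root is not None:
        c.stack.append(subgraph_root)

def larc(c, label):
    """Edge from the top-most to the second top-most node; pop the dependent (the latter)."""
    top, second = c.stack[-1], c.stack[-2]
    c.edges.add((top, label, second))
    del c.stack[-2]

def rarc(c, label):
    """Symmetric edge (the second top-most node is the head); nothing is popped."""
    top, second = c.stack[-1], c.stack[-2]
    c.edges.add((second, label, top))

def reduce_(c, sibling=None, label=None):
    """Pop the top node; optionally recover a reentrant edge to a previously created sibling."""
    node = c.stack.pop()
    if sibling is not None:
        c.edges.add((sibling, label, node))   # edge direction is a simplification

def is_terminal(c):
    return not c.buffer and c.stack == ["ROOT"]
```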
Oracle Training our system from data requires an oracle—an algorithm that given a gold-standard AMR graph and a sentence returns transition sequences that maximize the overlap between the gold-standard graph and the graph dictated by the sequence of transitions. We adopt a shortest stack, static oracle similar to manningfast. Informally, static means that if the actual configuration of the parser has no mistakes, the oracle provides a transition that does not introduce any mistake. Shortest stack means that the oracle prefers transitions where the number of items in the stack is minimized. Given the current configuration INLINEFORM0 and the gold-standard graph INLINEFORM1 , the oracle is defined as follows, where we test the conditions in the given order and apply the action associated with the first match: if INLINEFORM0 then LArc( INLINEFORM1 ); if INLINEFORM0 then RArc( INLINEFORM1 ); if INLINEFORM0 then Reduce; Shift otherwise. The oracle first checks whether some gold-standard edge can be constructed from the two elements at the top of the stack (conditions 1 and 2). If LArc or RArc are not possible, the oracle checks whether all possible edges in the gold graph involving INLINEFORM0 have already been processed, in which case it chooses Reduce (conditions 3). To this end, it suffices to check the buffer, since LArc and RArc have already been excluded and elements in the stack deeper than position two can no longer be accessed by the parser. If Reduce is not possible, Shift is chosen. Besides deciding on the next transition, the oracle also needs the alignments, which we generate with JAMR, in order to know how to map the next token in the sentence to its AMR subgraph INLINEFORM0 defined in ( EQREF11 ). Transition Classifier Like all other transition systems of this kind, our transition system has a “controller” that predicts a transition given the current configuration (among Shift, LArc, RArc and Reduce). The examples from which we learn this controller are based on features extracted from the oracle transition sequences, where the oracle is applied on the training data. As a classifier, we use a feed-forward neural network with two hidden layers of 200 tanh units and learning rate set to 0.1, with linear decaying. The input to the network consists of the concatenation of embeddings for words, POS tags and Stanford parser dependencies, one-hot vectors for named entities and additional sparse features, extracted from the current configuration of the transition system; this is reported in more details in Table TABREF27 . The embeddings for words and POS tags were pre-trained on a large unannotated corpus consisting of the first 1 billion characters from Wikipedia. For lexical information, we also extract the leftmost (in the order of the aligned words) child (c), leftmost parent (p) and leftmost grandchild (cc). Leftmost and rightmost items are common features for transition-based parsers BIBREF17 , BIBREF18 but we found only leftmost to be helpful in our case. All POS tags, dependencies and named entities are generated using Stanford CoreNLP BIBREF19 . The accuracy of this classifier on the development set is 84%. Similarly, we train a binary classifier for deciding whether or not to create a reentrant edge after a Reduce: in this case we use word and POS embeddings for the two nodes being connected and their parent as well as dependency label embeddings for the arcs between them. 
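A sketch of the shortest-stack static oracle just described follows. The exact conditions are elided in this text (the INLINEFORM placeholders), so the code follows only the prose: prefer LArc/RArc when a gold edge links the two top-most stack nodes, choose Reduce when no gold edge still connects the stack top to anything aligned with the buffer, and Shift otherwise. Gold edges are assumed to be (head, label, dependent) triples over gold nodes, and aligned_nodes maps a buffer index to the gold nodes aligned with that token; both names are illustrative.

```python
def oracle(c, gold_edges, aligned_nodes):
    # Conditions 1 and 2: a gold edge between the two top-most stack nodes.
    if len(c.stack) >= 2:
        top, second = c.stack[-1], c.stack[-2]
        for head, label, dep in gold_edges:
            if head == top and dep == second:
                return ("LArc", label)
            if head == second and dep == top:
                return ("RArc", label)
    # Condition 3: every gold edge involving the stack top has been processed.
    # It suffices to check the buffer, since deeper stack items are unreachable.
    top = c.stack[-1]
    buffer_nodes = set()
    for j in c.buffer:
        buffer_nodes.update(aligned_nodes(j))
    pending = any(
        (head == top and dep in buffer_nodes) or (dep == top and head in buffer_nodes)
        for head, _, dep in gold_edges)
    if top != "ROOT" and not pending:
        return ("Reduce", None)
    return ("Shift", None)
```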
Concept Identification This routine is called every time the transition classifier decides to do a Shift; it is denoted by INLINEFORM0 in § SECREF3 . This component could be learned in a supervised manner, but we were not able to improve on a simple heuristic, which works as follows: during training, for each Shift decided by the oracle, we store the pair INLINEFORM1 in a phrase-table. During parsing, the most frequent graph INLINEFORM2 for the given token is then chosen. In other words, INLINEFORM3 approximates INLINEFORM4 by means of the graph most frequently seen among all occurrences of token INLINEFORM5 in the training set. An obvious problem with the phrase-table approach is that it does not generalize to unseen words. In addition, our heuristic relies on the fact that the mappings observed in the data are correct, which is not the case when the JAMR-generated alignments contain a mistake. In order to alleviate this problem we observe that there are classes of words such as named entities and numeric quantities that can be disambiguated in a deterministic manner. We therefore implement a set of “hooks” that are triggered by the named entity tag of the next token in the sentence. These hooks override the normal Shift mechanism and apply a fixed rule instead. For instance, when we see the token New York (the two tokens are collapsed into a single one at preprocessing) we generate the subgraph of Figure FIGREF30 and push its root onto the stack. Similar subgraphs are generated for all states, cities, countries and people. We also use hooks for ordinal numbers, percentages, money and dates. Edge Labeling Edge labeling determines the labels for the edges being created. Every time the transition classifier decides to take an LArc or RArc operation, the edge labeler needs to decide on a label for it. There are more than 100 possible labels such as :ARG0, :ARG0-of, :ARG1, :location, :time and :polarity. We use a feed-forward neural network similar to the one we trained for the transition classifier, with features shown in Table TABREF32 . The accuracy of this classifier on the development set is 77%. Sometimes the label predicted by the neural network does not satisfy the requirements of AMR: for instance, the label :top can only be applied when the node from which the edge starts is the special INLINEFORM0 node, and other constraints are used for the :polarity label and for edges attaching to numeric quantities. In order to avoid generating such erroneous labels, we use a set of rules, shown in Table TABREF34 . These rules determine which labels are allowed for the newly created edge so that we only consider those during prediction. Also ARG roles cannot always be applied: each Propbank frame allows a limited number of arguments. For example, while add-01 and add-02 allow for :ARG1 and :ARG2 (and their inverse :ARG1-of and :ARG2-of), add-03 and add-04 only allow :ARG2 (and :ARG2-of). Fine-grained Evaluation Until now, AMR parsers were evaluated using the Smatch score. Given the candidate graphs and the gold graphs in the form of AMR annotations, Smatch first tries to find the best alignments between the variable names for each pair of graphs and it then computes precision, recall and F1 of the concepts and relations.
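As an aside before discussing the shortcomings of Smatch, the phrase-table heuristic for concept identification described above can be sketched in a few lines. AMR fragments are treated here as opaque hashable objects, and the fallback to the named-entity and quantity hooks is only noted in a comment; the names are illustrative, not the authors' code.

```python
from collections import Counter, defaultdict

# During training, each Shift chosen by the oracle contributes one (token, fragment)
# observation; at parsing time the most frequent fragment for the token is chosen.
phrase_table = defaultdict(Counter)

def observe(token, fragment):
    phrase_table[token][fragment] += 1

def identify_concept(token):
    counts = phrase_table.get(token)
    if counts:
        return counts.most_common(1)[0][0]
    return None   # unseen word: handled by the named-entity / quantity hooks (not shown)
```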
We note that the Smatch score has two flaws: (1) while AMR parsing involves a large number of subtasks, the Smatch score consists of a single number that does not assess the quality of each subtasks separately; (2) the Smatch score weighs different types of errors in a way which is not necessarily useful for solving a specific NLP problem. For example, for a specific problem concept detection might be deemed more important than edge detection, or guessing the wrong sense for a concept might be considered less severe than guessing the wrong verb altogether. Consider the two parses for the sentence Silvio Berlusconi gave Lucio Stanca his current role of modernizing Italy's bureaucracy in Figure FIGREF36 . At the top, we show the output of a parser (Parse 1) that is not able to deal with named entities. At the bottom, we show the output of a parser (Parse 2) which, except for :name, :op and :wiki, always uses the edge label :ARG0. The Smatch scores for the two parses are 56 and 78 respectively. Both parses make obvious mistakes but the three named entity errors in Parse 1 are considered more important than the six wrong labels in Parse 2. However, without further analysis, it is not advisable to conclude that Parse 2 is better than Parse 1. In order to better understand the limitations of the different parsers, find their strengths and gain insight in which downstream tasks they may be helpful, we compute a set of metrics on the test set. Unlabeled is the Smatch score computed on the predicted graphs after removing all edge labels. In this way, we only assess the node labels and the graph topology, which may be enough to benefit several NLP tasks because it identifies basic predicate-argument structure. For instance, we may be interested in knowing whether two events or entities are related to each other, while not being concerned with the precise type of relation holding between them. No WSD gives a score that does not take into account word sense disambiguation errors. By ignoring the sense specified by the Propbank frame used (e.g., duck-01 vs duck-02) we have a score that does not take into account this additional complexity in the parsing procedure. To compute this score, we simply strip off the suffixes from all Propbank frames and calculate the Smatch score. Following sawai, we also evaluate the parsers using the Smatch score on noun phrases only (NP-only), by extracting from the AMR dataset all noun phrases that do not include further NPs. As we previously discussed, reentrancy is a very important characteristic of AMR graphs and it is not trivial to handle. We therefore implement a test for it (Reentrancy), where we compute the Smatch score only on reentrant edges. Concept identification is another critical component of the parsing process and we therefore compute the F-score on the list of predicted concepts (Concepts) too. Identifying the correct concepts is fundamental: if a concept is not identified, it will not be possible to retrieve any edge involving that concept, with likely significant consequences on accuracy. This metric is therefore quite important to score highly on. Similarly to our score for concepts, we further compute an F-score on the named entities (Named Ent.) and wiki roles for named entities (Wikification) that consider edges labeled with :name and :wiki respectively. These two metrics are strictly related to the concept score. 
However, since named entity recognition is the focus of dedicated research, we believe it is important to define a metric that specifically assesses this problem. Negation detection is another task which has received some attention. An F-score for this (Negations) is also defined, where we find all negated concepts by looking for the :polarity role. The reason we can compute a simple F-score instead of using Smatch for these metrics is that there are no variable names involved. Finally we compute the Smatch score on :ARG edges only, in order to have a score for semantic role labeling (SRL), which is another extremely important subtask of AMR, as it is based on the identification of predicate-argument structures. Using this evaluation suite we can evaluate AMRs on a wide range of metrics that can help us find strengths and weakness of each parser, hence speeding up the research in this area. Table TABREF37 reports the scores for the two parses of Figure FIGREF36 , where we see that Parse 1 gets a high score for semantic role labeling while Parse 2 is optimal for named entity recognition. Moreover, we can make additional observations such as that Parse 2 is optimal with respect to unlabeled score and that Parse 1 recovers more reentrancies. Experiments We compare our parser against two available parsers: JAMR BIBREF4 and CAMR BIBREF20 , BIBREF5 , using the LDC2015E86 dataset for evaluation. Both parsers are available online and were recently updated for SemEval-2016 Task 8 BIBREF21 , BIBREF22 . However, CAMR's SemEval system, which reports a Smatch score of 67, is not publicly available. CAMR has a quadratic worst-case complexity (although linear in practice). In JAMR, the concept identification step is quadratic and the relation identification step is INLINEFORM0 , with INLINEFORM1 being the set of nodes in the AMR graph. Table TABREF40 shows the results obtained by the parsers on all metrics previously introduced. On Smatch, our system does not give state-of-the-art results. However, we do obtain the best results for Unlabeled and Concept and outperform the other parses for Named Ent. and Negations. Our score of Reentrancy is also close the best scoring system, which is particularly relevant given the importance of reentrancies in AMR. The use of the Reduce transition, which targets reentrancies caused by control verbs, is critical in order to achieve this result. The relatively high results we obtain for the unlabeled case suggests that our parser has difficulty in labeling the arcs. Our score for concept identification, which is on par with the best result from the other parsers, demonstrates that there is a relatively low level of token ambiguity. State-of-the-art results for this problem can be obtained by choosing the most frequent subgraph for a given token based on a phrase-table constructed from JAMR alignments on the training data. The scores for named entities and wikification are heavily dependent on the hooks mentioned in § SECREF29 , which in turn relies on the named entity recognizer to make the correct predictions. In order to alleviate the problem of wrong automatic alignments with respect to polarity and better detect negation, we performed a post-processing step on the aligner output where we align the AMR constant - (minus) with words bearing negative polarity such as not, illegitimate and asymmetry. Our experiments demonstrate that there is no parser for AMR yet that conclusively does better than all other parsers on all metrics. 
Advantages of our parser are the worst-case linear complexity and the fact that it is possible to perform incremental AMR parsing, which is helpful both for real-time applications and for investigating how the meaning of English sentences can be built incrementally left-to-right. Related Work The first data-driven AMR parser is due to carbonell2014discriminative. The problem is addressed in two separate stages: concept identification and relation identification. They use a sequence labeling algorithm to identify concepts and frame the relation prediction task as a constrained combinatorial optimization problem. werling2015robust notice that the difficult bit is the concept identification and propose a better way to handle that task: an action classifier to generate concepts by applying predetermined actions. Other proposals involve a synchronous hyperedge replacement grammar solution BIBREF6 , a syntax-based machine translation approach BIBREF7 where a grammar of string-to-tree rules is created after reducing AMR graphs to trees by removing all reentrancies, and a CCG system that first parses sentences into lambda-calculus representations BIBREF11 . A systematic translation from AMR to first order logic formulas, with a special treatment for quantification, reentrancy and negation, is discussed in bos2016expressive. In microsoft, a pre-existing logical form parser is used and the output is then converted into AMR graphs. Yet another solution is proposed by searnamr who discuss a parser that uses SEARN BIBREF23 , a “learning to search” algorithm. Transition-based algorithms for AMR parsing are compelling because traditional graph-based techniques are computationally expensive. wang and wang2boosting propose a framework that parses a sentence into its AMR structure through a two-stage process: a dependency tree is generated from the input sentence through a transition-based parser and then another transition-based parser is used to generate the AMR. The main benefit of this approach is that the dependency parser can be trained on a training set much larger than the training set for the tree-to-graph algorithm. Others further built on this parser: goodman2016noise use imitation learning to alleviate the problem of error propagation in the greedy parser, while barzdins2016riga create a wrapper around it to fix frequent mistakes and investigate ensembles with a character level neural parser. More recently emnlp2016 presented a non-greedy transition system for AMR parsing, based on ArcStandard BIBREF15 . AMR parsing as a whole is a complex task because it involves many subtasks including named entity recognition, co-reference resolution and semantic role labeling. sawai do not attempt to parse AMR graphs for entire sentences; instead, they handle simple noun phrases (NPs). They extract NPs from the AMR dataset only when they do not include further NPs, pronouns or named entities. Due to these restrictions, the AMRs are mostly trees and easier to handle than the original AMR graphs. They approach this task using a transition-based system inspired by ArcStandard. AMR is not the only way to represent meaning in natural language sentences. Alternative semantic representations have been developed and studied, such as Boxer BIBREF24 , CCG BIBREF1 , BIBREF2 and UCCA BIBREF3 . Conclusion We presented a transition system that builds AMR graphs in linear time by processing the sentences left-to-right, trained with feed-forward neural networks.
The parser demonstrates that it is possible to perform AMR parsing using techniques inspired by dependency parsing. We also noted that it is less informative to evaluate the entire parsing process with Smatch than to use a collection of metrics aimed at evaluating the various subproblems in the parsing process. We further showed that our left-to-right transition system is competitive with publicly available state-of-the-art parsers. Although we do not outperform the best baseline in terms of Smatch score, we show on par or better results for several of the metrics proposed. We hope that moving away from a single-metric evaluation will further speed up progress in AMR parsing. Acknowledgments The authors would like to thank the three anonymous reviewers and Sameer Bansal, Jeff Flanigan, Sorcha Gilroy, Adam Lopez, Nikos Papasarantopoulos, Nathan Schneider, Mark Steedman, Sam Thomson, Clara Vania and Chuan Wang for their help and comments. This research was supported by a grant from Bloomberg and by the H2020 project SUMMA, under grant agreement 688139.
Yes
1a6e2bd41ee43df83fef2a1c1941e6f95a619ae8
1a6e2bd41ee43df83fef2a1c1941e6f95a619ae8_0
Q: Which subtasks do they evaluate on? Text: Introduction Semantic parsing aims to solve the problem of canonicalizing language and representing its meaning: given an input sentence, it aims to extract a semantic representation of that sentence. Abstract meaning representation BIBREF0 , or AMR for short, allows us to do that with the inclusion of most of the shallow-semantic natural language processing (NLP) tasks that are usually addressed separately, such as named entity recognition, semantic role labeling and co-reference resolution. AMR is partially motivated by the need to provide the NLP community with a single dataset that includes basic disambiguation information, instead of having to rely on different datasets for each disambiguation problem. The annotation process is straightforward, enabling the development of large datasets. Alternative semantic representations have been developed and studied, such as CCG BIBREF1 , BIBREF2 and UCCA BIBREF3 . Several parsers for AMR have been recently developed BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . This line of research is new and current results suggest a large room for improvement. Greedy transition-based methods BIBREF14 are one of the most popular choices for dependency parsing, because of their good balance between efficiency and accuracy. These methods seem promising also for AMR, due to the similarity between dependency trees and AMR structures, i.e., both representations use graphs with nodes that have lexical content and edges that represent linguistic relations. A transition system is an abstract machine characterized by a set of configurations and transitions between them. The basic components of a configuration are a stack of partially processed words and a buffer of unseen input words. Starting from an initial configuration, the system applies transitions until a terminal configuration is reached. The sentence is scanned left to right, with linear time complexity for dependency parsing. This is made possible by the use of a greedy classifier that chooses the transition to be applied at each step. In this paper we introduce a parser for AMR that is inspired by the ArcEager dependency transition system of nivre2004. The main difference between our system and ArcEager is that we need to account for the mapping from word tokens to AMR nodes, non-projectivity of AMR structures and reentrant nodes (multiple incoming edges). Our AMR parser brings closer dependency parsing and AMR parsing by showing that dependency parsing algorithms, with some modifications, can be used for AMR. Key properties such as working left-to-right, incrementality and linear complexity further strengthen its relevance. The AMR parser of wang2boosting, called CAMR, also defines a transition system. It differs from ours because we process the sentence left-to-right while they first acquire the entire dependency tree and then process it bottom-up. More recently emnlp2016 presented a non-greedy transition system for AMR parsing, based on ArcStandard BIBREF15 . Our transition system is also related to an adaptation of ArcEager for directed acyclic graphs (DAGs), introduced by sagae2008shift. This is also the basis for ribeyre2015because, a transition system used to parse dependency graphs. Similarly, du2014peking also address dependency graph parsing by means of transition systems. 
Analogously to dependency trees, dependency graphs have the property that their nodes consist of the word tokens, which is not true for AMR. As such, these transition systems are more closely related to traditional transition systems for dependency parsing. Our contributions in this paper are as follows: Transition-Based AMR Parsing Similarly to dependency parsing, AMR parsing is partially based on the identification of predicate-argument structures. Much of the dependency parsing literature focuses on transition-based dependency parsing—an approach to parsing that scans the sentence from left to right in linear time and updates an intermediate structure that eventually ends up being a dependency tree. The two most common transition systems for greedy dependency parsing are ArcStandard and ArcEager. With ArcStandard, a stack is maintained along with a buffer on which the left-to-right scan is performed. At each step, the parser chooses to scan a word in the buffer and shift it onto the stack, or else to create an arc between the two top-most elements in the stack and pop the dependent. ArcStandard parses a sentence in a pure bottom-up, left-to-right fashion (similarly to shift-reduce context-free grammar parsers), and must delay the construction of right arcs until all the dependent node has been completed. This imposes strong limitations on the degree of incrementality of the parser. The ArcEager system was designed to improve on ArcStandard by mixing bottom up and top-down strategies. More precisely, in the ArcEager parser left arcs are constructed bottom-up and right arcs are constructed top-down, so that right dependents can be attached to their heads even if some of their own dependents are not identified yet. In this way arcs are constructed as soon as the head and the dependent are available in the stack. Because of the similarity of AMR structures to dependency structures, transition systems are also helpful for AMR parsing. Starting from the ArcEager system, we develop here a novel transition system, called AmrEager that parses sentences into AMR structures. There are three key differences between AMRs and dependency trees that require further adjustments for dependency parsers to be used with AMRs. A key difference between English dependency trees and AMR structures is projectivity. Dependency trees in English are usually projective, roughly meaning that there are no crossing arcs if the edges are drawn in the semi-plane above the words. While this restriction is empirically motivated in syntactic theories for English, it is no longer motivated for AMR structures. The notion of projectivity can be generalized to AMR graphs as follows. The intuition is that we can use the alignment INLINEFORM0 to map AMR edges back to the sentence INLINEFORM1 , and test whether there exist pairs of crossing edges. Figure FIGREF13 shows this mapping for the AMR of Figure FIGREF7 , where the edge connecting excuse to I crosses another edge. More formally, consider an AMR edge INLINEFORM2 . Let INLINEFORM3 and INLINEFORM4 , so that INLINEFORM5 is aligned with INLINEFORM6 and INLINEFORM7 is aligned with INLINEFORM8 . The spanning set for INLINEFORM9 , written INLINEFORM10 , is the set of all nodes INLINEFORM11 such that INLINEFORM12 and INLINEFORM13 if INLINEFORM14 or INLINEFORM15 if INLINEFORM16 . We say that INLINEFORM17 is projective if, for every node INLINEFORM18 , all of its parent and child nodes are in INLINEFORM19 ; otherwise, we say that INLINEFORM20 is non-projective. 
An AMR is projective if all of its edges are projective, and is non-projective otherwise. This corresponds to the intuitive definition of projectivity for DAGs introduced in sagae2008shift and is closely related to the definition of non-crossing graphs of kuhlmann2015parsing. Table TABREF15 demonstrates that a relatively small percentage of all AMR edges are non-projective. Yet, 35% of the sentences contain at least one non-projective edge. https://github.com/jflanigan/jamr/blob/master/docs/Hand_Alignments.md AMRs are graphs rather than trees because they can have nodes with multiple parents, called reentrant nodes, as in the node you for the AMR of Figure FIGREF7 . There are two phenomena that cause reentrancies in AMR: control, where a reentrant edge appears between siblings of a control verb, and co-reference, where multiple mentions correspond to the same concept. In contrast, dependency trees do not have nodes with multiple parents. Therefore, when creating a new arc, transition systems for dependency parsing check that the dependent does not already have a head node, preventing the node from having additional parents. To handle reentrancy, which is not uncommon in AMR structures as shown in Table TABREF15 , we drop this constraint. Another main difference with dependency parsing is that in AMR there is no straightforward mapping between a word in the sentence and a node in the graph: words may generate no nodes, one node or multiple nodes. In addition, the labels at the nodes are often not easily determined by the word in the sentence. For instance expectation translates to expect-01 and teacher translates to the two nodes teach-01 and person, connected through an :ARG0 edge, expressing that a teacher is a person who teaches. A mechanism of concept identification is therefore required to map each token INLINEFORM0 to a subgraph with the correct labels at its nodes and edges: if INLINEFORM1 is the gold alignment, this should be the subgraph INLINEFORM2 defined in Equation ( EQREF11 ). To obtain alignments between the tokens in the sentence and the nodes in the AMR graph of our training data, we run the JAMR aligner. Transition system for AMR Parsing A stack INLINEFORM0 is a list of nodes of the partially constructed AMR graph, with the top element INLINEFORM1 at the right. We use the symbol ` INLINEFORM2 ' as the concatenation operator. A buffer INLINEFORM3 is a list of indices from INLINEFORM4 , with the first element INLINEFORM5 at the left, representing the word tokens from the input still to be processed. A configuration of our parser is a triple INLINEFORM6 , where INLINEFORM7 is the set of AMR edges that have been constructed up to this point. In order to introduce the transition actions of our parser we need some additional notation. We use a function INLINEFORM0 that maps indices from INLINEFORM1 to AMR graph fragments. For each INLINEFORM2 , INLINEFORM3 is a graph INLINEFORM4 , with single root INLINEFORM5 , representing the semantic contribution of word INLINEFORM6 to the AMR for INLINEFORM7 . As already mentioned, INLINEFORM8 can have a single node representing the concept associated with INLINEFORM9 , or it can have several nodes in case INLINEFORM10 denotes a complex concept, or it can be empty. The transition Shift is used to decide if and what to push on the stack after consuming a token from the buffer. Intuitively, the graph fragment INLINEFORM0 obtained from the token INLINEFORM1 , if not empty, is “merged” with the graph we have constructed so far. 
We then push onto the stack the node INLINEFORM2 for further processing. LArc INLINEFORM3 creates an edge with label INLINEFORM4 between the top-most node and the second top-most node in the stack, and pops the latter. RArc INLINEFORM5 is the symmetric operation, but does not pop any node from the stack. Finally, Reduce pops the top-most node from the stack, and it also recovers reentrant edges between the node being removed and its sibling nodes, capturing for instance several control verb patterns. To accomplish this, Reduce decides whether to create an additional edge between the node being removed and the previously created sibling in the partial graph. This way of handling control verbs is similar to the REENTRANCE transition of wang2boosting. The choice of popping the dependent in the LArc transition is inspired by ArcEager, where left-arcs are constructed bottom-up to increase the incrementality of the transition system BIBREF15 . This affects our ability to recover some reentrant edges: consider a node INLINEFORM0 with two parents INLINEFORM1 and INLINEFORM2 , where the arc INLINEFORM3 is a left-arc and INLINEFORM4 is any arc. If the first arc to be processed is INLINEFORM5 , we use LArc, which pops INLINEFORM6 , hence making it impossible to create the second arc INLINEFORM7 . Nevertheless, we discovered that this approach works better than a completely unrestricted allowance of reentrancy. The reason is that if we do not remove dependents at all when they are first attached to a node, the stack becomes larger, and nodes which should be connected end up being distant from each other and, as such, are never connected. The initial configuration of the system has a INLINEFORM0 node (representing the root) in the stack and the entire sentence in the buffer. The terminal configuration consists of an empty buffer and a stack with only the INLINEFORM1 node. The transitions required to parse the sentence The boy and the girl are shown in Table TABREF20 , where the first line shows the initial configuration and the last line shows the terminal configuration. Similarly to the transitions of ArcEager, the above transitions construct edges as soon as the head and the dependent are available in the stack, with the aim of maximizing the parser's incrementality. We now show that our greedy transition-based AMR parser is linear-time in INLINEFORM0 , the length of the input sentence INLINEFORM1 . We first claim that the output graph has size INLINEFORM2 . Each token in INLINEFORM3 is mapped to a constant number of nodes in the graph by Shift. Thus the number of nodes is INLINEFORM4 . Furthermore, each node can have at most three parent nodes, created by transitions RArc, LArc and Reduce, respectively. Thus the number of edges is also INLINEFORM5 . It is possible to bound the maximum number of transitions required to parse INLINEFORM6 : the number of Shift transitions is bounded by INLINEFORM7 , and the number of Reduce, LArc and RArc transitions is bounded by the size of the graph, which is INLINEFORM8 . Since each transition can be carried out in constant time, we conclude that our parser runs in linear time. Training the System Several components have to be learned: (1) a transition classifier that predicts the next transition given the current configuration, (2) a binary classifier that decides whether or not to create a reentrancy after a Reduce, (3) a concept identification step for each Shift to compute INLINEFORM0 , and (4) another classifier to label edges after each LArc or RArc. 
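To make the transition system concrete before turning to how these components are learned, the sketch below implements the four transitions over a (stack, buffer, edges) configuration. This is an illustrative re-implementation based on the description above, not the authors' code: nodes are plain strings rather than graph fragments, and the toy transition sequence and labels at the end are our own (the actual oracle sequence for this sentence is the one shown in Table TABREF20).

```python
# A configuration is (stack, buffer, edges): a stack of graph nodes (top at the right),
# a buffer of remaining token indices, and a set of (head, label, dependent) edges.

def shift(stack, buffer, edges, fragment_root=None):
    """Consume the next token; if its graph fragment is non-empty, push the fragment's root."""
    new_stack = stack + [fragment_root] if fragment_root is not None else stack
    return new_stack, buffer[1:], edges

def rarc(stack, buffer, edges, label):
    """Edge from the second top-most node (head) to the top-most node (dependent); nothing is popped."""
    return stack, buffer, edges | {(stack[-2], label, stack[-1])}

def larc(stack, buffer, edges, label):
    """Edge from the top-most node (head) to the second top-most node (dependent); the dependent is popped."""
    return stack[:-2] + [stack[-1]], buffer, edges | {(stack[-1], label, stack[-2])}

def reduce_(stack, buffer, edges, reentrant_edge=None):
    """Pop the top-most node, optionally adding a reentrant edge to a previously created sibling."""
    return stack[:-1], buffer, edges | ({reentrant_edge} if reentrant_edge else set())

# Illustrative run for "The boy and the girl", assuming "the" maps to an empty fragment.
c = (["<ROOT>"], [0, 1, 2, 3, 4], set())
c = shift(*c)                          # The
c = shift(*c, fragment_root="boy")     # boy
c = shift(*c, fragment_root="and")     # and
c = larc(*c, label=":op1")             # and -:op1-> boy; boy is popped
c = shift(*c)                          # the
c = shift(*c, fragment_root="girl")    # girl
c = rarc(*c, label=":op2")             # and -:op2-> girl
c = reduce_(*c)                        # pop girl
c = rarc(*c, label=":top")             # <ROOT> -:top-> and
c = reduce_(*c)                        # pop and; terminal: stack == ["<ROOT>"], buffer == []
print(c)
```

Following the text, LArc treats the top-most node as the head and pops the second top-most node (the dependent), while RArc creates the arc in the opposite direction and leaves the stack untouched.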
Oracle Training our system from data requires an oracle—an algorithm that, given a gold-standard AMR graph and a sentence, returns transition sequences that maximize the overlap between the gold-standard graph and the graph dictated by the sequence of transitions. We adopt a shortest-stack, static oracle similar to manningfast. Informally, static means that if the actual configuration of the parser has no mistakes, the oracle provides a transition that does not introduce any mistake. Shortest stack means that the oracle prefers transitions where the number of items in the stack is minimized. Given the current configuration INLINEFORM0 and the gold-standard graph INLINEFORM1 , the oracle is defined as follows, where we test the conditions in the given order and apply the action associated with the first match: if INLINEFORM0 then LArc( INLINEFORM1 ); if INLINEFORM0 then RArc( INLINEFORM1 ); if INLINEFORM0 then Reduce; Shift otherwise. The oracle first checks whether some gold-standard edge can be constructed from the two elements at the top of the stack (conditions 1 and 2). If LArc or RArc are not possible, the oracle checks whether all possible edges in the gold graph involving INLINEFORM0 have already been processed, in which case it chooses Reduce (condition 3). To this end, it suffices to check the buffer, since LArc and RArc have already been excluded and elements in the stack deeper than position two can no longer be accessed by the parser. If Reduce is not possible, Shift is chosen. Besides deciding on the next transition, the oracle also needs the alignments, which we generate with JAMR, in order to know how to map the next token in the sentence to its AMR subgraph INLINEFORM0 defined in ( EQREF11 ). Transition Classifier Like all other transition systems of this kind, our transition system has a “controller” that predicts a transition given the current configuration (among Shift, LArc, RArc and Reduce). The examples from which we learn this controller are based on features extracted from the oracle transition sequences, where the oracle is applied to the training data. As a classifier, we use a feed-forward neural network with two hidden layers of 200 tanh units and a learning rate of 0.1 with linear decay. The input to the network consists of the concatenation of embeddings for words, POS tags and Stanford parser dependencies, one-hot vectors for named entities and additional sparse features, extracted from the current configuration of the transition system; this is reported in more detail in Table TABREF27 . The embeddings for words and POS tags were pre-trained on a large unannotated corpus consisting of the first 1 billion characters from Wikipedia. For lexical information, we also extract the leftmost (in the order of the aligned words) child (c), leftmost parent (p) and leftmost grandchild (cc). Leftmost and rightmost items are common features for transition-based parsers BIBREF17 , BIBREF18 , but we found only leftmost to be helpful in our case. All POS tags, dependencies and named entities are generated using Stanford CoreNLP BIBREF19 . The accuracy of this classifier on the development set is 84%. Similarly, we train a binary classifier for deciding whether or not to create a reentrant edge after a Reduce: in this case we use word and POS embeddings for the two nodes being connected and their parent, as well as dependency label embeddings for the arcs between them. 
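The sketch below spells out this static oracle as we read it from the description above; the precise formal conditions are the ones given in the paper, and the helper node_of_token, which maps a buffer position to its aligned gold node (or None), is our own simplification of the JAMR alignment.

```python
def oracle(stack, buffer, gold_edges, built_edges, node_of_token):
    """Static, shortest-stack oracle: choose the next transition for the current configuration.

    gold_edges and built_edges are sets of (head, label, dependent) triples over gold nodes;
    node_of_token maps a buffer index to its aligned gold node, or None."""
    remaining = gold_edges - built_edges
    if len(stack) >= 2:
        top, second = stack[-1], stack[-2]
        for head, label, dep in remaining:
            if head == top and dep == second:      # condition 1: gold edge top -> second
                return "LArc", label
        for head, label, dep in remaining:
            if head == second and dep == top:      # condition 2: gold edge second -> top
                return "RArc", label
    if len(stack) > 1:                             # never pop the root node
        top = stack[-1]
        # Condition 3: if no remaining gold edge connects the top-most node with a node
        # still aligned to the buffer, the node can safely be popped (checking the buffer
        # suffices, since deeper stack elements can no longer be reached).
        pending = {node_of_token(i) for i in buffer} - {None}
        blocked = any((h == top and d in pending) or (d == top and h in pending)
                      for h, _, d in remaining)
        if not blocked:
            return "Reduce", None
    return "Shift", None                           # condition 4: consume the next token
```

Running this oracle over the training data, with the JAMR alignments providing node_of_token, yields the transition sequences from which the classifier's training examples are extracted.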
Concept Identification This routine is called every time the transition classifier decides to do a Shift; it is denoted by INLINEFORM0 in § SECREF3 . This component could be learned in a supervised manner, but we were not able to improve on a simple heuristic, which works as follows: during training, for each Shift decided by the oracle, we store the pair INLINEFORM1 in a phrase-table. During parsing, the most frequent graph INLINEFORM2 for the given token is then chosen. In other words, INLINEFORM3 approximates INLINEFORM4 by means of the graph most frequently seen among all occurrences of token INLINEFORM5 in the training set. An obvious problem with the phrase-table approach is that it does not generalize to unseen words. In addition, our heuristic relies on the fact that the mappings observed in the data are correct, which is not the case when the JAMR-generated alignments contain a mistake. In order to alleviate this problem, we observe that there are classes of words such as named entities and numeric quantities that can be disambiguated in a deterministic manner. We therefore implement a set of “hooks” that are triggered by the named entity tag of the next token in the sentence. These hooks override the normal Shift mechanism and apply a fixed rule instead. For instance, when we see the token New York (the two tokens are collapsed into a single one at preprocessing) we generate the subgraph of Figure FIGREF30 and push its root onto the stack. Similar subgraphs are generated for all states, cities, countries and people. We also use hooks for ordinal numbers, percentages, money and dates. Edge Labeling Edge labeling determines the labels for the edges being created. Every time the transition classifier decides to take an LArc or RArc operation, the edge labeler needs to decide on a label for it. There are more than 100 possible labels such as :ARG0, :ARG0-of, :ARG1, :location, :time and :polarity. We use a feed-forward neural network similar to the one we trained for the transition classifier, with features shown in Table TABREF32 . The accuracy of this classifier on the development set is 77%. We constrain the labels predicted by the neural network in order to satisfy the requirements of AMR. For instance, the label :top can only be applied when the node from which the edge starts is the special INLINEFORM0 node; other constraints are used for the :polarity label and for edges attaching to numeric quantities. In order to avoid generating labels that violate these requirements, we use a set of rules, shown in Table TABREF34 , that determine which labels are allowed for the newly created edge, so that we only consider those during prediction. ARG roles, too, cannot always be applied: each Propbank frame allows a limited number of arguments. For example, while add-01 and add-02 allow for :ARG1 and :ARG2 (and their inverse :ARG1-of and :ARG2-of), add-03 and add-04 only allow :ARG2 (and :ARG2-of). Fine-grained Evaluation Until now, AMR parsers have been evaluated using the Smatch score. Given the candidate graphs and the gold graphs in the form of AMR annotations, Smatch first tries to find the best alignments between the variable names for each pair of graphs and it then computes precision, recall and F1 of the concepts and relations. 
We note that the Smatch score has two flaws: (1) while AMR parsing involves a large number of subtasks, the Smatch score consists of a single number that does not assess the quality of each subtask separately; (2) the Smatch score weighs different types of errors in a way which is not necessarily useful for solving a specific NLP problem. For example, for a specific problem, concept detection might be deemed more important than edge detection, or guessing the wrong sense for a concept might be considered less severe than guessing the wrong verb altogether. Consider the two parses for the sentence Silvio Berlusconi gave Lucio Stanca his current role of modernizing Italy's bureaucracy in Figure FIGREF36 . At the top, we show the output of a parser (Parse 1) that is not able to deal with named entities. At the bottom, we show the output of a parser (Parse 2) which, except for :name, :op and :wiki, always uses the edge label :ARG0. The Smatch scores for the two parses are 56 and 78, respectively. Both parses make obvious mistakes, but the three named entity errors in Parse 1 are considered more important than the six wrong labels in Parse 2. However, without further analysis, it is not advisable to conclude that Parse 2 is better than Parse 1. In order to better understand the limitations of the different parsers, find their strengths and gain insight into the downstream tasks in which they may be helpful, we compute a set of metrics on the test set. Unlabeled is the Smatch score computed on the predicted graphs after removing all edge labels. In this way, we only assess the node labels and the graph topology, which may be enough to benefit several NLP tasks because it identifies basic predicate-argument structure. For instance, we may be interested in knowing whether two events or entities are related to each other, while not being concerned with the precise type of relation holding between them. No WSD gives a score that does not take into account word sense disambiguation errors. By ignoring the sense specified by the Propbank frame used (e.g., duck-01 vs duck-02) we have a score that does not take into account this additional complexity in the parsing procedure. To compute this score, we simply strip off the suffixes from all Propbank frames and calculate the Smatch score. Following sawai, we also evaluate the parsers using the Smatch score on noun phrases only (NP-only), by extracting from the AMR dataset all noun phrases that do not include further NPs. As we previously discussed, reentrancy is a very important characteristic of AMR graphs and it is not trivial to handle. We therefore implement a test for it (Reentrancy), where we compute the Smatch score only on reentrant edges. Concept identification is another critical component of the parsing process and we therefore compute the F-score on the list of predicted concepts (Concepts) too. Identifying the correct concepts is fundamental: if a concept is not identified, it will not be possible to retrieve any edge involving that concept, with likely significant consequences on accuracy. This metric is therefore quite important to score highly on. Similarly to our score for concepts, we further compute an F-score on the named entities (Named Ent.) and wiki roles for named entities (Wikification), which consider edges labeled with :name and :wiki, respectively. These two metrics are strictly related to the concept score. 
However, since named entity recognition is the focus of dedicated research, we believe it is important to define a metric that specifically assesses this problem. Negation detection is another task which has received some attention. An F-score for this (Negations) is also defined, where we find all negated concepts by looking for the :polarity role. The reason we can compute a simple F-score instead of using Smatch for these metrics is that there are no variable names involved. Finally, we compute the Smatch score on :ARG edges only, in order to have a score for semantic role labeling (SRL), which is another extremely important subtask of AMR, as it is based on the identification of predicate-argument structures. Using this evaluation suite, we can evaluate AMRs on a wide range of metrics that can help us find the strengths and weaknesses of each parser, hence speeding up the research in this area. Table TABREF37 reports the scores for the two parses of Figure FIGREF36 , where we see that Parse 1 gets a high score for semantic role labeling while Parse 2 is optimal for named entity recognition. Moreover, we can make additional observations such as that Parse 2 is optimal with respect to the unlabeled score and that Parse 1 recovers more reentrancies. Experiments We compare our parser against two available parsers: JAMR BIBREF4 and CAMR BIBREF20 , BIBREF5 , using the LDC2015E86 dataset for evaluation. Both parsers are available online and were recently updated for SemEval-2016 Task 8 BIBREF21 , BIBREF22 . However, CAMR's SemEval system, which reports a Smatch score of 67, is not publicly available. CAMR has a quadratic worst-case complexity (although linear in practice). In JAMR, the concept identification step is quadratic and the relation identification step is INLINEFORM0 , with INLINEFORM1 being the set of nodes in the AMR graph. Table TABREF40 shows the results obtained by the parsers on all metrics previously introduced. On Smatch, our system does not give state-of-the-art results. However, we do obtain the best results for Unlabeled and Concepts and outperform the other parsers for Named Ent. and Negations. Our score for Reentrancy is also close to the best scoring system, which is particularly relevant given the importance of reentrancies in AMR. The use of the Reduce transition, which targets reentrancies caused by control verbs, is critical in order to achieve this result. The relatively high results we obtain for the unlabeled case suggest that our parser has difficulty in labeling the arcs. Our score for concept identification, which is on par with the best result from the other parsers, demonstrates that there is a relatively low level of token ambiguity. State-of-the-art results for this problem can be obtained by choosing the most frequent subgraph for a given token based on a phrase-table constructed from JAMR alignments on the training data. The scores for named entities and wikification are heavily dependent on the hooks mentioned in § SECREF29 , which in turn rely on the named entity recognizer to make the correct predictions. In order to alleviate the problem of wrong automatic alignments with respect to polarity and better detect negation, we performed a post-processing step on the aligner output where we align the AMR constant - (minus) with words bearing negative polarity such as not, illegitimate and asymmetry. Our experiments demonstrate that there is no parser for AMR yet that conclusively does better than all other parsers on all metrics. 
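To make part of this evaluation suite concrete, the sketch below derives two of the simpler scores from AMRs reduced to concept/role triples. This is an illustration under our own simplified representation, and the example triples are invented; the scores reported in the paper are computed with Smatch (which also aligns variable names) except where the text notes that a plain F-score suffices.

```python
import re

def strip_senses(triples):
    """Drop Propbank sense suffixes (e.g. duck-01 -> duck), as done for the 'No WSD' score."""
    def strip(concept):
        return re.sub(r"-\d+$", "", concept)
    return {(strip(s), role, strip(t)) for s, role, t in triples}

def negated_concepts(triples):
    """Concepts carrying :polarity -, the basis of the Negations F-score."""
    return {s for s, role, t in triples if role == ":polarity" and t == "-"}

def f_score(predicted, gold):
    """Plain F1 between two sets, usable whenever no variable alignment is needed."""
    true_positives = len(predicted & gold)
    if true_positives == 0:
        return 0.0
    precision, recall = true_positives / len(predicted), true_positives / len(gold)
    return 2 * precision * recall / (precision + recall)

# Invented example: Negations F-score of a predicted graph against a gold graph.
gold = {("possible", ":polarity", "-"), ("go-01", ":ARG0", "boy")}
pred = {("possible", ":polarity", "-"), ("go-01", ":ARG1", "boy")}
print(f_score(negated_concepts(pred), negated_concepts(gold)))  # 1.0
print(strip_senses(pred))  # the 'No WSD' view: go-01 becomes go
```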
Advantages of our parser are its worst-case linear complexity and the fact that it is possible to perform incremental AMR parsing, which is helpful both for real-time applications and for investigating how the meaning of English sentences can be built incrementally, left-to-right. Related Work The first data-driven AMR parser is due to carbonell2014discriminative. The problem is addressed in two separate stages: concept identification and relation identification. They use a sequence labeling algorithm to identify concepts and frame the relation prediction task as a constrained combinatorial optimization problem. werling2015robust notice that the difficult part is concept identification and propose a better way to handle that task: an action classifier to generate concepts by applying predetermined actions. Other proposals involve a synchronous hyperedge replacement grammar solution BIBREF6 , a syntax-based machine translation approach BIBREF7 where a grammar of string-to-tree rules is created after reducing AMR graphs to trees by removing all reentrancies, and a CCG system that first parses sentences into lambda-calculus representations BIBREF11 . A systematic translation from AMR to first order logic formulas, with a special treatment for quantification, reentrancy and negation, is discussed in bos2016expressive. In microsoft, a pre-existing logical form parser is used and the output is then converted into AMR graphs. Yet another solution is proposed by searnamr, who discuss a parser that uses SEARN BIBREF23 , a “learning to search” algorithm. Transition-based algorithms for AMR parsing are compelling because traditional graph-based techniques are computationally expensive. wang and wang2boosting propose a framework that parses a sentence into its AMR structure through a two-stage process: a dependency tree is generated from the input sentence through a transition-based parser and then another transition-based parser is used to generate the AMR. The main benefit of this approach is that the dependency parser can be trained on a training set much larger than the training set for the tree-to-graph algorithm. Others further built on this parser: goodman2016noise use imitation learning to alleviate the problem of error propagation in the greedy parser, while barzdins2016riga create a wrapper around it to fix frequent mistakes and investigate ensembles with a character-level neural parser. More recently, emnlp2016 presented a non-greedy transition system for AMR parsing, based on ArcStandard BIBREF15 . AMR parsing as a whole is a complex task because it involves many subtasks including named entity recognition, co-reference resolution and semantic role labeling. sawai do not attempt to parse AMR graphs for entire sentences but instead handle simple noun phrases (NPs). They extract NPs from the AMR dataset only when they do not include further NPs, pronouns or named entities. Due to these restrictions, the AMRs are mostly trees and easier to handle than the original AMR graphs. They approach this task using a transition-based system inspired by ArcStandard. AMR is not the only way to represent meaning in natural language sentences. Alternative semantic representations have been developed and studied, such as Boxer BIBREF24 , CCG BIBREF1 , BIBREF2 and UCCA BIBREF3 . Conclusion We presented a transition system that builds AMR graphs in linear time by processing the sentences left-to-right, trained with feed-forward neural networks. 
The parser demonstrates that it is possible to perform AMR parsing using techniques inspired by dependency parsing. We also noted that it is less informative to evaluate the entire parsing process with Smatch than to use a collection of metrics aimed at the various subproblems involved in parsing. We further showed that our left-to-right transition system is competitive with publicly available state-of-the-art parsers. Although we do not outperform the best baseline in terms of Smatch score, we obtain results that are on par with or better than the other parsers for several of the proposed metrics. We hope that moving away from a single-metric evaluation will further speed up progress in AMR parsing. Acknowledgments The authors would like to thank the three anonymous reviewers and Sameer Bansal, Jeff Flanigan, Sorcha Gilroy, Adam Lopez, Nikos Papasarantopoulos, Nathan Schneider, Mark Steedman, Sam Thomson, Clara Vania and Chuan Wang for their help and comments. This research was supported by a grant from Bloomberg and by the H2020 project SUMMA, under grant agreement 688139.
entity recognition, semantic role labeling and co-reference resolution
e6c163f80a11bd057bbd0b6e1451ac82edddc78d
e6c163f80a11bd057bbd0b6e1451ac82edddc78d_0
Q: Do they test their approach on large-resource tasks? Text: Introduction In recent years, Deep Neural Networks (DNNs) have been successfully applied to Automatic Speech Recognition (ASR) for many well-resourced languages including Mandarin and English BIBREF0 , BIBREF1 . However, only a small portion of languages have clean speech labeled corpus. As a result, there is an increasing interest in building speech recognition systems for low-resource languages. To address this issue, researchers have successfully exploited multilingual speech recognition models by taking advantage of labeled corpora in other languages BIBREF2 , BIBREF3 . Multilingual speech recognition enables acoustic models to share parameters across multiple languages, therefore low-resource acoustic models can benefit from rich resources. While low-resource multilingual works have proposed various acoustic models, those works tend to combine several low-resource corpora together without paying attention to the variety of corpora themselves. One common training approach here is to first pretrain a multilingual model by combining all training corpora, then the pretrained model is fine-tuned on the target corpus BIBREF4 . During the training process, each corpus in the training set is treated equally and sampled uniformly. We argue, however, this approach does not take account of the characteristics of each corpus, therefore it fails to take advantage of the relations between them. For example, a conversation corpus might be more beneficial to another conversation corpus rather than an audio book corpus. In this work, we propose an effective sampling strategy (Corpus Relatedness Sampling) to take advantage of relations among corpora. Firstly, we introduce the corpus-level embedding which can be used to compute the similarity between corpora. The embedding can be estimated by being jointly trained with the acoustic model. Next, we compute the similarity between each corpus and the target corpus, the similarity is then used to optimize the model with respect to the target corpus. During the training process, we start by uniformly sampling from each corpus, then the sampling distribution is gradually updated so that more related corpora would be sampled more frequently. Eventually, only the target corpus would be sampled from the training set as the target corpus is the most related corpus to itself. While our approach differs from the pretrained model and the fine-tuned model, we can prove that those models are special cases of our sampling strategy. To evaluate our sampling strategy, we compare it with the pretrained model and fine-tuned model on 16 different corpora. The results show that our approach outperforms those baselines on all corpora: it achieves 1.6% lower phone error rate on average. Additionally, we demonstrate that our corpus-level embeddings are able to capture the characteristics of each corpus, especially the language and domain information. The main contributions of this paper are as follows: Related Work Multilingual speech recognition has explored various models to share parameters across languages in different ways. For example, parameters can be shared by using posterior features from other languages BIBREF5 , applying the same GMM components across different HMM states BIBREF6 , training shared hidden layers in DNNs BIBREF2 , BIBREF3 or LSTM BIBREF4 , using language independent bottleneck features BIBREF7 , BIBREF8 . 
Some models only share their hidden layers, but use separate output layers to predict their phones BIBREF2 , BIBREF3 . Other models have only one shared output layer to predict the universal phone set shared by all languages BIBREF9 , BIBREF10 , BIBREF11 . While those works proposed the multilingual models in different ways, few of them have explicitly exploited the relatedness across various languages and corpora. In contrast, our work computes the relatedness between different corpora using embedding representations and exploits them efficiently. Embedding representations have been heavily used in multiple fields. In particular, embeddings of multiple granularities have been explored in many NLP tasks, for example character embeddings BIBREF12 , subword embeddings BIBREF13 , sentence embeddings BIBREF14 and document embeddings BIBREF15 . However, there are few works exploring corpus-level embeddings. The main reason is that the number of corpora involved in most experiments is usually limited, so it is not useful to compute corpus embeddings. The only exception is multitask learning, where many tasks and corpora are combined together. For instance, the language-level (corpus-level) embedding can be generated along with the model in machine translation BIBREF16 and speech recognition BIBREF17 . However, those embeddings are only used as an auxiliary feature to the model; few works continue to exploit the embeddings themselves. Another important aspect of our work is that we focus on the sampling strategy for speech recognition. While most previous speech works mainly emphasized the acoustic modeling side, there are also some attempts focusing on sampling strategies. For instance, curriculum learning trains the acoustic model by starting from easy training samples and increasingly adapting it to more difficult samples BIBREF0 , BIBREF18 . Active learning is an approach that tries to minimize the human cost of collecting transcribed speech data BIBREF19 . Furthermore, sampling strategies can also be helpful to speed up the training process BIBREF20 . However, the goal of most strategies is to improve the acoustic model by modifying the sampling distribution within a single speech corpus for a single language. In contrast, our approach aims to optimize the multilingual acoustic model by modifying distributions across all the training corpora. Approach In this section, we describe our approach to compute the corpus embedding and our Corpus Relatedness Sampling strategy. Corpus Embedding Suppose that INLINEFORM0 is the target low-resource corpus; we are interested in optimizing the acoustic model with a much larger set of training corpora INLINEFORM1 , where INLINEFORM2 is the number of corpora and INLINEFORM3 . Each corpus INLINEFORM4 is a collection of INLINEFORM5 pairs, where INLINEFORM6 is the input features and INLINEFORM7 is its target. Our purpose here is to compute the embedding INLINEFORM0 for each corpus INLINEFORM1 , where INLINEFORM2 is expected to encode information about its corpus INLINEFORM3 . Those embeddings can be jointly trained with the standard multilingual model BIBREF4 . First, the embedding matrix INLINEFORM4 for all corpora is initialized; the INLINEFORM5 -th row of INLINEFORM6 corresponds to the embedding INLINEFORM7 of the corpus INLINEFORM8 . Next, during the training phase, INLINEFORM9 can be used to bias the input feature INLINEFORM10 as follows. 
DISPLAYFORM0 where INLINEFORM0 is an utterance sampled randomly from INLINEFORM1 , INLINEFORM2 is its hidden features, INLINEFORM3 is the parameter of the acoustic model and Encoder is the stacked bidirectional LSTM as shown in Figure. FIGREF5 . Next, we apply the language specific softmax to compute logits INLINEFORM4 and optimize them with the CTC objective BIBREF29 . The embedding matrix INLINEFORM5 can be optimized together with the model during the training process. Corpus Relatedness Sampling With the embedding INLINEFORM0 of each corpus INLINEFORM1 , we can compute the similarity score between any two corpora using the cosine similarity. DISPLAYFORM0 As the similarity reflects the relatedness between corpora in the training set, we would like to sample the training set based on this similarity: those corpora which have a higher similarity with the target corpus INLINEFORM0 should be sampled more frequently. Therefore, we assume those similarity scores to be the sampling logits and they should be normalized with softmax. DISPLAYFORM0 where INLINEFORM0 is the probability to sample INLINEFORM1 from INLINEFORM2 , and INLINEFORM3 is the temperature to normalize the distribution during the training phase. We argue that different temperatures could create different training conditions. The model with a lower temperature tends to sample each corpus equally like uniform sampling. In contrast, a higher temperature means that the sampling distribution should be biased toward the target corpus like the fine-tuning. Next, we prove that both the pretrained model and the fine-tuned model can be realized with specific temperatures. In the case of the pretrained model, each corpus should be sampled equally. This can be implemented by setting INLINEFORM0 to be 0. DISPLAYFORM0 On the other hand, the fine-tuned model should only consider samples from the target corpus INLINEFORM0 , while ignoring all other corpora. We argue that this condition can be approximated by setting INLINEFORM1 to a very large number. As INLINEFORM2 and INLINEFORM3 if INLINEFORM4 , we can prove the statement as follows: DISPLAYFORM0 While both the pretrained model and the fine-tuned model are special cases of our approach, we note that our approach is more flexible to sample from related corpora by interpolating between those two extreme temperatures. In practice, we would like to start with a low temperature to sample broadly in the early training phase. Then we gradually increase the temperature so that it can focus more on the related corpora. Eventually, the temperature would be high enough so that the model is automatically fine-tuned on the target corpus. Specifically, in our experiment, we start training with a very low temperature INLINEFORM0 , and increase its value every epoch INLINEFORM1 as follows. DISPLAYFORM0 where INLINEFORM0 is the temperature of epoch INLINEFORM1 and INLINEFORM2 is a hyperparameter to control the growth rate of the temperature. Experiments To demonstrate that our sampling approach could improve the multilingual model, we conduct experiments on 16 corpora to compare our approach with the pretrained model and fine-tuned model. Datasets We first describe our corpus collection. Table. TABREF3 lists all corpora we used in the experiments. There are 16 corpora from 10 languages. To increase the variety of corpus, we selected 4 English corpora and 4 Mandarin corpora in addition to the low resource language corpora. 
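Before moving on to the experimental setup, here is a minimal sketch of the Corpus Relatedness Sampling distribution described above: the cosine similarities to the target corpus act as sampling logits and are normalized with a temperature-scaled softmax, so that a near-zero temperature reproduces uniform sampling (pretraining) and a very large one concentrates almost all mass on the target corpus (fine-tuning). The embeddings, the target index and the exponential growth schedule below are invented for illustration; the actual temperature update is the one given by the paper's equation.

```python
import numpy as np

def sampling_distribution(embeddings, target_index, temperature):
    """Probability of drawing each corpus, given one embedding per row and the target corpus."""
    e_target = embeddings[target_index]
    # Cosine similarity between every corpus embedding and the target's embedding.
    sims = embeddings @ e_target / (np.linalg.norm(embeddings, axis=1) * np.linalg.norm(e_target))
    logits = temperature * sims                # the similarities act as sampling logits
    probs = np.exp(logits - logits.max())      # numerically stable softmax
    return probs / probs.sum()

def temperature_at_epoch(epoch, initial=0.01, growth=1.0):
    """Illustrative stand-in for the paper's schedule: start low and grow every epoch."""
    return initial * (1.0 + growth) ** epoch

rng = np.random.default_rng(0)
E = rng.normal(size=(16, 32))                  # 16 corpora, 32-dimensional embeddings
print(sampling_distribution(E, target_index=3, temperature=0.0))     # ~uniform (pretraining)
print(sampling_distribution(E, target_index=3, temperature=1000.0))  # ~one-hot on corpus 3 (fine-tuning)
```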
As the target of this experiment is low-resource speech recognition, we randomly select only 100,000 utterances from each corpus, even if more are available. All corpora are available from LDC, voxforge, openSLR or other public websites. Each corpus is manually assigned one domain based on its speech style. Specifically, the domain candidates are telephone, read and broadcast. Experiment Settings We use EESEN BIBREF30 for the acoustic modeling and epitran BIBREF31 as the g2p tool in this work. Every utterance in the corpora is first re-sampled to 8000Hz, and then we extract 40-dimensional MFCC features from each audio file. We use a recent multilingual CTC model as our acoustic architecture BIBREF4 : the architecture is a 6-layer bidirectional LSTM model with 320 cells in each layer. We use this architecture for both the baseline models and the proposed model. Our baseline model is the fine-tuned model: we first pretrain a model by uniformly sampling from all corpora. After the loss converges, we fine-tune the model on each of our target corpora. To compare it with our sampling approach, we first train an acoustic model to compute the embeddings of all corpora, then the embeddings are used to estimate the similarity as described in the previous section. The initial temperature INLINEFORM0 is set to 0.01, and the growth rate is INLINEFORM1 . We evaluated all models using the phone error rate (PER) instead of the word error rate (WER). The reason is that we mainly focus on the acoustic model in this experiment. Additionally, some corpora (e.g., Dutch voxforge) in this experiment have very little text, so it is difficult to create a reasonable language model without augmenting the texts using other corpora, which is beyond the scope of this work. Results Table TABREF16 shows the results of our evaluation. We compare our approach with the baseline using all corpora. The left-most column of Table TABREF16 shows the corpus we used for each experiment; the remaining columns correspond to the phone error rates of the pretrained model, the fine-tuned model and our proposed model. First, we can easily confirm that the fine-tuned model outperforms the pretrained model on all corpora. For instance, the fine-tuned model outperforms the pretrained model by 4.7% on the Amharic corpus. The result is reasonable, as the pretrained model is optimized with the entire training set, while the fine-tuned model is further adapted to the target corpus. Next, the table suggests that our Corpus Relatedness Sampling approach achieves better results than the fine-tuned model on all test corpora. For instance, the phone error rate is improved from 40.7% to 36.9% on Amharic and from 41.9% to 40.0% on Bengali. On average, our approach outperforms the fine-tuned model by 1.6% phone error rate. The results demonstrate that our sampling approach is more effective at optimizing the acoustic model on the target corpus. We also train baseline models by appending corpus embeddings to the input features, but the proposed model outperforms those baselines similarly. One interesting trend we observed in the table is that the improvements differ across the target corpora. For instance, the improvement on the Dutch corpus is 3.4%, whereas the improvement on the Zulu dataset is a relatively smaller 0.6%. We believe the difference in improvements can be explained by the size of each corpus. The size of the Dutch corpus is very small, as shown in Table 
TABREF3 ; therefore, the fine-tuned model is prone to overfit to the dataset very quickly. In contrast, it is less likely for a larger corpus to overfit. Compared with the fine-tuned model, our approach optimizes the model by gradually changing the temperature without quick overfitting. This mechanism could be interpreted as a built-in regularization. As a result, our model can achieve much better performance on small corpora by preventing the overfitting effect. To understand how our corpus embeddings contribute to our approach, we rank those embeddings and show the top-2 similar corpora for each corpus in Table TABREF17 . We note that the target corpus itself is removed from the rankings because it is the most related corpus to itself. The results in the top half show very clearly that our embeddings can capture language-level information: for most English and Mandarin corpora, the most related corpus is another English or Mandarin corpus. Additionally, the bottom half of the table indicates that our embeddings are able to capture domain-level information as well. For instance, the top-2 related corpora for Amharic are Bengali and Swahili. According to Table TABREF3 , those three corpora belong to the telephone domain. In addition, Dutch is a read corpus, and its top-2 related corpora are also from the same domain. This also explains why the most related corpus of Mandarin (hk) is Bengali: both of them are from the same telephone domain. To further investigate the domain information contained in the corpus embeddings, we train the corpus embeddings with an even larger collection of corpora (36 corpora) and plot all of them in Figure FIGREF18 . To create the plot, the dimension of each corpus embedding is reduced to 2 with t-SNE BIBREF32 . The figure demonstrates clearly that our corpus embeddings are capable of capturing the domain information: all corpora with the same domain are clustered together. This result also means that our approach improves the model by sampling more frequently from the corpora of the same speech domain. Conclusion In this work, we propose an approach to compute corpus-level embeddings. We also introduce the Corpus Relatedness Sampling approach to train multilingual speech recognition models based on those corpus embeddings. Our experiments show that our approach outperforms the fine-tuned multilingual models on all 16 test corpora by 1.6% phone error rate on average. Additionally, we demonstrate that our corpus embeddings can capture both the language and domain information of each corpus. Acknowledgements This project was sponsored by the Defense Advanced Research Projects Agency (DARPA) Information Innovation Office (I2O), program: Low Resource Languages for Emergent Incidents (LORELEI), issued by DARPA/I2O under Contract No. HR0011-15-C-0114.
Yes
6adfa9eee76b96953a76c03356bf41d8a9378851
6adfa9eee76b96953a76c03356bf41d8a9378851_0
Q: By how much do they, on average, outperform the baseline multilingual model on 16 low-resource tasks? Text: Introduction In recent years, Deep Neural Networks (DNNs) have been successfully applied to Automatic Speech Recognition (ASR) for many well-resourced languages including Mandarin and English BIBREF0 , BIBREF1 . However, only a small portion of languages have clean speech labeled corpus. As a result, there is an increasing interest in building speech recognition systems for low-resource languages. To address this issue, researchers have successfully exploited multilingual speech recognition models by taking advantage of labeled corpora in other languages BIBREF2 , BIBREF3 . Multilingual speech recognition enables acoustic models to share parameters across multiple languages, therefore low-resource acoustic models can benefit from rich resources. While low-resource multilingual works have proposed various acoustic models, those works tend to combine several low-resource corpora together without paying attention to the variety of corpora themselves. One common training approach here is to first pretrain a multilingual model by combining all training corpora, then the pretrained model is fine-tuned on the target corpus BIBREF4 . During the training process, each corpus in the training set is treated equally and sampled uniformly. We argue, however, this approach does not take account of the characteristics of each corpus, therefore it fails to take advantage of the relations between them. For example, a conversation corpus might be more beneficial to another conversation corpus rather than an audio book corpus. In this work, we propose an effective sampling strategy (Corpus Relatedness Sampling) to take advantage of relations among corpora. Firstly, we introduce the corpus-level embedding which can be used to compute the similarity between corpora. The embedding can be estimated by being jointly trained with the acoustic model. Next, we compute the similarity between each corpus and the target corpus, the similarity is then used to optimize the model with respect to the target corpus. During the training process, we start by uniformly sampling from each corpus, then the sampling distribution is gradually updated so that more related corpora would be sampled more frequently. Eventually, only the target corpus would be sampled from the training set as the target corpus is the most related corpus to itself. While our approach differs from the pretrained model and the fine-tuned model, we can prove that those models are special cases of our sampling strategy. To evaluate our sampling strategy, we compare it with the pretrained model and fine-tuned model on 16 different corpora. The results show that our approach outperforms those baselines on all corpora: it achieves 1.6% lower phone error rate on average. Additionally, we demonstrate that our corpus-level embeddings are able to capture the characteristics of each corpus, especially the language and domain information. The main contributions of this paper are as follows: Related Work Multilingual speech recognition has explored various models to share parameters across languages in different ways. For example, parameters can be shared by using posterior features from other languages BIBREF5 , applying the same GMM components across different HMM states BIBREF6 , training shared hidden layers in DNNs BIBREF2 , BIBREF3 or LSTM BIBREF4 , using language independent bottleneck features BIBREF7 , BIBREF8 . 
Some models only share their hidden layers, but use separate output layers to predict their phones BIBREF2 , BIBREF3 . Other models have only one shared output layer to predict the universal phone set shared by all languages BIBREF9 , BIBREF10 , BIBREF11 . While those works proposed the multilingual models in different ways, few of them have explicitly exploited the relatedness across various languages and corpora. In contrast, our work computes the relatedness between different corpora using embedding representations and exploits them efficiently. Embedding representations have been heavily used in multiple fields. In particular, embeddings of multiple granularities have been explored in many NLP tasks, for example character embeddings BIBREF12 , subword embeddings BIBREF13 , sentence embeddings BIBREF14 and document embeddings BIBREF15 . However, there are few works exploring corpus-level embeddings. The main reason is that the number of corpora involved in most experiments is usually limited, so it is not useful to compute corpus embeddings. The only exception is multitask learning, where many tasks and corpora are combined together. For instance, the language-level (corpus-level) embedding can be generated along with the model in machine translation BIBREF16 and speech recognition BIBREF17 . However, those embeddings are only used as an auxiliary feature to the model; few works continue to exploit the embeddings themselves. Another important aspect of our work is that we focus on the sampling strategy for speech recognition. While most previous speech works mainly emphasized the acoustic modeling side, there are also some attempts focusing on sampling strategies. For instance, curriculum learning trains the acoustic model by starting from easy training samples and increasingly adapting it to more difficult samples BIBREF0 , BIBREF18 . Active learning is an approach that tries to minimize the human cost of collecting transcribed speech data BIBREF19 . Furthermore, sampling strategies can also be helpful to speed up the training process BIBREF20 . However, the goal of most strategies is to improve the acoustic model by modifying the sampling distribution within a single speech corpus for a single language. In contrast, our approach aims to optimize the multilingual acoustic model by modifying distributions across all the training corpora. Approach In this section, we describe our approach to compute the corpus embedding and our Corpus Relatedness Sampling strategy. Corpus Embedding Suppose that INLINEFORM0 is the target low-resource corpus; we are interested in optimizing the acoustic model with a much larger set of training corpora INLINEFORM1 , where INLINEFORM2 is the number of corpora and INLINEFORM3 . Each corpus INLINEFORM4 is a collection of INLINEFORM5 pairs, where INLINEFORM6 is the input features and INLINEFORM7 is its target. Our purpose here is to compute the embedding INLINEFORM0 for each corpus INLINEFORM1 , where INLINEFORM2 is expected to encode information about its corpus INLINEFORM3 . Those embeddings can be jointly trained with the standard multilingual model BIBREF4 . First, the embedding matrix INLINEFORM4 for all corpora is initialized; the INLINEFORM5 -th row of INLINEFORM6 corresponds to the embedding INLINEFORM7 of the corpus INLINEFORM8 . Next, during the training phase, INLINEFORM9 can be used to bias the input feature INLINEFORM10 as follows. 
DISPLAYFORM0 where INLINEFORM0 is an utterance sampled randomly from INLINEFORM1 , INLINEFORM2 is its hidden features, INLINEFORM3 is the parameter of the acoustic model and Encoder is the stacked bidirectional LSTM as shown in Figure. FIGREF5 . Next, we apply the language specific softmax to compute logits INLINEFORM4 and optimize them with the CTC objective BIBREF29 . The embedding matrix INLINEFORM5 can be optimized together with the model during the training process. Corpus Relatedness Sampling With the embedding INLINEFORM0 of each corpus INLINEFORM1 , we can compute the similarity score between any two corpora using the cosine similarity. DISPLAYFORM0 As the similarity reflects the relatedness between corpora in the training set, we would like to sample the training set based on this similarity: those corpora which have a higher similarity with the target corpus INLINEFORM0 should be sampled more frequently. Therefore, we assume those similarity scores to be the sampling logits and they should be normalized with softmax. DISPLAYFORM0 where INLINEFORM0 is the probability to sample INLINEFORM1 from INLINEFORM2 , and INLINEFORM3 is the temperature to normalize the distribution during the training phase. We argue that different temperatures could create different training conditions. The model with a lower temperature tends to sample each corpus equally like uniform sampling. In contrast, a higher temperature means that the sampling distribution should be biased toward the target corpus like the fine-tuning. Next, we prove that both the pretrained model and the fine-tuned model can be realized with specific temperatures. In the case of the pretrained model, each corpus should be sampled equally. This can be implemented by setting INLINEFORM0 to be 0. DISPLAYFORM0 On the other hand, the fine-tuned model should only consider samples from the target corpus INLINEFORM0 , while ignoring all other corpora. We argue that this condition can be approximated by setting INLINEFORM1 to a very large number. As INLINEFORM2 and INLINEFORM3 if INLINEFORM4 , we can prove the statement as follows: DISPLAYFORM0 While both the pretrained model and the fine-tuned model are special cases of our approach, we note that our approach is more flexible to sample from related corpora by interpolating between those two extreme temperatures. In practice, we would like to start with a low temperature to sample broadly in the early training phase. Then we gradually increase the temperature so that it can focus more on the related corpora. Eventually, the temperature would be high enough so that the model is automatically fine-tuned on the target corpus. Specifically, in our experiment, we start training with a very low temperature INLINEFORM0 , and increase its value every epoch INLINEFORM1 as follows. DISPLAYFORM0 where INLINEFORM0 is the temperature of epoch INLINEFORM1 and INLINEFORM2 is a hyperparameter to control the growth rate of the temperature. Experiments To demonstrate that our sampling approach could improve the multilingual model, we conduct experiments on 16 corpora to compare our approach with the pretrained model and fine-tuned model. Datasets We first describe our corpus collection. Table. TABREF3 lists all corpora we used in the experiments. There are 16 corpora from 10 languages. To increase the variety of corpus, we selected 4 English corpora and 4 Mandarin corpora in addition to the low resource language corpora. 
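A complementary sketch for the corpus-embedding side is given below: a bidirectional-LSTM acoustic model whose input frames are biased by a trainable corpus embedding, trained with CTC so that the embedding matrix is learned jointly with the rest of the model. This is an illustrative PyTorch re-implementation, not the EESEN-based system used in the paper; in particular, the additive bias and the per-corpus output sizes are our own assumptions, and the exact biasing operation is the one given by the paper's equation.

```python
import torch
import torch.nn as nn

class CorpusAwareEncoder(nn.Module):
    """BiLSTM acoustic model whose input frames are biased by a trainable corpus embedding."""

    def __init__(self, phones_per_corpus, feat_dim=40, emb_dim=40, hidden=320, layers=6):
        super().__init__()
        num_corpora = len(phones_per_corpus)
        self.corpus_emb = nn.Embedding(num_corpora, emb_dim)   # one row per corpus
        self.proj = nn.Linear(emb_dim, feat_dim)               # map the embedding into feature space
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=layers,
                               bidirectional=True, batch_first=True)
        # Language-specific softmax layers (one phone inventory per corpus, plus the CTC blank).
        self.outputs = nn.ModuleList(nn.Linear(2 * hidden, n + 1) for n in phones_per_corpus)

    def forward(self, feats, corpus_id):
        """feats: (batch, time, feat_dim); corpus_id: index of the corpus this batch was drawn from."""
        corpus_index = torch.tensor([corpus_id], device=feats.device)
        bias = self.proj(self.corpus_emb(corpus_index))                # (1, feat_dim)
        hidden_states, _ = self.encoder(feats + bias)                  # add the bias to every frame
        return self.outputs[corpus_id](hidden_states).log_softmax(-1)  # log-probabilities for CTC

# Training pairs these outputs with torch.nn.CTCLoss; gradients flow into corpus_emb through
# the bias term, which is how the embeddings used for the similarity computation are learned.
```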
As the target of this experiment is low-resource speech recognition, we randomly select only 100,000 utterances from each corpus, even if more are available. All corpora are available from LDC, voxforge, openSLR or other public websites. Each corpus is manually assigned one domain based on its speech style. Specifically, the domain candidates are telephone, read and broadcast. Experiment Settings We use EESEN BIBREF30 for the acoustic modeling and epitran BIBREF31 as the g2p tool in this work. Every utterance in the corpora is first re-sampled to 8000Hz, and then we extract 40-dimensional MFCC features from each audio file. We use a recent multilingual CTC model as our acoustic architecture BIBREF4 : the architecture is a 6-layer bidirectional LSTM model with 320 cells in each layer. We use this architecture for both the baseline models and the proposed model. Our baseline model is the fine-tuned model: we first pretrain a model by uniformly sampling from all corpora. After the loss converges, we fine-tune the model on each of our target corpora. To compare it with our sampling approach, we first train an acoustic model to compute the embeddings of all corpora, then the embeddings are used to estimate the similarity as described in the previous section. The initial temperature INLINEFORM0 is set to 0.01, and the growth rate is INLINEFORM1 . We evaluated all models using the phone error rate (PER) instead of the word error rate (WER). The reason is that we mainly focus on the acoustic model in this experiment. Additionally, some corpora (e.g., Dutch voxforge) in this experiment have very little text, so it is difficult to create a reasonable language model without augmenting the texts using other corpora, which is beyond the scope of this work. Results Table TABREF16 shows the results of our evaluation. We compare our approach with the baseline using all corpora. The left-most column of Table TABREF16 shows the corpus we used for each experiment; the remaining columns correspond to the phone error rates of the pretrained model, the fine-tuned model and our proposed model. First, we can easily confirm that the fine-tuned model outperforms the pretrained model on all corpora. For instance, the fine-tuned model outperforms the pretrained model by 4.7% on the Amharic corpus. The result is reasonable, as the pretrained model is optimized with the entire training set, while the fine-tuned model is further adapted to the target corpus. Next, the table suggests that our Corpus Relatedness Sampling approach achieves better results than the fine-tuned model on all test corpora. For instance, the phone error rate is improved from 40.7% to 36.9% on Amharic and from 41.9% to 40.0% on Bengali. On average, our approach outperforms the fine-tuned model by 1.6% phone error rate. The results demonstrate that our sampling approach is more effective at optimizing the acoustic model on the target corpus. We also train baseline models by appending corpus embeddings to the input features, but the proposed model outperforms those baselines similarly. One interesting trend we observed in the table is that the improvements differ across the target corpora. For instance, the improvement on the Dutch corpus is 3.4%, whereas the improvement on the Zulu dataset is a relatively smaller 0.6%. We believe the difference in improvements can be explained by the size of each corpus. The size of the Dutch corpus is very small, as shown in Table 
TABREF3 ; therefore, the fine-tuned model is prone to overfit to the dataset very quickly. In contrast, it is less likely for a larger corpus to overfit. Compared with the fine-tuned model, our approach optimizes the model by gradually changing the temperature without quick overfitting. This mechanism could be interpreted as a built-in regularization. As a result, our model can achieve much better performance on small corpora by preventing the overfitting effect. To understand how our corpus embeddings contribute to our approach, we rank those embeddings and show the top-2 similar corpora for each corpus in Table TABREF17 . We note that the target corpus itself is removed from the rankings because it is the most related corpus to itself. The results in the top half show very clearly that our embeddings can capture language-level information: for most English and Mandarin corpora, the most related corpus is another English or Mandarin corpus. Additionally, the bottom half of the table indicates that our embeddings are able to capture domain-level information as well. For instance, the top-2 related corpora for Amharic are Bengali and Swahili. According to Table TABREF3 , those three corpora belong to the telephone domain. In addition, Dutch is a read corpus, and its top-2 related corpora are also from the same domain. This also explains why the most related corpus of Mandarin (hk) is Bengali: both of them are from the same telephone domain. To further investigate the domain information contained in the corpus embeddings, we train the corpus embeddings with an even larger collection of corpora (36 corpora) and plot all of them in Figure FIGREF18 . To create the plot, the dimension of each corpus embedding is reduced to 2 with t-SNE BIBREF32 . The figure demonstrates clearly that our corpus embeddings are capable of capturing the domain information: all corpora with the same domain are clustered together. This result also means that our approach improves the model by sampling more frequently from the corpora of the same speech domain. Conclusion In this work, we propose an approach to compute corpus-level embeddings. We also introduce the Corpus Relatedness Sampling approach to train multilingual speech recognition models based on those corpus embeddings. Our experiments show that our approach outperforms the fine-tuned multilingual models on all 16 test corpora by 1.6% phone error rate on average. Additionally, we demonstrate that our corpus embeddings can capture both the language and domain information of each corpus. Acknowledgements This project was sponsored by the Defense Advanced Research Projects Agency (DARPA) Information Innovation Office (I2O), program: Low Resource Languages for Emergent Incidents (LORELEI), issued by DARPA/I2O under Contract No. HR0011-15-C-0114.
1.6% lower phone error rate on average
450a359d117bcfa2de4ffd987f787945f25b3b25
450a359d117bcfa2de4ffd987f787945f25b3b25_0
Q: How do they compute corpus-level embeddings? Text: Introduction In recent years, Deep Neural Networks (DNNs) have been successfully applied to Automatic Speech Recognition (ASR) for many well-resourced languages including Mandarin and English BIBREF0 , BIBREF1 . However, only a small portion of languages have clean speech labeled corpus. As a result, there is an increasing interest in building speech recognition systems for low-resource languages. To address this issue, researchers have successfully exploited multilingual speech recognition models by taking advantage of labeled corpora in other languages BIBREF2 , BIBREF3 . Multilingual speech recognition enables acoustic models to share parameters across multiple languages, therefore low-resource acoustic models can benefit from rich resources. While low-resource multilingual works have proposed various acoustic models, those works tend to combine several low-resource corpora together without paying attention to the variety of corpora themselves. One common training approach here is to first pretrain a multilingual model by combining all training corpora, then the pretrained model is fine-tuned on the target corpus BIBREF4 . During the training process, each corpus in the training set is treated equally and sampled uniformly. We argue, however, this approach does not take account of the characteristics of each corpus, therefore it fails to take advantage of the relations between them. For example, a conversation corpus might be more beneficial to another conversation corpus rather than an audio book corpus. In this work, we propose an effective sampling strategy (Corpus Relatedness Sampling) to take advantage of relations among corpora. Firstly, we introduce the corpus-level embedding which can be used to compute the similarity between corpora. The embedding can be estimated by being jointly trained with the acoustic model. Next, we compute the similarity between each corpus and the target corpus, the similarity is then used to optimize the model with respect to the target corpus. During the training process, we start by uniformly sampling from each corpus, then the sampling distribution is gradually updated so that more related corpora would be sampled more frequently. Eventually, only the target corpus would be sampled from the training set as the target corpus is the most related corpus to itself. While our approach differs from the pretrained model and the fine-tuned model, we can prove that those models are special cases of our sampling strategy. To evaluate our sampling strategy, we compare it with the pretrained model and fine-tuned model on 16 different corpora. The results show that our approach outperforms those baselines on all corpora: it achieves 1.6% lower phone error rate on average. Additionally, we demonstrate that our corpus-level embeddings are able to capture the characteristics of each corpus, especially the language and domain information. The main contributions of this paper are as follows: Related Work Multilingual speech recognition has explored various models to share parameters across languages in different ways. For example, parameters can be shared by using posterior features from other languages BIBREF5 , applying the same GMM components across different HMM states BIBREF6 , training shared hidden layers in DNNs BIBREF2 , BIBREF3 or LSTM BIBREF4 , using language independent bottleneck features BIBREF7 , BIBREF8 . 
Some models only share their hidden layers, but use separate output layers to predict their phones BIBREF2 , BIBREF3 . Other models have only one shared output layer to predict the universal phone set shared by all languages BIBREF9 , BIBREF10 , BIBREF11 . While those works proposed the multilingual models in different ways, few of them have explicitly exploited the relatedness across various languages and corpora. In contrast, our work computes the relatedness between different corpora using the embedding representations and exploits them efficiently. The embedding representations have been heavily used in multiple fields. In particular, embeddings of multiple granularities have been explored in many NLP tasks. To name a few, character embedding BIBREF12 , subword embedding BIBREF13 , sentence embedding BIBREF14 and document embedding BIBREF15 . However, there are few works exploring the corpus level embeddings. The main reason is that the number of corpora involved in most experiments is usually limited and it is not useful to compute corpus embeddings. The only exception is the multitask learning where many tasks and corpora are combined together. For instance, the language level (corpus level) embedding can be generated along with the model in machine translation BIBREF16 and speech recognition BIBREF17 . However, those embeddings are only used as an auxiliary feature to the model, few works continue to exploit those embeddings themselves. Another important aspect of our work is that we focused on the sampling strategy for speech recognition. While most of the previous speech works mainly emphasized the acoustic modeling side, there are also some attempts focusing on the sampling strategies. For instance, curriculum learning would train the acoustic model by starting from easy training samples and increasingly adapt it to more difficult samples BIBREF0 , BIBREF18 . Active learning is an approach trying to minimize human costs to collect transcribed speech data BIBREF19 . Furthermore, sampling strategies can also be helpful to speed up the training process BIBREF20 . However, the goals of most strategies are to improve the acoustic model by modifying the sampling distribution within a single speech corpus for a single language. On the contrary, our approach aims to optimize the multilingual acoustic model by modifying distributions across all the training corpora. Approach In this section, we describe our approach to compute the corpus embedding and our Corpus Relatedness Sampling strategy. Corpus Embedding Suppose that INLINEFORM0 is the target low-resource corpus, we are interested in optimizing the acoustic model with a much larger training corpora set INLINEFORM1 where INLINEFORM2 is the number of corpora and INLINEFORM3 . Each corpus INLINEFORM4 is a collection of INLINEFORM5 pairs where INLINEFORM6 is the input features and INLINEFORM7 is its target. Our purpose here is to compute the embedding INLINEFORM0 for each corpus INLINEFORM1 where INLINEFORM2 is expected to encode information about its corpus INLINEFORM3 . Those embeddings can be jointly trained with the standard multilingual model BIBREF4 . First, the embedding matrix INLINEFORM4 for all corpora is initialized, the INLINEFORM5 -th row of INLINEFORM6 is corresponding to the embedding INLINEFORM7 of the corpus INLINEFORM8 . Next, during the training phase, INLINEFORM9 can be used to bias the input feature INLINEFORM10 as follows. 
DISPLAYFORM0 where INLINEFORM0 is an utterance sampled randomly from INLINEFORM1 , INLINEFORM2 is its hidden features, INLINEFORM3 is the parameter of the acoustic model and Encoder is the stacked bidirectional LSTM as shown in Figure. FIGREF5 . Next, we apply the language specific softmax to compute logits INLINEFORM4 and optimize them with the CTC objective BIBREF29 . The embedding matrix INLINEFORM5 can be optimized together with the model during the training process. Corpus Relatedness Sampling With the embedding INLINEFORM0 of each corpus INLINEFORM1 , we can compute the similarity score between any two corpora using the cosine similarity. DISPLAYFORM0 As the similarity reflects the relatedness between corpora in the training set, we would like to sample the training set based on this similarity: those corpora which have a higher similarity with the target corpus INLINEFORM0 should be sampled more frequently. Therefore, we assume those similarity scores to be the sampling logits and they should be normalized with softmax. DISPLAYFORM0 where INLINEFORM0 is the probability to sample INLINEFORM1 from INLINEFORM2 , and INLINEFORM3 is the temperature to normalize the distribution during the training phase. We argue that different temperatures could create different training conditions. The model with a lower temperature tends to sample each corpus equally like uniform sampling. In contrast, a higher temperature means that the sampling distribution should be biased toward the target corpus like the fine-tuning. Next, we prove that both the pretrained model and the fine-tuned model can be realized with specific temperatures. In the case of the pretrained model, each corpus should be sampled equally. This can be implemented by setting INLINEFORM0 to be 0. DISPLAYFORM0 On the other hand, the fine-tuned model should only consider samples from the target corpus INLINEFORM0 , while ignoring all other corpora. We argue that this condition can be approximated by setting INLINEFORM1 to a very large number. As INLINEFORM2 and INLINEFORM3 if INLINEFORM4 , we can prove the statement as follows: DISPLAYFORM0 While both the pretrained model and the fine-tuned model are special cases of our approach, we note that our approach is more flexible to sample from related corpora by interpolating between those two extreme temperatures. In practice, we would like to start with a low temperature to sample broadly in the early training phase. Then we gradually increase the temperature so that it can focus more on the related corpora. Eventually, the temperature would be high enough so that the model is automatically fine-tuned on the target corpus. Specifically, in our experiment, we start training with a very low temperature INLINEFORM0 , and increase its value every epoch INLINEFORM1 as follows. DISPLAYFORM0 where INLINEFORM0 is the temperature of epoch INLINEFORM1 and INLINEFORM2 is a hyperparameter to control the growth rate of the temperature. Experiments To demonstrate that our sampling approach could improve the multilingual model, we conduct experiments on 16 corpora to compare our approach with the pretrained model and fine-tuned model. Datasets We first describe our corpus collection. Table. TABREF3 lists all corpora we used in the experiments. There are 16 corpora from 10 languages. To increase the variety of corpus, we selected 4 English corpora and 4 Mandarin corpora in addition to the low resource language corpora. 
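As a concrete reference for the Corpus Relatedness Sampling strategy described above, the following is a minimal NumPy sketch of the softmax-with-temperature sampling distribution and the epoch-wise temperature growth. The paper's exact growth equation is not reproduced in the text above, so the multiplicative schedule, the growth rate, the embedding dimension, and the corpus count below are illustrative assumptions rather than the authors' exact settings.

import numpy as np

def crs_distribution(corpus_embeddings, target_idx, temperature):
    """Corpus Relatedness Sampling: softmax over cosine similarities to the target corpus."""
    e = corpus_embeddings / np.linalg.norm(corpus_embeddings, axis=1, keepdims=True)
    sim = e @ e[target_idx]            # cosine similarity of every corpus to the target
    logits = temperature * sim         # temperature 0 -> uniform; very large -> target corpus only
    logits -= logits.max()             # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Assumed schedule: start at a very low temperature and grow it multiplicatively every epoch.
embeddings = np.random.randn(16, 40)   # placeholder for 16 trained corpus embeddings
temperature, growth = 0.01, 2.0        # assumed values for illustration
for epoch in range(10):
    probs = crs_distribution(embeddings, target_idx=0, temperature=temperature)
    corpus_id = np.random.choice(len(probs), p=probs)   # corpus to draw the next minibatch from
    temperature *= growth

With temperature near zero this reduces to uniform sampling over corpora (the pretrained model), while a very large temperature concentrates all mass on the target corpus (the fine-tuned model), matching the two special cases discussed above.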
As the target of this experiment is low resource speech recognition, we only randomly select 100,000 utterances even if there are more in each corpus. All corpora are available in LDC, voxforge, openSLR or other public websites. Each corpus is manually assigned one domain based on its speech style. Specifically, the domain candidates are telephone, read and broadcast. Experiment Settings We use EESEN BIBREF30 for the acoustic modeling and epitran BIBREF31 as the g2p tool in this work. Every utterance in the corpora is firstly re-sampled into 8000Hz, and then we extract 40 dimension MFCCs features from each audio. We use a recent multilingual CTC model as our acoustic architecture BIBREF4 : The architecture is a 6 layer bidirectional LSTM model with 320 cells in each layer. We use this architecture for both the baseline models and the proposed model. Our baseline model is the fine-tuned model: we first pretrained a model by uniformly sampling from all corpora. After the loss converges, we fine-tune the model on each of our target corpora. To compare it with our sampling approach, we first train an acoustic model to compute the embeddings of all corpora, then the embeddings are used to estimate the similarity as described in the previous section. The initial temperature INLINEFORM0 is set to 0.01, and the growth rate is INLINEFORM1 . We evaluated all models using the phone error rate (PER) instead of the word error rate (WER). The reason is that we mainly focus on the acoustic model in this experiment. Additionally, some corpora (e.g.: Dutch voxforge) in this experiment have very few amounts of texts, therefore it is difficult to create a reasonable language model without augmenting texts using other corpora, which is beyond the scope of this work. Results Table. TABREF16 shows the results of our evaluation. We compare our approach with the baseline using all corpora. The left-most column of Table. TABREF16 shows the corpus we used for each experiment, the remaining columns are corresponding to the phone error rate of the pretrained model, the fine-tuned model and our proposed model. First, we can easily confirm that the fine-tuned model outperforms the pretrained model on all corpora. For instance, the fine-tuned model outperforms the pretrained model by 4.7% on the Amharic corpus. The result is reasonable as the pretrained model is optimized with the entire training set, while the fine-tuned model is further adapted to the target corpus. Next, the table suggests our Corpus Relatedness Sampling approach achieves better results than the fine-tuned model on all test corpora. For instance, the phone error rate is improved from 40.7% to 36.9% on Amharic and is improved from 41.9% to 40.0% on Bengali. On average, our approach outperforms the fine-tuned model by 1.6% phone error rate. The results demonstrate that our sampling approach is more effective at optimizing the acoustic model on the target corpus. We also train baseline models by appending corpus embeddings to input features, but the proposed model outperforms those baselines similarly. One interesting trend we observed in the table is that the improvements differ across the target corpora. For instance, the improvement on the Dutch corpus is 3.4%, on the other hand, its improvement of 0.6% is relatively smaller on the Zulu dataset. We believe the difference in improvements can be explained by the size of each corpus. The size of Dutch corpus is very small as shown in Table. 
TABREF3 , therefore the fine-tuned model is prone to overfit to the dataset very quickly. In contrast, it is less likely for a larger corpus to overfit. Compared with the fine-tuned model, our approach optimizes the model by gradually changing the temperature without quick overfitting. This mechanism could be interpreted as a built-in regularization. As a result, our model can achieve much better performance in small corpora by preventing the overfitting effect. To understand how our corpus embeddings contribute to our approach, we rank those embeddings and show the top-2 similar corpora for each corpus in Table. TABREF17 . We note that the target corpus itself is removed from the rankings because it is the most related corpus to itself. The results of the top half show very clearly that our embeddings can capture the language level information: For most English and Mandarin corpora, the most related corpus is another English or Mandarin corpus. Additionally, the bottom half of the table indicates that our embeddings are able to capture domain level information as well. For instance, the top 2 related corpus for Amharic is Bengali and Swahili. According to Table. TABREF3 , those three corpora belong to the telephone domain. In addition, Dutch is a read corpus, its top 2 related corpora are also from the same domain. This also explains why the 1st related corpus of Mandarin (hk) is Bengali: because both of them are from the same telephone domain. To further investigate the domain information contained in the corpus embeddings, we train the corpus embeddings with an even larger corpora collection (36 corpora) and plot all of them in Figure. FIGREF18 . To create the plot, the dimension of each corpus embedding is reduced to 2 with t-SNE BIBREF32 . The figure demonstrates clearly that our corpus embeddings are capable of capturing the domain information: all corpora with the same domain are clustered together. This result also means that our approach improves the model by sampling more frequently from the corpora of the same speech domain. Conclusion In this work, we propose an approach to compute corpus-level embeddings. We also introduce Corpus Relatedness Sampling approach to train multilingual speech recognition models based on those corpus embeddings. Our experiment shows that our approach outperforms the fine-tuned multilingual models in all 16 test corpora by 1.6 phone error rate on average. Additionally, we demonstrate that our corpus embeddings can capture both language and domain information of each corpus. Acknowledgements This project was sponsored by the Defense Advanced Research Projects Agency (DARPA) Information Innovation Office (I2O), program: Low Resource Languages for Emergent Incidents (LORELEI), issued by DARPA/I2O under Contract No. HR0011-15-C-0114.
First, the embedding matrix INLINEFORM4 for all corpora is initialized. During the training phase, INLINEFORM9 can be used to bias the input feature. Next, we apply the language-specific softmax to compute logits INLINEFORM4 and optimize them with the CTC objective.
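As a concrete illustration of the pipeline summarized in this answer, below is a minimal PyTorch-style sketch of an acoustic model whose input is biased by a trainable corpus embedding and whose logits come from a language-specific output layer trained with CTC. The additive form of the bias, the layer sizes, and the per-language head structure are assumptions, since the corresponding equations are not reproduced in the extracted text.

import torch
import torch.nn as nn

class CorpusAwareCTCModel(nn.Module):
    """Stacked BiLSTM encoder with a corpus-embedding input bias and per-language CTC heads."""

    def __init__(self, num_corpora, num_langs, phone_sizes, feat_dim=40, hidden=320, layers=6):
        super().__init__()
        self.corpus_emb = nn.Embedding(num_corpora, feat_dim)   # embedding matrix, one row per corpus
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=layers,
                               bidirectional=True, batch_first=True)
        # One projection per language, each over that language's phone set plus the CTC blank.
        self.heads = nn.ModuleList(
            [nn.Linear(2 * hidden, phone_sizes[l] + 1) for l in range(num_langs)]
        )

    def forward(self, feats, corpus_id, lang_id):
        # feats: (batch, time, feat_dim); corpus_id: (batch,) LongTensor; lang_id: int
        bias = self.corpus_emb(corpus_id)                 # (batch, feat_dim)
        hidden, _ = self.encoder(feats + bias.unsqueeze(1))
        logits = self.heads[lang_id](hidden)              # (batch, time, phones + blank)
        return logits.log_softmax(dim=-1)                 # consumed by nn.CTCLoss during training

Under Corpus Relatedness Sampling, each minibatch comes from a single sampled corpus, so a single corpus_id and lang_id per batch is assumed here.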
70f84c73172211186de1a27b98f5f5ae25a94e55
70f84c73172211186de1a27b98f5f5ae25a94e55_0
Q: Which dataset do they use? Text: Introduction Deep models have been shown to be vulnerable against adversarial input perturbations BIBREF0, BIBREF1. Small, semantically invariant input alterations can lead to drastic changes in predictions, leading to poor performance on adversarially chosen samples. Recent work BIBREF2, BIBREF3, BIBREF4 also exposed the vulnerabilities of neural NLP models, e.g. with small character perturbations BIBREF5 or paraphrases BIBREF6, BIBREF7. These adversarial attacks highlight often unintuitive model failure modes and present a challenge to deploying NLP models. Common attempts to mitigate the issue are adversarial training BIBREF5 and data augmentation BIBREF3, BIBREF8, which lead to improved accuracy on adversarial examples. However, this might cause a false sense of security, as there is generally no guarantee that stronger adversaries could not circumvent defenses to find other successful attacks BIBREF9, BIBREF10, BIBREF11. Rather than continuing the race with adversaries, formal verification BIBREF12, BIBREF13, BIBREF14 offers a different approach: it aims at providing provable guarantees to a given model specification. In the case of adversarial robustness, such a specification can be formulated as prediction consistency under any altered – but semantically invariant – input change. In this paper, we study verifiable robustness, i.e., providing a certificate that for a given network and test input, no attack or perturbation under the specification can change predictions, using the example of text classification tasks, Stanford Sentiment Treebank (SST) BIBREF15 and AG News BIBREF16. The specification against which we verify is that a text classification model should preserve its prediction under character (or synonym) substitutions in a character (or word) based model. We propose modeling these input perturbations as a simplex and then using Interval Bound Propagation (IBP) BIBREF17, BIBREF18, BIBREF19 to compute worst case bounds on specification satisfaction, as illustrated in Figure FIGREF1. Since these bounds can be computed efficiently, we can furthermore derive an auxiliary objective for models to become verifiable. The resulting classifiers are efficiently verifiable and improve robustness on adversarial examples, while maintaining comparable performance in terms of nominal test accuracy. The contributions of this paper are twofold: To the best of our knowledge, this paper is the first to introduce verification and verifiable training for neural networks in natural language processing (§SECREF3). Through a series of experiments (§SECREF4), we demonstrate (a) the effectiveness of modeling input perturbations as a simplex and using simplex bounds with IBP for training and testing, (b) the weakness of adversarial training under exhaustive verification, (c) the effects of perturbation space on the performance of different methods, and (d) the impact of using GloVe and counter-fitted embeddings on the IBP verification bounds. Related Work ::: Adversarial Examples in NLP. Creating adversarial examples for NLP systems requires identifying semantically invariant text transformations to define an input perturbation space. In this paper, given our specification, we study word- and character-level HotFlip attacks BIBREF5 – which consist of character and synonym replacements – on text classification tasks. We compare our verifiable approach to other defenses including adversarial training BIBREF20 and data augmentation BIBREF8, BIBREF3. 
Note that some existing adversarial perturbations such as syntactically controlled paraphrasing BIBREF7, exploiting backtranslation systems BIBREF6, or using targeted keyword attack BIBREF21 are beyond the specification in this paper. Related Work ::: Formal Verification of Neural Networks. Formal verification provides a provable guarantee that models are consistent with a specification for all possible model inputs. Previous work can be categorised into complete methods that use Mixed-Integer Programming (MIP) BIBREF22, BIBREF23 or Satisfiability Modulo Theory (SMT) BIBREF14, BIBREF24, and incomplete methods that solve a convex relaxation of the verification problem BIBREF25, BIBREF26, BIBREF27. Complete methods perform exhaustive enumeration to find the worst case. Hence, complete methods are expensive and difficult to scale, though they provide exact robustness bounds. Incomplete methods provide loose robustness bounds, but can be more scalable and used inside the training loop for training models to be robust and verifiable BIBREF28, BIBREF26, BIBREF19, BIBREF17. Our work is the first to extend incomplete verification to text classification, considering input perturbations on a simplex and minimising worst case bounds to adversarial attacks in text classification. We highlight that the verification of neural networks is an extremely challenging task, and that scaling complete and incomplete methods to large models remains an open challenge. Related Work ::: Representations of Combinatorial Spaces. Word lattices and hypergraphs are data structures that have often been used to efficiently represent and process exponentially large numbers of sentences without exhaustively enumerating them. Applications include automatic speech recognition (ASR) output rescoring BIBREF29, machine translation of ASR outputs BIBREF30, paraphrase variants BIBREF31, and word segmentation alternatives BIBREF32. The specifications used to characterise the space of adversarial attacks are likewise a compact representation, and the algorithms discussed below operate on them without exhaustive enumeration. Methodology We assume a fixed initial vector representation $\mathbf {z} _0$ of a given input sentence $z$ (e.g. the concatenation of pretrained word embeddings) and use a neural network model, i.e. a series of differentiable transformations $h_k$: where $\mathbf {z} _k$ is the vector of activations in the $k$-th layer and the final output $\mathbf {z} _K$ consists of the logits for each class. Typically each $h_k$ will be an affine transformation followed by an activation function (e.g. ReLU or sigmoid). The affine transformation can be a convolution (with the inputs and outputs having an implied 2D structure) of a vector of activations at each point in a sequence; in what follows these activations will be concatenated along the sequence to form a vector $\mathbf {z} _k$. Methodology ::: Verification Verification is the process of examining whether the output of a model satisfies a given specification. Formally, this means establishing whether the following holds true for a given normal model input $\mathbf {x} _0$: $\forall \mathbf {z} _0 \in \mathcal {X}_\mathrm {in}(\mathbf {x} _0):~ \mathbf {z} _K \in \mathcal {X}_\mathrm {out}$, where $\mathcal {X}_\mathrm {out}$ characterizes a constraint on the outputs, and $\mathcal {X}_\mathrm {in}(\mathbf {x} _0)$ defines a neighbourhood of $\mathbf {x} _0$ throughout which the constraint should be satisfied. 
In our concrete use case, we consider a specification of robustness against adversarial attacks which are defined by bounded input perturbations (synonym flips up to $\delta $ words, or character flips up to $\delta $ characters) of the original sentence $x$. The attack space $\mathcal {X}_\mathrm {in} (\mathbf {x} _0)$ is the set of vector representations (embeddings) of all such perturbed sentences. Denoting by $z_{K,y}$ the logit of label $y$, we formulate the output constraint that for all classes $y: z_{K,y_\textrm {true}} \ge z_{K,y}$. This specification establishes that the prediction of all perturbed sentences $\mathbf {z} _0 \in \mathcal {X}_\mathrm {in}(\mathbf {x} _0)$ should correspond to the correct label $y_\textrm {true}$. This specification may equivalently be formulated as a set of half-space constraints on the logits: for each class $y$ where $\mathbf {e}_{i}$ is a one-hot vector with 1 in the $i$-th position. In other words, the true class logit should be greater or equal than those for all other classes $y$, which means the prediction remains constant. Methodology ::: Verification as Optimisation Verifying the specification in Eq. (DISPLAY_FORM10) can be done by solving the following constrained optimisation problem to find the input that would most strongly violate it: where $\mathbf {c} $ is a vector with entries $c_y = 1$, $c_{y_\textrm {true}} = -1$ and 0 everywhere else. If the optimal value of the above optimisation problem is smaller than 0, then the specification in Eq. (DISPLAY_FORM10) is satisfied, otherwise a counter-example has been found. In our case, this corresponds to a successful adversarial attack. Methodology ::: Modeling Input Perturbations using Simplices In the interests of computational feasibility, we will actually attempt to verify the specification on a larger, but more tractable input perturbation space $\bar{\mathcal {X}}_\mathrm {in} \supseteq \mathcal {X}_\mathrm {in}$. Any data point that is verifiable on this larger input perturbation space is necessarily verifiable with respect to the original specification. In the domain of image classification, $\mathcal {X}_\mathrm {in}$ is often modeled as an $L_\infty $-ball, corresponding to input perturbations in which each pixel may be independently varied within a small interval. However, using such interval bounds is unsuitable for our situation of perturbations consisting of a small number $\delta $ of symbol substitutions. Although we could construct an axis-aligned bounding box $\bar{\mathcal {X}}_\mathrm {in}$ in embedding space that encompasses all of $\mathcal {X}_\mathrm {in}$, it would over-approximate the perturbation space to such an extent that it would contain perturbations where all symbols in the sentence have been substituted simultaneously. To remedy this, we propose a tighter over-approximation in the form of a `simplex' in embedding space. We first define this for the special case $\delta =1$, in which $\mathcal {X}_\mathrm {in} = \lbrace \mathbf {x} _0\rbrace \cup \lbrace \mathbf {p} ^{(m)}_0 : 1\le m\le M\rbrace $ consists of the representations of all $M$ sentences $p^{(m)}$ derived from $x$ by performing a single synonym (or character) substitution, together with the unperturbed sentence $x$ itself. In this case we define $\bar{\mathcal {X}}_\mathrm {in}$ to be the convex hull $\mathcal {S}_1$ of $\mathcal {X}_\mathrm {in}$. Note we are not considering contextual embeddings BIBREF33 here. 
Each `vertex' $\mathbf {p} ^{(m)}_0$ is a sequence of embedding vectors that differs from $\mathbf {x} _0$ at only one word (or character) position. For a larger perturbation radius $\delta >1$, the cardinality of $\mathcal {X}_\mathrm {in}$ grows exponentially, so manipulating its convex hull becomes infeasible. However, dilating $\mathcal {S}_1$ centered at $\mathbf {x} _0$, scaling it up by a factor of $\delta $, yields a simplex $\mathcal {S}_\delta $ with $M+1$ vertices that contains $\mathcal {X}_\mathrm {in}$. More formally, we define a region in the input embedding space based on the $M$ `elementary' perturbations $\lbrace \mathbf {p} ^{(m)}_0: m = 1 \ldots M\rbrace $ of $\mathbf {x} _0$ defined earlier for the $\delta =1$ case. For perturbations of up to $\delta $ substitutions, we define $\bar{\mathcal {X}}_\mathrm {in}(\mathbf {x} _0)$ as the convex hull of $\lbrace \mathbf {z} ^{(m)}_0: m = 0 \ldots M\rbrace $, where $\mathbf {z} ^{(0)}_0=\mathbf {x} _0$ denotes the original (unperturbed) sentence representation and, for $m\ge 1$, $\mathbf {z} ^{(m)}_0 = \mathbf {x} _0+\delta \cdot (\mathbf {p} ^{(m)}_0-\mathbf {x} _0)$. The convex hull is an over-approximation of $\mathcal {X}_\mathrm {in}(\mathbf {x} _0)$: it contains the representations of all sentences derived from $x$ by performing up to $\delta $ substitutions at distinct word (or character) positions. Methodology ::: Interval Bound Propagation To estimate the optimal value of the problem (DISPLAY_FORM12), given an input $\mathbf {z} _0$, we can propagate the upper/lower bounds on the activations $\mathbf {z} _k$ of each layer using interval arithmetic BIBREF17. We begin by computing interval bounds on the first layer's activations. Recall that any input $\mathbf {z} _0 \in \mathcal {X}_\mathrm {in}$ will lie within the convex hull of certain vertices $\lbrace \mathbf {z} ^{(m)}_0: m = 0 \ldots M\rbrace $. Then, assuming that the first layer $h_1$ is an affine transformation (e.g. linear or convolutional) followed by a monotonic activation function, the lower and upper bounds on the components $z_{1,i}$ of the first layer's activations $\mathbf {z} _1$ are as follows: Note that these bounds are efficient to compute (by passing each perturbation $\mathbf {z} ^{(m)}_0$ through the first layer); in particular there is no need to compute the convex hull polytope. For subsequent layers $k>1$, the bounds on the components $z_{k,i}$ of $\mathbf {z} _k$ are: The above optimisation problems can be solved in closed form quickly for affine layers and monotonic activation functions, as illustrated in IBP. Finally, the lower and upper bounds of the output logits $\mathbf {z} _K$ can be used to construct an upper bound on the solution of (DISPLAY_FORM12): Methodology ::: Interval Bound Propagation ::: Verifiable Training. The upper bound in (DISPLAY_FORM17) is fast to compute (only requires two forward passes for upper and lower bounds through the network). Hence, we can define a loss to optimise models such that the models are trained to be verifiable. Solving (DISPLAY_FORM17) is equivalent to finding the worst-case logit difference, and this is achieved when the logit of the true class is equal to its lower bound, and all other logits equal to their upper bounds. Concretely, for each class $y \ne y_\textrm {true} $: $\hat{\mathbf {z}}_{K,y}(\delta ) = \overline{\mathbf {z}}_{K,y} (\delta ) $, and $\hat{\mathbf {z}}_{K,y_\textrm {true}}(\delta ) = \underline{\mathbf {z}}_{K,y_\textrm {true}} (\delta ) $. 
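A minimal NumPy sketch of the simplex bounds and interval propagation described above, written for a fully connected ReLU network over flattened embeddings. The paper's models also contain convolutional layers, and the weights and shapes here are placeholders, so this is an illustrative sketch of the bound computation rather than the authors' implementation.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def first_layer_bounds(x0, elem_perturbs, delta, W1, b1):
    """Bounds on the first layer's activations over the perturbation simplex.

    x0            : (d,) flattened embeddings of the unperturbed sentence.
    elem_perturbs : (M, d) embeddings of the M single-substitution sentences p^(m).
    Simplex vertices: z^(0) = x0 and z^(m) = x0 + delta * (p^(m) - x0).
    """
    vertices = np.vstack([x0[None, :], x0[None, :] + delta * (elem_perturbs - x0[None, :])])
    acts = relu(vertices @ W1.T + b1)          # one cheap forward pass per vertex
    return acts.min(axis=0), acts.max(axis=0)  # elementwise lower / upper bounds

def ibp_layer(lower, upper, W, b, act=relu):
    """Interval arithmetic for an affine layer followed by a monotonic activation.

    For the final output (logit) layer, pass act=lambda x: x so no activation is applied.
    """
    center, radius = (upper + lower) / 2.0, (upper - lower) / 2.0
    new_center, new_radius = W @ center + b, np.abs(W) @ radius
    return act(new_center - new_radius), act(new_center + new_radius)

def verified(lower_logits, upper_logits, y_true):
    """True if the worst-case logit difference is negative for every class y != y_true."""
    margins = upper_logits - lower_logits[y_true]
    margins[y_true] = -np.inf
    return margins.max() < 0.0

Only two bound vectors are carried through the network, which is why verification at test time costs the equivalent of two forward passes regardless of the perturbation budget.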
The training loss can then be formulated as where $\ell $ is the cross-entropy loss, $\kappa $ a hyperparameter that controls the relative weights between the classification loss $L_\textrm {normal}$ and specification loss $L_\textrm {spec}$. If $\delta = 0$ then $\mathbf {z} _K = \hat{\mathbf {z}}_K(\delta )$, and thus $L$ reduces to a standard classification loss. Empirically, we found that a curriculum-based training, starting with $\kappa $=1 and linearly decreasing to 0.25, is effective for verifiable training. Experiments We conduct verification experiments on two text classification datasets, Stanford Sentiment Treebank (SST) BIBREF15 and AG News corpus, processed in BIBREF16. We focus on word-level and character-level experiments on SST and character-level experiments on AG News. Our specification is that models should preserve their prediction against up to $\delta $ synonym substitutions or character typos, respectively. Experiments ::: A Motivating Example We provide an example from Table TABREF29 to highlight different evaluation metrics and training methods. Given a sentence, “you ' ve seen them a million times .”, that is predicted correctly (called Nominal Accuracy) by a classification model, we want to further examine whether the model is robust against character typos (e.g., up to $\delta =3$ typos) to this example. One way is to use some heuristic to search for a valid example with up to 3 typos that can change the prediction the most (called adversarial example). We evaluate the model using this adversarial example and report the performance (called Adversarial Accuracy). However, even if the adversarial example is predicted correctly, one can still ask: is the model truly robust against any typos (up to 3) to this example? In order to have a certificate that the prediction will not change under any $\delta =3$ character typos (called verifiably robust), we could in theory exhaustively search over all possible cases and check whether any of the predictions is changed (called Oracle Accuracy). If we only allow a character to be replaced by another character nearby on the keyboard, already for this short sentence we need to exhaustively search over 2,951 possible perturbations. To avoid this combinatorial growth, we can instead model all possible perturbations using the proposed simplex bounds and propagate the bounds through IBP at the cost of two forward passes. Following Eq. (DISPLAY_FORM12), we can check whether this example can be verified to be robust against all perturbations (called IBP-Verified Accuracy). There are also a number of ways in which the training procedure can be enhanced to improve the verifiable robustness of a model against typos to the sentence. The baseline is to train the model with the original/normal sentence directly (called Normal Training). Another way is to randomly sample typo sentences among the 2,951 possible perturbations and add these sentences to the training data (called Data Augmentation Training). Yet another way is to find, at each training iteration, the adversarial example among the (subset of) 2,951 possible perturbations that can change the prediction the most; we then use the adversarial example alongside the training example (called Adversarial Training). Finally, as simplex bounds with IBP is efficient to run, we can train a model to be verifiable by minimising Eq. (DISPLAY_FORM19) (called Verifiable Training). Experiments ::: Baselines In this section we detail our baseline models. 
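For reference, the verifiable training objective referred to above as Eq. (DISPLAY_FORM19) can be sketched as follows. The exact interpolation equation is not reproduced in the extracted text, so the convex combination of the two cross-entropy terms and the linear curriculum below are stated as assumptions consistent with the description.

import numpy as np

def cross_entropy(logits, y_true):
    z = logits - logits.max()
    return float(np.log(np.exp(z).sum()) - z[y_true])

def verifiable_loss(logits, lower_logits, upper_logits, y_true, kappa):
    """Assumed form: kappa * L_normal + (1 - kappa) * L_spec.

    L_spec is the cross-entropy of the worst-case logits: the true class takes its
    lower bound, every other class takes its upper bound.
    """
    worst = upper_logits.copy()
    worst[y_true] = lower_logits[y_true]
    return kappa * cross_entropy(logits, y_true) + (1.0 - kappa) * cross_entropy(worst, y_true)

def kappa_schedule(step, total_steps):
    """Curriculum described in the text: kappa decreases linearly from 1.0 to 0.25."""
    return 1.0 - 0.75 * min(step / float(total_steps), 1.0)

When the perturbation budget is zero the worst-case logits coincide with the nominal logits, and the objective reduces to a standard classification loss, as noted above.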
Experiments ::: Baselines ::: Adversarial Training. In adversarial training BIBREF34, BIBREF20, the goal is to optimise the following saddle point problem: where the inner maximisation problem is to find an adversarial perturbation $\mathbf {z} _0\in \mathcal {X}_\mathrm {in}(\mathbf {x} _0)$ that can maximise the loss. In the inner maximisation problem, we use HotFlip BIBREF5 with perturbation budget $\delta $ to find the adversarial example. The outer minimisation problem aims to update model parameters such that the adversarial risk of (DISPLAY_FORM24) is minimised. To balance between the adversarial robustness and nominal accuracy, we use an interpolation weight of 0.5 between the original cross-entropy loss and the adversarial risk. Experiments ::: Baselines ::: Data Augmentation Training. In the data augmentation setup, we randomly sample a valid perturbation $z$ with perturbation budget $\delta $ from a normal input $x$, and minimise the cross-entropy loss given the perturbed sample $z$ (denoted as data augmentation loss). We also set the interpolation weight between the data augmentation loss and the original normal cross-entropy loss to 0.5. Experiments ::: Baselines ::: Normal Training. In normal training, we use the likelihood-based training using the normal training input $x$. Experiments ::: Setup We use a shallow convolutional network with a small number of fully-connected layers for SST and AG News experiments. The detailed model architectures and hyperparameter details are introduced in the supplementary material. Although we use shallow models for ease of verifiable training, our nominal accuracy is on par with previous work such as BIBREF15 (85.4%) and BIBREF35 (84.3%) in SST and BIBREF16 (87.18%) in AG News. During training, we set the maximum number of perturbations to $\delta =3$, and evaluate performance with the maximum number of perturbations from $\delta =1$ to 6 at test time. For word-level experiments, we construct the synonym pairs using the PPDB database BIBREF36 and filter the synonyms with fine-grained part-of-speech tags using Spacy BIBREF37. For character-level experiments, we use synthetic keyboard typos from BIBREF3, and allow one possible alteration per character that is adjacent to it on an American keyboard. The allowable input perturbation space is much larger than for word-level synonym substitutions, as shown in Table TABREF48. Experiments ::: Evaluation Metrics We use the following four metrics to evaluate our models: i) test set accuracy (called Acc.), ii) adversarial test accuracy (called Adv. Acc.), which uses samples generated by HotFlip attacks on the original test examples, iii) verifiable accuracy under IBP verification (called IBP-verified), that is, the ratio of test samples for which IBP can verify that the specification is not violated, and iv) exhaustively verified accuracy (called Oracle), computed by enumerating all possible perturbations given the perturbation budget $\delta $, where a sample is verifiably robust if the prediction is unchanged under all valid perturbations. Experiments ::: Results Table TABREF28 shows the results of IBP training and baseline models under $\delta =3$ and $\delta =2$ perturbations on SST and AG News, respectively. Figures FIGREF31 and FIGREF36 show the character- and word-level results with $\delta $ between 1 and 6 under four metrics on the SST test set; similar figures for SST word-level (adversarial training, data augmentation) models and AG News dataset can be found in the supplementary material. 
Experiments ::: Results ::: Oracle Accuracy and Adversarial Accuracy. In Table TABREF28, comparing adversarial accuracy with exhaustive verification accuracy (oracle), we observe that although adversarial training is effective at defending against HotFlip attacks (74.9 / 76.8 / 85.5%), the oracle adversarial accuracy under exhaustive testing (25.8 / 74.6 / 81.6%) is much lower in SST-character / SST-word / AG-character level, respectively. For illustration, we show some concrete adversarial examples from the HotFlip attack in Table TABREF29. For some samples, even though the model is robust with respect to HotFlip attacks, its predictions are incorrect for stronger adversarial examples obtained using the exhaustive verification oracle. This underscores the need for verification, as robustness with respect to suboptimal adversarial attacks alone might give a false sense of security. Experiments ::: Results ::: Effectiveness of Simplex Bounds with IBP. Rather than sampling individual points from the perturbation space, IBP training covers the full space at once. The resulting models achieve the highest exhaustively verified accuracy at the cost of only moderate deterioration in nominal accuracy (Table TABREF28). At test time, IBP allows for constant-time verification with arbitrary $\delta $, whereas exhaustive verification requires evaluation over an exponentially growing search space. Experiments ::: Results ::: Perturbation Space Size. In Table TABREF28, when the perturbation space is larger (SST character-level vs. SST word-level), (a) across models, there is a larger gap in adversarial accuracy and true robustness (oracle); (b) the difference in oracle robustness between IBP and adversarial training is even larger (73.1% vs. 25.8% and 76.5% vs. 74.6%). Experiments ::: Results ::: Perturbation Budget. In Figures FIGREF31 and FIGREF36, we compare normal training, adversarial training, data augmentation, and verifiable training models with four metrics under various perturbation budgets on the SST dataset. Overall, as the perturbation budget increases, the adversarial accuracy, oracle accuracy, and IBP-verified accuracy decrease. We can observe that even for large perturbation budgets, verifiably trained models are still able to verify a sizable number of samples. Again, although adversarial accuracy flattens for larger perturbation budgets in the word level experiments, oracle verification can further find counterexamples to change the prediction. Note that exhaustive verification becomes intractable with large perturbation sizes. Experiments ::: Results ::: Computational Cost of Exhaustive Verification. The perturbation space in NLP problems is discrete and finite, and a valid option to verify the specification is to exhaustively generate predictions for all $\mathbf {z} _0 \in \mathcal {X}_\mathrm {in} (\mathbf {x} _0)$, and then check if at least one does not match the correct label. Conversely, such an exhaustive (oracle) approach can also identify the strongest possible attack. But the size of $\mathcal {X}_\mathrm {in}$ grows exponentially with $\delta $, and exhaustive verification quickly becomes prohibitively expensive. In Table TABREF48, we show the maximum perturbation space size in the SST and AG News test set for different perturbation radii $\delta $. This number grows exponentially as $\delta $ increases. 
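For concreteness, the exhaustive oracle can be sketched as a brute-force enumeration over substitution positions and candidate replacements. The predict function and the per-position candidate lists below are hypothetical stand-ins; it is exactly this combinatorial loop whose cost explodes as the budget grows.

from itertools import combinations, product

def oracle_verified(tokens, candidates, delta, predict, y_true):
    """Exhaustive verification: check the prediction under every perturbation of up to delta positions.

    tokens     : list of words (or characters) of the input sentence.
    candidates : candidates[i] is the list of allowed substitutions at position i
                 (synonyms or adjacent-keyboard typos).
    predict    : hypothetical function mapping a token list to a predicted label.
    """
    for k in range(1, delta + 1):
        for positions in combinations(range(len(tokens)), k):
            pools = [candidates[i] for i in positions]
            for substitution in product(*pools):
                perturbed = list(tokens)
                for pos, sub in zip(positions, substitution):
                    perturbed[pos] = sub
                if predict(perturbed) != y_true:
                    return False            # counter-example found: not robust
    return True                             # verifiably robust for this input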
To further illustrate this, Figure FIGREF49 shows the number of forward passes required to verify a given proportion of the SST test set for an IBP-trained model using exhaustive verification and IBP verification. IBP reaches verification levels comparable to an exhaustive verification oracle, but requires only two forward passes to verify any sample – one pass for computing the upper, and one for the lower bounds. Exhaustive verification, on the other hand, requires several orders of magnitude more forward passes, and there is a tail of samples with extremely large attack spaces. Experiments ::: Counter-Fitted Embeddings As shown in Figures FIGREF31 and FIGREF36, although IBP can verify arbitrary networks in theory, the verification bound is very loose except for models trained to be IBP-verifiable. One possible reason is the potentially large volume of the perturbation simplex. Since representations of substitution words/characters are not necessarily close to those of synonyms/typos in embedding space, the vertices of the simplex could be far apart, and thus cover a large area in representation space. Therefore, when propagating the interval bounds through the network, the interval bounds become too loose and fail to verify most of the examples if the models are not specifically trained. To test this hypothesis, we follow BIBREF38 and use fine-tuned GloVe embeddings trained to respect linguistic constraints; these representations (called counter-fitted embeddings) force synonyms to be closer and antonyms to be farther apart using word pairs from the PPDB database BIBREF36 and WordNet BIBREF39. We repeat the word level experiments with these counter-fitted embeddings, Figures FIGREF36 and FIGREF36 show the experimental results. We observe that IBP verified accuracy is now substantially higher across models, especially for $\delta =1, 2, 3$. The examples which IBP can verify increase by up to 33.2% when using the counter-fitted embeddings (normal training, $\delta =1$). Moreover, adversarial and exhaustively verified accuracy are also improved, at the cost of a mild deterioration in nominal test accuracy. The IBP-trained model also further improves both its oracle accuracy and IBP verified accuracy. These results validate our hypothesis that reducing the simplex volume via soft linguistic constraints can provide even tighter bounds for IBP, resulting in larger proportions of verifiable samples. Discussion Our experiments indicate that adversarial attacks are not always the worst adversarial inputs, which can only be revealed via verification. On the other hand, exhaustive verification is computationally very expensive. Our results show that using the proposed simplex bounds with IBP can verify a sizable amount of test samples, and can be considered a potent verification method in an NLP context. We note however two limitations within the scope of this work: i) limited model depth: we only investigated models with few layers. IBP bounds are likely to become looser as the number of layers increases. ii) limited model types: we only studied models with CNN and fully connected layers. We focused on the HotFlip attack to showcase specification verification in the NLP context, with the goal of understanding factors that impact its effectiveness (e.g. the perturbation space volume, see Section SECREF50). 
It is worth noting that symbol substitution is general enough to encompass other threat models such as lexical entailment perturbations BIBREF40, and could potentially be extended to the addition of pre/postfixes BIBREF2, BIBREF41. Interesting directions of future work include: tightening IBP bounds to allow applicability to deeper models, investigating bound propagation in other types of neural architectures (e.g. those based on recurrent networks or self-attention), and exploring other forms of specifications in NLP. Conclusion We introduced formal verification of text classification models against synonym and character flip perturbations. Through experiments, we demonstrated the effectiveness of the proposed simplex bounds with IBP both during training and testing, and found weaknesses of adversarial training compared with exhaustive verification. Verifiably trained models achieve the highest exhaustive verification accuracy on SST and AG News. IBP verifies models in constant time, which is exponentially more efficient than naive verification via exhaustive search.
Stanford Sentiment Treebank (SST) BIBREF15 and AG News BIBREF16
10ddc5caf36fe9d7438eb5a3936e24580c4ffe6a
10ddc5caf36fe9d7438eb5a3936e24580c4ffe6a_0
Q: Which competitive relational classification models do they test? Text: Introduction Author contributions: Hao Zhu designed the research; Weize Chen prepared the data, and organized data annotation; Hao Zhu and Xu Han designed the experiments; Weize Chen performed the experiments; Hao Zhu, Weize Chen and Xu Han wrote the paper; Zhiyuan Liu and Maosong Sun proofread the paper. Zhiyuan Liu is the corresponding author. Relations, representing various types of connections between entities or arguments, are the core of expressing relational facts in most general knowledge bases (KBs) BIBREF0 , BIBREF1 . Hence, identifying relations is a crucial problem for several information extraction tasks. Although considerable effort has been devoted to these tasks, some nuances between similar relations are still overlooked, (tab:similarityexample shows an example); on the other hand, some distinct surface forms carrying the same relational semantics are mistaken as different relations. These severe problems motivate us to quantify the similarity between relations in a more effective and robust method. In this paper, we introduce an adaptive and general framework for measuring similarity of the pairs of relations. Suppose for each relation INLINEFORM0 , we have obtained a conditional distribution, INLINEFORM1 ( INLINEFORM2 are head and tail entities, and INLINEFORM3 is a relation), over all head-tail entity pairs given INLINEFORM4 . We could quantify similarity between a pair of relations by the divergence between the conditional probability distributions given these relations. In this paper, this conditional probability is given by a simple feed-forward neural network, which can capture the dependencies between entities conditioned on specific relations. Despite its simplicity, the proposed network is expected to cover various facts, even if the facts are not used for training, owing to the good generalizability of neural networks. For example, our network will assign a fact a higher probability if it is “logical”: e.g., the network might prefer an athlete has the same nationality as same as his/her national team rather than other nations. Intuitively, two similar relations should have similar conditional distributions over head-tail entity pairs INLINEFORM0 , e.g., the entity pairs associated with be trade to and play for are most likely to be athletes and their clubs, whereas those associated with live in are often people and locations. In this paper, we evaluate the similarity between relations based on their conditional distributions over entity pairs. Specifically, we adopt Kullback–Leibler (KL) divergence of both directions as the metric. However, computing exact KL requires iterating over the whole entity pair space INLINEFORM1 , which is quite intractable. Therefore, we further provide a sampling-based method to approximate the similarity score over the entity pair space for computational efficiency. Besides developing a framework for assessing the similarity between relations, our second contribution is that we have done a survey of applications. We present experiments and analysis aimed at answering five questions: (1) How well does the computed similarity score correlate with human judgment about the similarity between relations? How does our approach compare to other possible approaches based on other kinds of relation embeddings to define a similarity? (sec:relationship and sec:human-judgment) (2) Open IE models inevitably extract many redundant relations. 
How can our approach help reduce such redundancy? (sec:openie) (3) To which extent, quantitatively, does best relational classification models make errors among similar relations? (sec:error-analysis) (4) Could similarity be used in a heuristic method to enhance negative sampling for relation prediction? (sec:training-guidance-relation-prediction) (5) Could similarity be used as an adaptive margin in softmax-margin training method for relation extraction? (sec:training-guidance-relation-extraction) Finally, we conclude with a discussion of valid extensions to our method and other possible applications. Learning Head-Tail Distribution Just as introduced in sec:introduction, we quantify the similarity between relations by their corresponding head-tail entity pair distributions. Consider the typical case that we have got numbers of facts, but they are still sparse among all facts in the real world. How could we obtain a well-generalized distribution over the whole space of possible triples beyond the training facts? This section proposes a method to parameterize such a distribution. Formal Definition of Fact Distribution A fact is a triple INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 are called head and tail entities, INLINEFORM3 is the relation connecting them, INLINEFORM4 and INLINEFORM5 are the sets of entities and relations respectively. We consider a score function INLINEFORM6 maps all triples to a scalar value. As a special case, the function can be factorized into the sum of two parts: INLINEFORM7 . We use INLINEFORM8 to define the unnormalized probability. DISPLAYFORM0 for every triple INLINEFORM0 . The real parameter INLINEFORM1 can be adjusted to obtain difference distributions over facts. In this paper, we only consider locally normalized version of INLINEFORM0 : DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are directly parameterized by feed-forward neural networks. Through local normalization, INLINEFORM2 is naturally a valid probability distribution, as the partition function INLINEFORM3 . Therefore, INLINEFORM4 . Neural architecture design Here we introduce our special design of neural networks. For the first part and the second part, we implement the scoring functions introduced in eq:local-normalization as DISPLAYFORM0 where each INLINEFORM0 represents a multi-layer perceptron composed of layers like INLINEFORM1 , INLINEFORM2 , INLINEFORM3 are embeddings of INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 includes weights and biases in all layers. Training Now we discuss the method to perform training. In this paper, we consider joint training. By minimizing the loss function, we compute the model parameters INLINEFORM0 : DISPLAYFORM0 where INLINEFORM0 is a set of triples. The whole set of parameters, INLINEFORM1 . We train these parameters by Adam optimizer BIBREF2 . Training details are shown in sec:trainingdetail. Quantifying Similarity So far, we have talked about how to use neural networks to approximate the natural distribution of facts. The center topic of our paper, quantifying similarity, will be discussed in detail in this section. Relations as Distributions In this paper, we provide a probability view of relations by representing relation INLINEFORM0 as a probability distribution INLINEFORM1 . After training the neural network on a given set of triples, the model is expected to generalize well on the whole INLINEFORM2 space. Note that it is very easy to calculate INLINEFORM0 in our model thanks to local normalization (eq:local-normalization). 
Therefore, we can compute it by DISPLAYFORM0 Defining Similarity As the basis of our definition, we hypothesize that the similarity between INLINEFORM0 reflects the similarity between relations. For example, if the conditional distributions of two relations put mass on similar entity pairs, the two relations should be quite similar. If they emphasize different ones, the two should have some differences in meaning. Formally, we define the similarity between two relations as a function of the divergence between the distributions of corresponding head-tail entity pairs: DISPLAYFORM0 where INLINEFORM0 denotes Kullback–Leibler divergence, DISPLAYFORM0 vice versa, and function INLINEFORM0 is a symmetrical function. To keep the coherence between semantic meaning of “similarity” and our definition, INLINEFORM1 should be a monotonically decreasing function. Through this paper, we choose to use an exponential family composed with max function, i.e., INLINEFORM2 . Note that by taking both sides of KL divergence into account, our definition incorporates both the entity pairs with high probability in INLINEFORM3 and INLINEFORM4 . Intuitively, if INLINEFORM5 mainly distributes on a proportion of entities pairs that INLINEFORM6 emphasizes, INLINEFORM7 is only hyponymy of INLINEFORM8 . Considering both sides of KL divergence could help model yield more comprehensive consideration. We will talk about the advantage of this method in detail in sec:relationship. Calculating Similarity Just as introduced in sec:introduction, it is intractable to compute similarity exactly, as involving INLINEFORM0 computation. Hence, we consider the monte-carlo approximation: DISPLAYFORM0 where INLINEFORM0 is a list of entity pairs sampled from INLINEFORM1 . We use sequential sampling to gain INLINEFORM6 , which means we first sample INLINEFORM7 given INLINEFORM8 from INLINEFORM9 , and then sample INLINEFORM10 given INLINEFORM11 and INLINEFORM12 from INLINEFORM13 . Relationship with other metrics Previous work proposed various methods for representing relations as vectors BIBREF3 , BIBREF4 , as matrices BIBREF5 , even as angles BIBREF6 , etc. Based on each of these representations, one could easily define various similarity quantification methods. We show in tab:other-similarity the best one of them in each category of relation presentation. Here we provide two intuitive reasons for using our proposed probability-based similarity: (1) the capacity of a single fixed-size representation is limited — some details about the fact distribution is lost during embedding; (2) directly comparing distributions yields a better interpretability — you can not know about how two relations are different given two relation embeddings, but our model helps you study the detailed differences between probabilities on every entity pair. fig:head-tail-distribution provides an example. Although the two relations talk about the same topic, they have different meanings. TransE embeds them as vectors the closest to each other, while our model can capture the distinction between the distributions corresponds to the two relations, which could be directly noticed from the figure. Embeddings used in this graph are from a trained TransE model. Dataset Construction We show the statistics of the dataset we use in tab:statistics, and the construction procedures will be introduced in this section. Wikidata In Wikidata BIBREF8 , facts can be described as (Head item/property, Property, Tail item/property). 
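A minimal PyTorch sketch of the fact-distribution network and the Monte Carlo similarity described above is given below. The MLP depths and sizes, the sequential-sampling interface, and the exact form of the decreasing function g (taken here as exp(-max(., .))) are assumptions, since the corresponding equations are not reproduced in the extracted text.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FactDistribution(nn.Module):
    """Locally normalized p(h, t | r) = p(h | r) * p(t | h, r), each factor an MLP."""

    def __init__(self, n_entities, n_relations, dim=128, hidden=256):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.head_net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, n_entities))
        self.tail_net = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, n_entities))

    def log_prob(self, h, r, t):
        """log p(h, t | r) for batches of entity/relation indices."""
        r_e, h_e = self.rel(r), self.ent(h)
        log_p_h = F.log_softmax(self.head_net(r_e), dim=-1).gather(1, h[:, None]).squeeze(1)
        log_p_t = F.log_softmax(self.tail_net(torch.cat([h_e, r_e], dim=-1)), dim=-1).gather(1, t[:, None]).squeeze(1)
        return log_p_h + log_p_t

    def sample_pairs(self, r, n):
        """Sequential sampling: draw h from p(h | r), then t from p(t | h, r)."""
        with torch.no_grad():
            r_idx = torch.full((n,), int(r), dtype=torch.long)
            h = torch.multinomial(F.softmax(self.head_net(self.rel(r_idx)), dim=-1), 1).squeeze(1)
            tail_logits = self.tail_net(torch.cat([self.ent(h), self.rel(r_idx)], dim=-1))
            t = torch.multinomial(F.softmax(tail_logits, dim=-1), 1).squeeze(1)
        return h, t, r_idx

def relation_similarity(model, r1, r2, n_samples=256):
    """Monte Carlo similarity from both KL directions; assumed g(a, b) = exp(-max(a, b))."""
    def kl(ra, rb):
        h, t, ra_idx = model.sample_pairs(ra, n_samples)
        rb_idx = torch.full((n_samples,), int(rb), dtype=torch.long)
        return (model.log_prob(h, ra_idx, t) - model.log_prob(h, rb_idx, t)).mean()
    return torch.exp(-torch.max(kl(r1, r2), kl(r2, r1)))

# Training, as described above: minimize -model.log_prob(h, r, t).mean() over observed triples with Adam.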
To construct a dataset suitable for our task, we only consider the facts whose head entity and tail entity are both items. We first choose the most common 202 relations and 120000 entities from Wikidata as our initial data. Considering that the facts containing the two most frequently appearing relations (P2860: cites, and P31: instance of) occupy half of the initial data, we drop the two relations to downsize the dataset and make the dataset more balanced. Finally, we keep the triples whose head and tail both come from the selected 120000 entities as well as its relation comes from the remaining 200 relations. ReVerb Extractions ReVerb BIBREF9 is a program that automatically identifies and extracts binary relationships from English sentences. We use the extractions from running ReVerb on Wikipedia. We only keep the relations appear more than 10 times and their corresponding triples to construct our dataset. FB15K and TACRED FB15K BIBREF3 is a subset of freebase. TACRED BIBREF10 is a large supervised relation extraction dataset obtained via crowdsourcing. We directly use these two dataset, no extra processing steps were applied. Human Judgments Following BIBREF11 , BIBREF12 and the vast amount of previous work on semantic similarity, we ask nine undergraduate subjects to assess the similarity of 360 pairs of relations from a subset of Wikidata BIBREF8 that are chosen to cover from high to low levels of similarity. In our experiment, subjects were asked to rate an integer similarity score from 0 (no similarity) to 4 (perfectly the same) for each pair. The inter-subject correlation, estimated by leaving-one-out method BIBREF13 , is r = INLINEFORM0 , standard deviation = INLINEFORM1 . This important reference value (marked in fig:correlation) could be seen as the highest expected performance for machines BIBREF12 . To get baselines for comparison, we consider other possible methods to define similarity functions, as shown in tab:other-similarity. We compute the correlation between these methods and human judgment scores. As the models we have chosen are the ones work best in knowledge base completion, we do expect the similarity quantification approaches based on them could measure some degree of similarity. As shown in fig:correlation, the three baseline models could achieve moderate ( INLINEFORM0 ) positive correlation. On the other hand, our model shows a stronger correlation ( INLINEFORM1 ) with human judgment, indicating that considering the probability over whole entity pair space helps to gain a similarity closer to human judgments. These results provide evidence for our claim raised in sec:defining-similarity. Redundant Relation Removal Open IE extracts concise token patterns from plain text to represent various relations between entities, e.g.,, (Mark Twain, was born in, Florida). As Open IE is significant for constructing KBs, many effective extractors have been proposed to extract triples, such as Text-Runner BIBREF14 , ReVerb BIBREF9 , and Standford Open IE BIBREF15 . However, these extractors only yield relation patterns between entities, without aggregating and clustering their results. Accordingly, there are a fair amount of redundant relation patterns after extracting those relation patterns. Furthermore, the redundant patterns lead to some redundant relations in KBs. 
Recently, some efforts are devoted to Open Relation Extraction (Open RE) BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , aiming to cluster relation patterns into several relation types instead of redundant relation patterns. Whenas, these Open RE methods adopt distantly supervised labels as golden relation types, suffering from both false positive and false negative problems on the one hand. On the other hand, these methods still rely on the conventional similarity metrics mentioned above. In this section, we will show that our defined similarity quantification could help Open IE by identifying redundant relations. To be specific, we set a toy experiment to remove redundant relations in KBs for a preliminary comparison (sec:toy-experiment). Then, we evaluate our model and baselines on the real-world dataset extracted by Open IE methods (sec:real-experiment). Considering the existing evaluation metric for Open IE and Open RE rely on either labor-intensive annotations or distantly supervised annotations, we propose a metric approximating recall and precision evaluation based on operable human annotations for balancing both efficiency and accuracy. Toy Experiment In this subsection, we propose a toy environment to verify our similarity-based method. Specifically, we construct a dataset from Wikidata and implement Chinese restaurant process to split every relation in the dataset into several sub-relations. Then, we filter out those sub-relations appearing less than 50 times to eventually get 1165 relations. All these split relations are regarded as different ones during training, and then different relation similarity metrics are adopted to merge those sub-relations into one relation. As Figure FIGREF26 shown that the matrices-based approach is less effective than other approaches, we leave this approach out of this experiment. The results are shown in Table TABREF37 . Real World Experiment In this subsection, we evaluate various relation similarity metrics on the real-world Open IE patterns. The dataset are constructed by ReVerb. Different patterns will be regarded as different relations during training, and we also adopt various relation similarity metrics to merge similar relation patterns. Because it is nearly impossible to annotate all pattern pairs for their merging or not, meanwhile it is also inappropriate to take distantly supervised annotations as golden results. Hence, we propose a novel metric approximating recall and precision evaluation based on minimal human annotations for evaluation in this experiment. Recall is defined as the yielding fraction of true positive instances over the total amount of real positive instances. However, we do not have annotations about which pairs of relations are synonymous. Crowdsourcing is a method to obtain a large number of high-quality annotations. Nevertheless, applying crowdsourcing is not trivial in our settings, because it is intractable to enumerate all synonymous pairs in the large space of relation (pattern) pairs INLINEFORM0 in Open IE. A promising method is to use rejection sampling by uniform sampling from the whole space, and only keep the synonymous ones judged by crowdworkers. However, this is not practical either, as the synonymous pairs are sparse in the whole space, resulting in low efficiency. Fortunately, we could use normalized importance sampling as an alternative to get an unbiased estimation of recall. 
Theorem 1 Suppose every sample INLINEFORM0 has a label INLINEFORM1 , and the model to be evaluated also gives its prediction INLINEFORM2 . The recall can be written as DISPLAYFORM0 where INLINEFORM0 is the uniform distribution over all samples with INLINEFORM1 . If we have a proposal distribution INLINEFORM2 satisfying INLINEFORM3 , we get an unbiased estimation of recall: DISPLAYFORM0 where INLINEFORM0 is a normalized version of INLINEFORM1 , INLINEFORM2 is the unnormalized version of q, and INLINEFORM3 are i.i.d. samples drawn from INLINEFORM4 . Similar to eq:recall-expectation, we can write the expectation form of precision: DISPLAYFORM0 where INLINEFORM0 is the uniform distribution over all samples with INLINEFORM1 . As these samples can be found by running the models, we can simply approximate precision by Monte Carlo sampling: DISPLAYFORM0 where INLINEFORM0 . In our setting, INLINEFORM0 , INLINEFORM1 means INLINEFORM2 and INLINEFORM3 are the same relations, and INLINEFORM4 means INLINEFORM5 is larger than a threshold INLINEFORM6 . The results on the ReVerb Extractions dataset that we constructed are described in fig:precision-recall-openie. To approximate recall, we use the similarity scores as the proposal distribution INLINEFORM0 . 500 relation pairs are then drawn from INLINEFORM1 . To approximate precision, we set thresholds at equal intervals. At each threshold, we uniformly sample 50 to 100 relation pairs whose similarity score given by the model is larger than the threshold. We ask 15 undergraduates to judge whether two relations in a relation pair have the same meaning. A relation pair is viewed as valid only if 8 of the annotators annotate it as valid. We use the annotations to approximate recall and precision with eq:recall and eq:precision. Apart from the confidence interval of precision shown in the figure, the largest INLINEFORM2 confidence interval among thresholds for recall is INLINEFORM3 . From the results, we can see that our model's similarity performs much better than that of the other models, by a very large margin. Error Analysis for Relational Classification In this section, we consider two kinds of relational classification tasks: (1) relation prediction and (2) relation extraction. Relation prediction aims at predicting the relationship between entities with a given set of triples as training data, while relation extraction aims at extracting the relationship between two entities in a sentence. Relation Prediction We hope to design a simple and clear experiment setup to conduct error analysis for relational prediction. Therefore, we consider a typical method, TransE BIBREF3 , as the subject, and FB15K BIBREF3 as the dataset. TransE embeds entities and relations as vectors, and trains these embeddings by minimizing DISPLAYFORM0 where INLINEFORM0 is the set of training triples, INLINEFORM1 is the distance function, INLINEFORM2 is a negative sample with one element different from INLINEFORM4 uniformly sampled from INLINEFORM5 , and INLINEFORM6 is the margin. During testing, for each entity pair INLINEFORM0 , TransE ranks relations according to INLINEFORM1 . For each INLINEFORM2 in the test set, we call the relations with higher rank scores than INLINEFORM3 distracting relations. We then compare the similarity between the gold relation and the distracting relations. Note that some entity pairs could correspond to more than one relation, in which case we simply do not regard them as distracting relations.
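A minimal sketch of how such distracting relations could be collected for a single test pair is given below, assuming trained TransE embeddings and an L2 distance function; the helper names are our own, and, as in the paper, other relations that genuinely hold for the pair are excluded.

```python
import numpy as np

def distracting_relations(h_vec, t_vec, rel_embs, gold_rel, valid_rels):
    """Relations that TransE ranks above the gold relation for the pair (h, t).

    rel_embs   : dict mapping relation id -> embedding vector
    valid_rels : all relations that actually hold for this entity pair (not counted as distracting)
    """
    # TransE scores a triple by the distance ||h + r - t||: smaller means a higher rank.
    dist = {r: np.linalg.norm(h_vec + emb - t_vec) for r, emb in rel_embs.items()}
    gold_dist = dist[gold_rel]
    return [r for r, d in dist.items()
            if d < gold_dist and r != gold_rel and r not in valid_rels]
```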
Relation Extraction For relation extraction, we consider the supervised relation extraction setting and the TACRED dataset BIBREF10 . As for the subject model, we use the best model on the TACRED dataset — the position-aware neural sequence model. This method first passes the sentence into an LSTM and then calculates an attention sum of the hidden states in the LSTM by taking positional features into account. This simple and effective method achieves the best performance on the TACRED dataset. Results fig:averank shows the distribution of similarity ranks of distracting relations of the above-mentioned models' outputs on both the relation prediction and relation extraction tasks. From fig:averankrp,fig:averankre, we can observe that the most distracting relations are the most similar ones, which corroborates our hypothesis that even the best models on these tasks still make mistakes among the most similar relations. This result also highlights the importance of a heuristic method for guiding models to pay more attention to the boundary between similar relations. We also try to do the negative sampling with relation type constraints, but we see no improvement compared with uniform sampling. The details of negative sampling with relation type constraints are presented in sec:relation-type-constraints. Similarity and Negative Sampling Based on the observation presented in sec:erroranalysisresult, we find that similar relations are often confusing for relation prediction models. Therefore, corrupted triples with similar relations can be used as high-quality negative samples. For a given valid triple INLINEFORM0 , we corrupt the triple by substituting INLINEFORM1 with INLINEFORM2 with the probability DISPLAYFORM0 where INLINEFORM0 is the temperature of the exponential function; the bigger INLINEFORM1 is, the flatter the probability distribution is. When the temperature approaches infinity, the sampling process reduces to uniform sampling. In training, we set the initial temperature to a high level and gradually reduce the temperature. Intuitively, this enables the model to distinguish among those obviously different relations in the early stage and gives more and more confusing negative triples as training proceeds, helping the model distinguish the similar relations. This can also be viewed as a process of curriculum learning BIBREF21 : the data fed to the model gradually changes from simple negative triples to hard ones. We perform the relation prediction task on FB15K with TransE. Following BIBREF3 , we use the "Filtered" setting protocol, i.e., filtering out the corrupted triples that appear in the dataset. Our sampling method is shown to improve the model's performance, especially on Hit@1 (fig:relationprediction). Training details are described in sec:trainingdetail. Similarity and Softmax-Margin Loss Similar to sec:training-guidance-relation-prediction, we find that relation extraction models often make wrong predictions on similar relations. In this section, we use similarity as an adaptive margin in the softmax-margin loss to improve the performance of relation extraction models. As shown in BIBREF22 , the softmax-margin loss can be expressed as DISPLAYFORM0 where INLINEFORM0 denotes a structured output space for INLINEFORM1 , and INLINEFORM2 is the INLINEFORM3 example in the training data. We can easily incorporate similarity into the cost function INLINEFORM0 . In this task, we define the cost function as INLINEFORM1 , where INLINEFORM2 is a hyperparameter.
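A minimal sketch of this loss is shown below, assuming the cost is the hyperparameter times a precomputed relation-similarity matrix; the exact form used in the paper is not reproduced here, and tensor shapes and names are illustrative.

```python
import torch

def softmax_margin_loss(logits, gold, sim_matrix, alpha=1.0):
    """Softmax-margin loss with a similarity-based cost (sketch).

    logits     : (batch, num_relations) scores from the relation extraction model
    gold       : (batch,) gold relation indices
    sim_matrix : (num_relations, num_relations) precomputed relation similarities
    alpha      : hyperparameter scaling the cost, cost(r_gold, r) = alpha * sim(r_gold, r)
    """
    cost = alpha * sim_matrix[gold]              # larger cost (margin) for relations similar to the gold one
    augmented = logits + cost                    # the cost is added inside the log-sum-exp term
    log_z = torch.logsumexp(augmented, dim=-1)
    gold_scores = logits.gather(1, gold.unsqueeze(1)).squeeze(1)
    return (log_z - gold_scores).mean()
```

In practice, one might also zero the diagonal of sim_matrix so that the gold relation itself incurs no extra margin.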
Intuitively, we give a larger margin between similar relations, forcing the model to distinguish among them, and thus making the model perform better. We apply our method to the Position-aware Attention LSTM (PA-LSTM) BIBREF10 , and tab:relationextraction shows that our method improves the performance of PA-LSTM. Training details are described in sec:trainingdetail. Related Works In many early works in psychology and linguistics, especially those exploring semantic similarity BIBREF11 , BIBREF12 , researchers have empirically found that there are various categorizations of semantic relations among words and contexts. To promote research on these different semantic relations, bejar1991cognitive explicitly defined these relations, and miller1995wordnet further systematically organized rich semantic relations between words via a database. To identify the correlation and distinction between different semantic relations so as to support learning semantic similarity, various methods have attempted to measure relational similarity BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 . With the ongoing development of information extraction and the effective construction of KBs BIBREF0 , BIBREF1 , BIBREF30 , relations are further defined as various types of latent connections between objects, beyond semantic relations. These general relations play a core role in expressing relational facts in the real world. Hence, various methods have accordingly been proposed for discovering more relations and their facts, including open information extraction BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 , relation extraction BIBREF38 , BIBREF39 , BIBREF40 , BIBREF41 , BIBREF42 , BIBREF43 , and relation prediction BIBREF3 , BIBREF44 , BIBREF45 , BIBREF46 , BIBREF47 . For both semantic relations and general relations, identifying them is a crucial problem, requiring systems to provide a fine-grained relation similarity metric. However, the existing methods suffer from sparse data, which makes it difficult to achieve an effective and stable similarity metric. Motivated by this, we propose to measure relation similarity by leveraging their fact distributions so that we can identify nuances between similar relations and merge those distant surface forms of the same relations, benefiting the tasks mentioned above. Conclusion and Future Work In this paper, we introduce an effective method to quantify relation similarity and provide analysis and a survey of applications. We note that there is a wide range of future directions: (1) human prior knowledge could be incorporated into the similarity quantification; (2) similarity between relations could also be considered in multi-modal settings, e.g., extracting relations from images, videos, or even from audio; (3) by analyzing the distributions corresponding to different relations, one can also find some “meta-relations” between relations, such as hypernymy and hyponymy. Acknowledgements This work is supported by the National Natural Science Foundation of China (NSFC No. 61572273, 61532010) and the National Key Research and Development Program of China (No. 2018YFB1004503). Chen and Zhu are supported by the Tsinghua University Initiative Scientific Research Program, and Chen is also supported by the DCST Student Academic Training Program. Han is also supported by the 2018 Tencent Rhino-Bird Elite Training Program.
Proofs to theorems in the paper If we have a proposal distribution INLINEFORM0 satisfying INLINEFORM1 , then eq:proofrecallfirstpart can be further written as DISPLAYFORM0 Sometimes it is hard to compute the normalized probability INLINEFORM0 . To tackle this problem, consider self-normalized importance sampling as an unbiased estimation BIBREF50 , DISPLAYFORM0 where INLINEFORM0 is the normalized version of INLINEFORM1 . Chinese Restaurant Process Specifically, for a relation INLINEFORM0 with currently INLINEFORM1 sub-relations, we turn it into a new sub-relation with probability DISPLAYFORM0 or into the INLINEFORM0 existing sub-relation with probability DISPLAYFORM0 where INLINEFORM0 is the size of the INLINEFORM1 existing sub-relation, INLINEFORM2 is the sum of the sizes of all sub-relations of INLINEFORM3 , and INLINEFORM4 is a hyperparameter, for which we use INLINEFORM5 . Training Details On the Wikidata and ReVerb Extractions datasets, we manually split out a validation set, ensuring that every entity and relation appearing in the validation set also appears in the training set. While minimizing the loss on the training set, we observe the loss on the validation set and stop training when the validation loss stops decreasing. Before training our model on any dataset, we use the entity embeddings and relation embeddings produced by TransE on the dataset as the pretrained embeddings for our model. Training Details on Negative Sampling The sampling is launched with an initial temperature of 8192. The temperature drops to half every 200 epochs and remains stable once it hits 16. Optimization is performed using SGD, with a learning rate of 1e-3. Training Details on Softmax-Margin Loss The sampling is launched with an initial temperature of 64. The temperature drops by 20% per epoch and remains stable once it hits 16. The alpha we use is 9. Optimization is performed using SGD, with a learning rate of 1. Recall Standard Deviation As is shown in fig:recallstd, the max recall standard deviation for our model is 0.4, and 0.11 for TransE. Negative Sampling with Relation Type Constraints In FB15K, if two relations have the same prefix, we regard them as belonging to the same type; e.g., both /film/film/starring./film/performance/actor and /film/actor/film./film/performance/film have the prefix film, so they belong to the same type. Similar to what is mentioned in sec:training-guidance-relation-prediction, we expect the model first to learn to distinguish among obviously different relations, and gradually learn to distinguish similar relations. Therefore, we conduct negative sampling with relation type constraints in two ways. Add Up Two Uniform Distributions For each triple INLINEFORM0 , we have two uniform distributions INLINEFORM1 and INLINEFORM2 . INLINEFORM3 is the uniform distribution over all the relations except for those that appear with INLINEFORM4 in the knowledge base, and INLINEFORM5 is the uniform distribution over the relations of the same type as INLINEFORM6 . When corrupting the triple, we sample INLINEFORM7 from the distribution: DISPLAYFORM0 where INLINEFORM0 is a hyperparameter. We set INLINEFORM1 to 1 at the beginning of training, and every INLINEFORM2 epochs, INLINEFORM3 is multiplied by a decrease rate INLINEFORM4 . We do a grid search for INLINEFORM5 and INLINEFORM6 , but no improvement is observed.
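As a hedged sketch of the scheme just described, we assume the corrupting relation is drawn from a mixture alpha * q1 + (1 - alpha) * q2 of the two uniform distributions (the exact formula above is not reproduced), with alpha annealed from 1 toward 0 during training; all names below are illustrative.

```python
import random

def sample_negative_relation(all_rels, observed_rels, same_type_rels, alpha):
    """Draw a corrupting relation r' from the mixture alpha * q1 + (1 - alpha) * q2 (sketch).

    q1: uniform over relations that do not appear with this entity pair in the knowledge base
    q2: uniform over relations of the same type as the gold relation
    """
    q1_support = [r for r in all_rels if r not in observed_rels]
    q2_support = list(same_type_rels)
    pool = q1_support if random.random() < alpha else q2_support
    return random.choice(pool)

def anneal(alpha, decrease_rate):
    """Apply the per-interval decay described above (called every fixed number of epochs)."""
    return alpha * decrease_rate
```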
Add Weight We speculate that the unsatisfactory result produced by adding up two uniform distributions arises because, for those types with few relations in them, a small change of INLINEFORM0 results in a significant change in INLINEFORM1 . Therefore, when sampling a negative INLINEFORM2 , we add weights to relations that are of the same type as INLINEFORM3 instead. Concretely, we substitute INLINEFORM4 with INLINEFORM5 with probability INLINEFORM6 , which can be calculated as: DISPLAYFORM0 where INLINEFORM0 denotes all the relations that are of the same type as INLINEFORM1 , INLINEFORM2 is a hyperparameter, and INLINEFORM3 is a normalizing constant. We set INLINEFORM4 to 0 at the beginning of training, and every INLINEFORM5 epochs, INLINEFORM6 increases by INLINEFORM7 . We do a grid search for INLINEFORM8 and INLINEFORM9 , but still no improvement is observed. Wikidata annotation guidance We show the guidance provided for the annotators here.
For relation prediction they test TransE, and for relation extraction they test the position-aware neural sequence model
29571867fe00346418b1ec36c3b7685f035e22ce
29571867fe00346418b1ec36c3b7685f035e22ce_0
Q: Which tasks do they apply their method to?
relation prediction, relation extraction, Open IE
1a678d081f97531d54b7122254301c20b3531198
1a678d081f97531d54b7122254301c20b3531198_0
Q: Which knowledge bases do they use?
(sec:openie) (3) To which extent, quantitatively, does best relational classification models make errors among similar relations? (sec:error-analysis) (4) Could similarity be used in a heuristic method to enhance negative sampling for relation prediction? (sec:training-guidance-relation-prediction) (5) Could similarity be used as an adaptive margin in softmax-margin training method for relation extraction? (sec:training-guidance-relation-extraction) Finally, we conclude with a discussion of valid extensions to our method and other possible applications. Learning Head-Tail Distribution Just as introduced in sec:introduction, we quantify the similarity between relations by their corresponding head-tail entity pair distributions. Consider the typical case that we have got numbers of facts, but they are still sparse among all facts in the real world. How could we obtain a well-generalized distribution over the whole space of possible triples beyond the training facts? This section proposes a method to parameterize such a distribution. Formal Definition of Fact Distribution A fact is a triple INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 are called head and tail entities, INLINEFORM3 is the relation connecting them, INLINEFORM4 and INLINEFORM5 are the sets of entities and relations respectively. We consider a score function INLINEFORM6 maps all triples to a scalar value. As a special case, the function can be factorized into the sum of two parts: INLINEFORM7 . We use INLINEFORM8 to define the unnormalized probability. DISPLAYFORM0 for every triple INLINEFORM0 . The real parameter INLINEFORM1 can be adjusted to obtain difference distributions over facts. In this paper, we only consider locally normalized version of INLINEFORM0 : DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are directly parameterized by feed-forward neural networks. Through local normalization, INLINEFORM2 is naturally a valid probability distribution, as the partition function INLINEFORM3 . Therefore, INLINEFORM4 . Neural architecture design Here we introduce our special design of neural networks. For the first part and the second part, we implement the scoring functions introduced in eq:local-normalization as DISPLAYFORM0 where each INLINEFORM0 represents a multi-layer perceptron composed of layers like INLINEFORM1 , INLINEFORM2 , INLINEFORM3 are embeddings of INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 includes weights and biases in all layers. Training Now we discuss the method to perform training. In this paper, we consider joint training. By minimizing the loss function, we compute the model parameters INLINEFORM0 : DISPLAYFORM0 where INLINEFORM0 is a set of triples. The whole set of parameters, INLINEFORM1 . We train these parameters by Adam optimizer BIBREF2 . Training details are shown in sec:trainingdetail. Quantifying Similarity So far, we have talked about how to use neural networks to approximate the natural distribution of facts. The center topic of our paper, quantifying similarity, will be discussed in detail in this section. Relations as Distributions In this paper, we provide a probability view of relations by representing relation INLINEFORM0 as a probability distribution INLINEFORM1 . After training the neural network on a given set of triples, the model is expected to generalize well on the whole INLINEFORM2 space. Note that it is very easy to calculate INLINEFORM0 in our model thanks to local normalization (eq:local-normalization). 
Therefore, we can compute it by DISPLAYFORM0 Defining Similarity As the basis of our definition, we hypothesize that the similarity between INLINEFORM0 reflects the similarity between relations. For example, if the conditional distributions of two relations put mass on similar entity pairs, the two relations should be quite similar. If they emphasize different ones, the two should have some differences in meaning. Formally, we define the similarity between two relations as a function of the divergence between the distributions of corresponding head-tail entity pairs: DISPLAYFORM0 where INLINEFORM0 denotes Kullback–Leibler divergence, DISPLAYFORM0 vice versa, and function INLINEFORM0 is a symmetrical function. To keep the coherence between semantic meaning of “similarity” and our definition, INLINEFORM1 should be a monotonically decreasing function. Through this paper, we choose to use an exponential family composed with max function, i.e., INLINEFORM2 . Note that by taking both sides of KL divergence into account, our definition incorporates both the entity pairs with high probability in INLINEFORM3 and INLINEFORM4 . Intuitively, if INLINEFORM5 mainly distributes on a proportion of entities pairs that INLINEFORM6 emphasizes, INLINEFORM7 is only hyponymy of INLINEFORM8 . Considering both sides of KL divergence could help model yield more comprehensive consideration. We will talk about the advantage of this method in detail in sec:relationship. Calculating Similarity Just as introduced in sec:introduction, it is intractable to compute similarity exactly, as involving INLINEFORM0 computation. Hence, we consider the monte-carlo approximation: DISPLAYFORM0 where INLINEFORM0 is a list of entity pairs sampled from INLINEFORM1 . We use sequential sampling to gain INLINEFORM6 , which means we first sample INLINEFORM7 given INLINEFORM8 from INLINEFORM9 , and then sample INLINEFORM10 given INLINEFORM11 and INLINEFORM12 from INLINEFORM13 . Relationship with other metrics Previous work proposed various methods for representing relations as vectors BIBREF3 , BIBREF4 , as matrices BIBREF5 , even as angles BIBREF6 , etc. Based on each of these representations, one could easily define various similarity quantification methods. We show in tab:other-similarity the best one of them in each category of relation presentation. Here we provide two intuitive reasons for using our proposed probability-based similarity: (1) the capacity of a single fixed-size representation is limited — some details about the fact distribution is lost during embedding; (2) directly comparing distributions yields a better interpretability — you can not know about how two relations are different given two relation embeddings, but our model helps you study the detailed differences between probabilities on every entity pair. fig:head-tail-distribution provides an example. Although the two relations talk about the same topic, they have different meanings. TransE embeds them as vectors the closest to each other, while our model can capture the distinction between the distributions corresponds to the two relations, which could be directly noticed from the figure. Embeddings used in this graph are from a trained TransE model. Dataset Construction We show the statistics of the dataset we use in tab:statistics, and the construction procedures will be introduced in this section. Wikidata In Wikidata BIBREF8 , facts can be described as (Head item/property, Property, Tail item/property). 
To construct a dataset suitable for our task, we only consider the facts whose head entity and tail entity are both items. We first choose the most common 202 relations and 120000 entities from Wikidata as our initial data. Considering that the facts containing the two most frequently appearing relations (P2860: cites, and P31: instance of) occupy half of the initial data, we drop these two relations to downsize the dataset and make it more balanced. Finally, we keep the triples whose head and tail both come from the selected 120000 entities and whose relation comes from the remaining 200 relations. ReVerb Extractions ReVerb BIBREF9 is a program that automatically identifies and extracts binary relationships from English sentences. We use the extractions from running ReVerb on Wikipedia. We only keep the relations that appear more than 10 times and their corresponding triples to construct our dataset. FB15K and TACRED FB15K BIBREF3 is a subset of Freebase. TACRED BIBREF10 is a large supervised relation extraction dataset obtained via crowdsourcing. We use these two datasets directly; no extra processing steps were applied. Human Judgments Following BIBREF11, BIBREF12 and the vast amount of previous work on semantic similarity, we ask nine undergraduate subjects to assess the similarity of 360 pairs of relations from a subset of Wikidata BIBREF8 that are chosen to cover levels of similarity from high to low. In our experiment, subjects were asked to rate an integer similarity score from 0 (no similarity) to 4 (perfectly the same) for each pair. The inter-subject correlation, estimated by the leave-one-out method BIBREF13, is r = INLINEFORM0, standard deviation = INLINEFORM1. This important reference value (marked in fig:correlation) can be seen as the highest expected performance for machines BIBREF12. To get baselines for comparison, we consider other possible methods to define similarity functions, as shown in tab:other-similarity. We compute the correlation between these methods and human judgment scores. As the models we have chosen are the ones that work best in knowledge base completion, we expect the similarity quantification approaches based on them to measure some degree of similarity. As shown in fig:correlation, the three baseline models achieve moderate (INLINEFORM0) positive correlation. On the other hand, our model shows a stronger correlation (INLINEFORM1) with human judgment, indicating that considering the probability over the whole entity pair space helps to obtain a similarity closer to human judgments. These results provide evidence for our claim raised in sec:defining-similarity. Redundant Relation Removal Open IE extracts concise token patterns from plain text to represent various relations between entities, e.g., (Mark Twain, was born in, Florida). As Open IE is significant for constructing KBs, many effective extractors have been proposed to extract triples, such as TextRunner BIBREF14, ReVerb BIBREF9, and Stanford Open IE BIBREF15. However, these extractors only yield relation patterns between entities, without aggregating and clustering their results. Accordingly, a fair amount of redundant relation patterns remain after extraction. Furthermore, the redundant patterns lead to some redundant relations in KBs. 
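Before the redundancy-removal experiments that follow, a minimal sketch may help make the similarity computation defined earlier concrete. The sketch below estimates the two directions of KL divergence by Monte-Carlo sampling and combines them with one possible instance of the decreasing function, namely exp(-max(., .)); the names log_prob and sample_pair stand for the trained fact-distribution model and are assumptions for illustration, not the authors' code.

import math
from typing import Callable, Tuple

def mc_kl(r1: str, r2: str,
          log_prob: Callable[[str, str, str], float],
          sample_pair: Callable[[str], Tuple[str, str]],
          n_samples: int = 1024) -> float:
    # Monte-Carlo estimate of KL(P(.|r1) || P(.|r2)) over head-tail pairs,
    # using sequential sampling from the model: h given r1, then t given (h, r1).
    total = 0.0
    for _ in range(n_samples):
        h, t = sample_pair(r1)                        # (h, t) ~ P(. | r1)
        total += log_prob(h, t, r1) - log_prob(h, t, r2)
    return total / n_samples

def relation_similarity(r1, r2, log_prob, sample_pair, n_samples=1024):
    # Symmetric similarity: exp(-max(KL(r1||r2), KL(r2||r1))),
    # one concrete choice of the monotonically decreasing function g.
    kl_12 = mc_kl(r1, r2, log_prob, sample_pair, n_samples)
    kl_21 = mc_kl(r2, r1, log_prob, sample_pair, n_samples)
    return math.exp(-max(kl_12, kl_21))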
Recently, some efforts have been devoted to Open Relation Extraction (Open RE) BIBREF16, BIBREF17, BIBREF18, BIBREF19, aiming to cluster relation patterns into several relation types instead of keeping redundant relation patterns. However, these Open RE methods adopt distantly supervised labels as golden relation types, suffering from both false positive and false negative problems on the one hand; on the other hand, they still rely on the conventional similarity metrics mentioned above. In this section, we will show that our defined similarity quantification can help Open IE by identifying redundant relations. To be specific, we set up a toy experiment to remove redundant relations in KBs for a preliminary comparison (sec:toy-experiment). Then, we evaluate our model and baselines on the real-world dataset extracted by Open IE methods (sec:real-experiment). Considering that the existing evaluation metrics for Open IE and Open RE rely on either labor-intensive annotations or distantly supervised annotations, we propose a metric approximating recall and precision based on operable human annotations, balancing efficiency and accuracy. Toy Experiment In this subsection, we propose a toy environment to verify our similarity-based method. Specifically, we construct a dataset from Wikidata and implement the Chinese restaurant process to split every relation in the dataset into several sub-relations. Then, we filter out those sub-relations appearing fewer than 50 times, eventually getting 1165 relations. All these split relations are regarded as different ones during training, and then different relation similarity metrics are adopted to merge those sub-relations back into one relation. As Figure FIGREF26 shows that the matrix-based approach is less effective than the other approaches, we leave it out of this experiment. The results are shown in Table TABREF37. Real World Experiment In this subsection, we evaluate various relation similarity metrics on real-world Open IE patterns. The dataset is constructed by ReVerb. Different patterns are regarded as different relations during training, and we again adopt various relation similarity metrics to merge similar relation patterns. Because it is nearly impossible to annotate all pattern pairs as mergeable or not, and it is also inappropriate to take distantly supervised annotations as golden results, we propose a novel metric approximating recall and precision based on minimal human annotations for evaluation in this experiment. Recall is defined as the fraction of true positive instances over the total number of real positive instances. However, we do not have annotations about which pairs of relations are synonymous. Crowdsourcing is a method to obtain a large number of high-quality annotations. Nevertheless, applying crowdsourcing is not trivial in our setting, because it is intractable to enumerate all synonymous pairs in the large space of relation (pattern) pairs INLINEFORM0 in Open IE. A promising method is to use rejection sampling, uniformly sampling from the whole space and only keeping the synonymous pairs judged by crowdworkers. However, this is not practical either, as the synonymous pairs are sparse in the whole space, resulting in low efficiency. Fortunately, we can use normalized importance sampling as an alternative to get an unbiased estimate of recall. 
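The following sketch illustrates the self-normalized importance-sampling estimate of recall that is formalized as Theorem 1 below; the variable names are hypothetical, and the target distribution is taken to be uniform over the truly synonymous pairs, so its unnormalized density is simply the annotated label.

def estimate_recall(labels, preds, proposal_scores):
    # labels[i]          : 1 if annotators judged pair i synonymous, else 0
    # preds[i]           : 1 if the model merges pair i (similarity above threshold)
    # proposal_scores[i] : unnormalized proposal density q~(x_i) under which
    #                      pair i was sampled (e.g. the model similarity score)
    num, den = 0.0, 0.0
    for l, f, q in zip(labels, preds, proposal_scores):
        w = l / q            # importance weight p~(x)/q~(x), with p~(x) = l(x)
        num += w * f
        den += w
    return num / den if den > 0 else 0.0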
Theorem 1 Suppose every sample INLINEFORM0 has a label INLINEFORM1, and the model to be evaluated also gives its prediction INLINEFORM2. The recall can be written as DISPLAYFORM0 where INLINEFORM0 is the uniform distribution over all samples with INLINEFORM1. If we have a proposal distribution INLINEFORM2 satisfying INLINEFORM3, we get an unbiased estimation of recall: DISPLAYFORM0 where INLINEFORM0 is a normalized version of INLINEFORM1, INLINEFORM2 is the unnormalized version of q, and INLINEFORM3 are drawn i.i.d. from INLINEFORM4. Similar to eq:recall-expectation, we can write the expectation form of precision: DISPLAYFORM0 where INLINEFORM0 is the uniform distribution over all samples with INLINEFORM1. As these samples can be found by running the models, we can simply approximate precision by Monte Carlo sampling: DISPLAYFORM0 where INLINEFORM0. In our setting, INLINEFORM0, INLINEFORM1 means that INLINEFORM2 and INLINEFORM3 are the same relation, and INLINEFORM4 means that INLINEFORM5 is larger than a threshold INLINEFORM6. The results on the ReVerb Extractions dataset that we constructed are described in fig:precision-recall-openie. To approximate recall, we use the similarity scores as the proposal distribution INLINEFORM0. 500 relation pairs are then drawn from INLINEFORM1. To approximate precision, we set thresholds at equal intervals. At each threshold, we uniformly sample 50 to 100 relation pairs whose similarity score given by the model is larger than the threshold. We ask 15 undergraduates to judge whether the two relations in a relation pair have the same meaning. A relation pair is viewed as valid only if 8 of the annotators annotate it as valid. We use the annotations to approximate recall and precision with eq:recall and eq:precision. Apart from the confidence interval of precision shown in the figure, the largest INLINEFORM2 confidence interval among thresholds for recall is INLINEFORM3. From the results, we can see that our model's similarity performs much better than the other models' by a very large margin. Error Analysis for Relational Classification In this section, we consider two kinds of relational classification tasks: (1) relation prediction and (2) relation extraction. Relation prediction aims at predicting the relationship between entities with a given set of triples as training data, while relation extraction aims at extracting the relationship between two entities in a sentence. Relation Prediction We hope to design a simple and clear experimental setup to conduct error analysis for relation prediction. Therefore, we consider a typical method, TransE BIBREF3, as the subject, and FB15K BIBREF3 as the dataset. TransE embeds entities and relations as vectors, and trains these embeddings by minimizing DISPLAYFORM0 where INLINEFORM0 is the set of training triples, INLINEFORM1 is the distance function, INLINEFORM2 is a negative sample with one element different from INLINEFORM4 uniformly sampled from INLINEFORM5, and INLINEFORM6 is the margin. During testing, for each entity pair INLINEFORM0, TransE ranks relations according to INLINEFORM1. For each INLINEFORM2 in the test set, we call the relations with higher rank scores than INLINEFORM3 distracting relations. We then compare the similarity between the golden relation and the distracting relations. Note that some entity pairs could correspond to more than one relation, in which case we simply do not treat those relations as distracting relations. 
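A small sketch of how distracting relations could be collected from a TransE-style scorer is given below; the L2 distance and the variable names are illustrative assumptions, and any score function that ranks relations for a fixed entity pair could be substituted.

import numpy as np

def distracting_relations(h_vec, t_vec, rel_vecs, gold, known_rels):
    # h_vec, t_vec : embeddings of the head and tail entity (numpy arrays)
    # rel_vecs     : dict mapping each relation to its embedding
    # gold         : the annotated relation of this test triple
    # known_rels   : other relations known to hold for (h, t); they are
    #                filtered out rather than counted as distracting
    dist = {r: float(np.linalg.norm(h_vec + v - t_vec)) for r, v in rel_vecs.items()}
    gold_dist = dist[gold]
    return [r for r, d in dist.items()
            if d < gold_dist and r != gold and r not in known_rels]

The similarity rank of each returned relation with respect to the golden relation can then be aggregated into the distributions reported in the results below.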
Relation Extraction For relation extraction, we consider the supervised relation extraction setting and the TACRED dataset BIBREF10. As the subject model, we use the best model on the TACRED dataset — the position-aware neural sequence model. This method first passes the sentence through an LSTM and then calculates an attention-weighted sum of the LSTM hidden states by taking positional features into account. This simple and effective method achieves the best results on the TACRED dataset. Results fig:averank shows the distribution of similarity ranks of distracting relations in the above-mentioned models' outputs on both the relation prediction and relation extraction tasks. From fig:averankrp,fig:averankre, we can observe that the most distracting relations are the most similar ones, which corroborates our hypothesis that even the best models on these tasks still make mistakes among the most similar relations. This result also highlights the importance of a heuristic method for guiding models to pay more attention to the boundary between similar relations. We also try negative sampling with relation type constraints, but see no improvement compared with uniform sampling. The details of negative sampling with relation type constraints are presented in sec:relation-type-constraints. Similarity and Negative Sampling Based on the observation presented in sec:erroranalysisresult, we find that similar relations are often confusing for relation prediction models. Therefore, corrupted triples with similar relations can be used as high-quality negative samples. For a given valid triple INLINEFORM0, we corrupt the triple by substituting INLINEFORM1 with INLINEFORM2 with the probability DISPLAYFORM0 where INLINEFORM0 is the temperature of the exponential function; the bigger INLINEFORM1 is, the flatter the probability distribution is. When the temperature approaches infinity, the sampling process reduces to uniform sampling. In training, we set the initial temperature to a high level and gradually reduce it. Intuitively, this enables the model to distinguish among obviously different relations in the early stage and provides more and more confusing negative triples as training proceeds, to help the model distinguish the similar relations. This can also be viewed as a process of curriculum learning BIBREF21: the data fed to the model gradually changes from simple negative triples to hard ones. We perform the relation prediction task on FB15K with TransE. Following BIBREF3, we use the "Filtered" setting protocol, i.e., filtering out the corrupted triples that appear in the dataset. Our sampling method is shown to improve the model's performance, especially on Hit@1 (fig:relationprediction). Training details are described in sec:trainingdetail. Similarity and Softmax-Margin Loss Similar to sec:training-guidance-relation-prediction, we find that relation extraction models often make wrong predictions on similar relations. In this section, we use similarity as an adaptive margin in the softmax-margin loss to improve the performance of relation extraction models. As shown in BIBREF22, the softmax-margin loss can be expressed as DISPLAYFORM0 where INLINEFORM0 denotes a structured output space for INLINEFORM1, and INLINEFORM2 is the INLINEFORM3 -th example in the training data. We can easily incorporate similarity into the cost function INLINEFORM0. In this task, we define the cost function as INLINEFORM1, where INLINEFORM2 is a hyperparameter. 
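A minimal sketch of this similarity-scaled softmax-margin loss for a single example follows; the array-based formulation and names are assumptions for illustration, with the cost of the gold label conventionally set to zero.

import numpy as np

def softmax_margin_loss(logits, gold, sim_to_gold, alpha=1.0):
    # logits      : scores over all candidate relations for one example
    # gold        : index of the gold relation
    # sim_to_gold : sim_to_gold[r] = similarity between the gold relation and r,
    #               with sim_to_gold[gold] set to 0 so the gold label has no cost
    # alpha       : hyperparameter scaling the similarity-based cost
    augmented = logits + alpha * sim_to_gold         # add cost inside the log-sum-exp
    m = augmented.max()
    log_z = m + np.log(np.exp(augmented - m).sum())  # numerically stable normalizer
    return -logits[gold] + log_z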
Intuitively, we give a larger margin between similar relations, forcing the model to distinguish among them, and thus making the model perform better. We apply our method to Position-aware Attention LSTM (PA-LSTM) BIBREF10 , and tab:relationextraction shows our method improves the performance of PA-LSTM. Training details are described in sec:trainingdetail. Related Works As many early works devoted to psychology and linguistics, especially those works exploring semantic similarity BIBREF11 , BIBREF12 , researchers have empirically found there are various different categorizations of semantic relations among words and contexts. For promoting research on these different semantic relations, bejar1991cognitive explicitly defining these relations and miller1995wordnet further systematically organize rich semantic relations between words via a database. For identifying correlation and distinction between different semantic relations so as to support learning semantic similarity, various methods have attempted to measure relational similarity BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 . With the ongoing development of information extraction and effective construction of KBs BIBREF0 , BIBREF1 , BIBREF30 , relations are further defined as various types of latent connections between objects more than semantic relations. These general relations play a core role in expressing relational facts in the real world. Hence, there are accordingly various methods proposed for discovering more relations and their facts, including open information extraction BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 and relation extraction BIBREF38 , BIBREF39 , BIBREF40 , BIBREF41 , BIBREF42 , BIBREF43 , and relation prediction BIBREF3 , BIBREF44 , BIBREF45 , BIBREF46 , BIBREF47 . For both semantic relations and general relations, identifying them is a crucial problem, requiring systems to provide a fine-grained relation similarity metric. However, the existing methods suffer from sparse data, which makes it difficult to achieve an effective and stable similarity metric. Motivated by this, we propose to measure relation similarity by leveraging their fact distribution so that we can identify nuances between similar relations, and merge those distant surface forms of the same relations, benefitting the tasks mentioned above. Conclusion and Future Work In this paper, we introduce an effective method to quantify the relation similarity and provide analysis and a survey of applications. We note that there are a wide range of future directions: (1) human prior knowledge could be incorporated into the similarity quantification; (2) similarity between relations could also be considered in multi-modal settings, e.g., extracting relations from images, videos, or even from audios; (3) by analyzing the distributions corresponding to different relations, one can also find some “meta-relations” between relations, such as hypernymy and hyponymy. Acknowledgements This work is supported by the National Natural Science Foundation of China (NSFC No. 61572273, 61532010), the National Key Research and Development Program of China (No. 2018YFB1004503). Chen and Zhu is supported by Tsinghua University Initiative Scientific Research Program, and Chen is also supported by DCST Student Academic Training Program. Han is also supported by 2018 Tencent Rhino-Bird Elite Training Program. 
Proofs to theorems in the paper If we have a proposal distribution INLINEFORM0 satisfying INLINEFORM1, then eq:proofrecallfirstpart can be further written as DISPLAYFORM0 Sometimes it is hard to compute the normalized probability INLINEFORM0. To tackle this problem, consider self-normalized importance sampling as an unbiased estimation BIBREF50, DISPLAYFORM0 where INLINEFORM0 is the normalized version of INLINEFORM1. Chinese Restaurant Process Specifically, for a relation INLINEFORM0 with currently INLINEFORM1 sub-relations, we turn it into a new sub-relation with probability DISPLAYFORM0 or assign it to the INLINEFORM0 -th existing sub-relation with probability DISPLAYFORM0 where INLINEFORM0 is the size of the INLINEFORM1 -th existing sub-relation, INLINEFORM2 is the total size of all sub-relations of INLINEFORM3, and INLINEFORM4 is a hyperparameter, for which we use INLINEFORM5. Training Details In the Wikidata and ReVerb Extractions datasets, we manually split off a validation set, ensuring that every entity and relation appearing in the validation set also appears in the training set. While minimizing the loss on the training set, we monitor the loss on the validation set and stop training when the validation loss stops decreasing. Before training our model on any dataset, we use the entity embeddings and relation embeddings produced by TransE on that dataset as the pretrained embeddings for our model. Training Details on Negative Sampling The sampling is launched with an initial temperature of 8192. The temperature is halved every 200 epochs and remains stable once it hits 16. Optimization is performed using SGD, with a learning rate of 1e-3. Training Details on Softmax-Margin Loss The sampling is launched with an initial temperature of 64. The temperature drops by 20% per epoch and remains stable once it hits 16. We set alpha to 9. Optimization is performed using SGD, with a learning rate of 1. Recall Standard Deviation As shown in fig:recallstd, the maximum recall standard deviation for our model is 0.4, and 0.11 for TransE. Negative Sampling with Relation Type Constraints In FB15K, if two relations have the same prefix, we regard them as belonging to the same type; e.g., both /film/film/starring./film/performance/actor and /film/actor/film./film/performance/film have the prefix film, so they belong to the same type. Similar to what is mentioned in sec:training-guidance-relation-prediction, we expect the model first to learn to distinguish among obviously different relations, and gradually learn to distinguish similar relations. Therefore, we conduct negative sampling with relation type constraints in two ways. Add Up Two Uniform Distributions For each triple INLINEFORM0, we have two uniform distributions INLINEFORM1 and INLINEFORM2. INLINEFORM3 is the uniform distribution over all the relations except for those that appear with INLINEFORM4 in the knowledge base, and INLINEFORM5 is the uniform distribution over the relations of the same type as INLINEFORM6. When corrupting the triple, we sample INLINEFORM7 from the distribution: DISPLAYFORM0 where INLINEFORM0 is a hyperparameter. We set INLINEFORM1 to 1 at the beginning of training, and every INLINEFORM2 epochs, INLINEFORM3 is multiplied by the decrease rate INLINEFORM4. We do grid search for INLINEFORM5 and INLINEFORM6, but no improvement is observed. 
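A brief sketch of this mixture-of-two-uniforms corruption step is given below; treating the decayed hyperparameter as the weight on the unconstrained uniform component is our reading of the schedule above, and all names are illustrative rather than the authors' code.

import random

def corrupt_relation(pos_rel, all_rels, same_type_rels, observed_rels, lam):
    # pos_rel        : relation of the valid triple being corrupted
    # all_rels       : every relation in the KB
    # same_type_rels : relations sharing the type (prefix) of pos_rel
    # observed_rels  : relations already observed with this entity pair in the KB
    # lam            : mixture weight, set to 1 at the start and decayed every few epochs
    if random.random() < lam:
        pool = [r for r in all_rels if r not in observed_rels]    # easy negatives
    else:
        pool = [r for r in same_type_rels if r != pos_rel]        # hard, same-type negatives
    return random.choice(pool) if pool else pos_rel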
Add Weight We speculate that the unsatisfactory result produced by adding up two uniform distributions arises because, for types with few relations in them, a small change of INLINEFORM0 results in a significant change in INLINEFORM1. Therefore, when sampling a negative INLINEFORM2, we instead add weights to relations that are of the same type as INLINEFORM3. Concretely, we substitute INLINEFORM4 with INLINEFORM5 with probability INLINEFORM6, which can be calculated as: DISPLAYFORM0 where INLINEFORM0 denotes all the relations that are of the same type as INLINEFORM1, INLINEFORM2 is a hyperparameter, and INLINEFORM3 is a normalizing constant. We set INLINEFORM4 to 0 at the beginning of training, and every INLINEFORM5 epochs, INLINEFORM6 increases by INLINEFORM7. We do grid search for INLINEFORM8 and INLINEFORM9, but still no improvement is observed. Wikidata annotation guidance We show the guidance provided for the annotators here.
Wikidata, ReVerb, FB15K, TACRED
b9f2a30f5ef664ff845d860cf4bfc2afb0a46e5a
b9f2a30f5ef664ff845d860cf4bfc2afb0a46e5a_0
Q: How do they gather human judgements for similarity between relations? Text: Introduction Author contributions: Hao Zhu designed the research; Weize Chen prepared the data, and organized data annotation; Hao Zhu and Xu Han designed the experiments; Weize Chen performed the experiments; Hao Zhu, Weize Chen and Xu Han wrote the paper; Zhiyuan Liu and Maosong Sun proofread the paper. Zhiyuan Liu is the corresponding author. Relations, representing various types of connections between entities or arguments, are the core of expressing relational facts in most general knowledge bases (KBs) BIBREF0 , BIBREF1 . Hence, identifying relations is a crucial problem for several information extraction tasks. Although considerable effort has been devoted to these tasks, some nuances between similar relations are still overlooked, (tab:similarityexample shows an example); on the other hand, some distinct surface forms carrying the same relational semantics are mistaken as different relations. These severe problems motivate us to quantify the similarity between relations in a more effective and robust method. In this paper, we introduce an adaptive and general framework for measuring similarity of the pairs of relations. Suppose for each relation INLINEFORM0 , we have obtained a conditional distribution, INLINEFORM1 ( INLINEFORM2 are head and tail entities, and INLINEFORM3 is a relation), over all head-tail entity pairs given INLINEFORM4 . We could quantify similarity between a pair of relations by the divergence between the conditional probability distributions given these relations. In this paper, this conditional probability is given by a simple feed-forward neural network, which can capture the dependencies between entities conditioned on specific relations. Despite its simplicity, the proposed network is expected to cover various facts, even if the facts are not used for training, owing to the good generalizability of neural networks. For example, our network will assign a fact a higher probability if it is “logical”: e.g., the network might prefer an athlete has the same nationality as same as his/her national team rather than other nations. Intuitively, two similar relations should have similar conditional distributions over head-tail entity pairs INLINEFORM0 , e.g., the entity pairs associated with be trade to and play for are most likely to be athletes and their clubs, whereas those associated with live in are often people and locations. In this paper, we evaluate the similarity between relations based on their conditional distributions over entity pairs. Specifically, we adopt Kullback–Leibler (KL) divergence of both directions as the metric. However, computing exact KL requires iterating over the whole entity pair space INLINEFORM1 , which is quite intractable. Therefore, we further provide a sampling-based method to approximate the similarity score over the entity pair space for computational efficiency. Besides developing a framework for assessing the similarity between relations, our second contribution is that we have done a survey of applications. We present experiments and analysis aimed at answering five questions: (1) How well does the computed similarity score correlate with human judgment about the similarity between relations? How does our approach compare to other possible approaches based on other kinds of relation embeddings to define a similarity? (sec:relationship and sec:human-judgment) (2) Open IE models inevitably extract many redundant relations. 
How can our approach help reduce such redundancy? (sec:openie) (3) To which extent, quantitatively, does best relational classification models make errors among similar relations? (sec:error-analysis) (4) Could similarity be used in a heuristic method to enhance negative sampling for relation prediction? (sec:training-guidance-relation-prediction) (5) Could similarity be used as an adaptive margin in softmax-margin training method for relation extraction? (sec:training-guidance-relation-extraction) Finally, we conclude with a discussion of valid extensions to our method and other possible applications. Learning Head-Tail Distribution Just as introduced in sec:introduction, we quantify the similarity between relations by their corresponding head-tail entity pair distributions. Consider the typical case that we have got numbers of facts, but they are still sparse among all facts in the real world. How could we obtain a well-generalized distribution over the whole space of possible triples beyond the training facts? This section proposes a method to parameterize such a distribution. Formal Definition of Fact Distribution A fact is a triple INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 are called head and tail entities, INLINEFORM3 is the relation connecting them, INLINEFORM4 and INLINEFORM5 are the sets of entities and relations respectively. We consider a score function INLINEFORM6 maps all triples to a scalar value. As a special case, the function can be factorized into the sum of two parts: INLINEFORM7 . We use INLINEFORM8 to define the unnormalized probability. DISPLAYFORM0 for every triple INLINEFORM0 . The real parameter INLINEFORM1 can be adjusted to obtain difference distributions over facts. In this paper, we only consider locally normalized version of INLINEFORM0 : DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are directly parameterized by feed-forward neural networks. Through local normalization, INLINEFORM2 is naturally a valid probability distribution, as the partition function INLINEFORM3 . Therefore, INLINEFORM4 . Neural architecture design Here we introduce our special design of neural networks. For the first part and the second part, we implement the scoring functions introduced in eq:local-normalization as DISPLAYFORM0 where each INLINEFORM0 represents a multi-layer perceptron composed of layers like INLINEFORM1 , INLINEFORM2 , INLINEFORM3 are embeddings of INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 includes weights and biases in all layers. Training Now we discuss the method to perform training. In this paper, we consider joint training. By minimizing the loss function, we compute the model parameters INLINEFORM0 : DISPLAYFORM0 where INLINEFORM0 is a set of triples. The whole set of parameters, INLINEFORM1 . We train these parameters by Adam optimizer BIBREF2 . Training details are shown in sec:trainingdetail. Quantifying Similarity So far, we have talked about how to use neural networks to approximate the natural distribution of facts. The center topic of our paper, quantifying similarity, will be discussed in detail in this section. Relations as Distributions In this paper, we provide a probability view of relations by representing relation INLINEFORM0 as a probability distribution INLINEFORM1 . After training the neural network on a given set of triples, the model is expected to generalize well on the whole INLINEFORM2 space. Note that it is very easy to calculate INLINEFORM0 in our model thanks to local normalization (eq:local-normalization). 
Therefore, we can compute it by DISPLAYFORM0 Defining Similarity As the basis of our definition, we hypothesize that the similarity between INLINEFORM0 reflects the similarity between relations. For example, if the conditional distributions of two relations put mass on similar entity pairs, the two relations should be quite similar. If they emphasize different ones, the two should have some differences in meaning. Formally, we define the similarity between two relations as a function of the divergence between the distributions of corresponding head-tail entity pairs: DISPLAYFORM0 where INLINEFORM0 denotes Kullback–Leibler divergence, DISPLAYFORM0 vice versa, and function INLINEFORM0 is a symmetrical function. To keep the coherence between semantic meaning of “similarity” and our definition, INLINEFORM1 should be a monotonically decreasing function. Through this paper, we choose to use an exponential family composed with max function, i.e., INLINEFORM2 . Note that by taking both sides of KL divergence into account, our definition incorporates both the entity pairs with high probability in INLINEFORM3 and INLINEFORM4 . Intuitively, if INLINEFORM5 mainly distributes on a proportion of entities pairs that INLINEFORM6 emphasizes, INLINEFORM7 is only hyponymy of INLINEFORM8 . Considering both sides of KL divergence could help model yield more comprehensive consideration. We will talk about the advantage of this method in detail in sec:relationship. Calculating Similarity Just as introduced in sec:introduction, it is intractable to compute similarity exactly, as involving INLINEFORM0 computation. Hence, we consider the monte-carlo approximation: DISPLAYFORM0 where INLINEFORM0 is a list of entity pairs sampled from INLINEFORM1 . We use sequential sampling to gain INLINEFORM6 , which means we first sample INLINEFORM7 given INLINEFORM8 from INLINEFORM9 , and then sample INLINEFORM10 given INLINEFORM11 and INLINEFORM12 from INLINEFORM13 . Relationship with other metrics Previous work proposed various methods for representing relations as vectors BIBREF3 , BIBREF4 , as matrices BIBREF5 , even as angles BIBREF6 , etc. Based on each of these representations, one could easily define various similarity quantification methods. We show in tab:other-similarity the best one of them in each category of relation presentation. Here we provide two intuitive reasons for using our proposed probability-based similarity: (1) the capacity of a single fixed-size representation is limited — some details about the fact distribution is lost during embedding; (2) directly comparing distributions yields a better interpretability — you can not know about how two relations are different given two relation embeddings, but our model helps you study the detailed differences between probabilities on every entity pair. fig:head-tail-distribution provides an example. Although the two relations talk about the same topic, they have different meanings. TransE embeds them as vectors the closest to each other, while our model can capture the distinction between the distributions corresponds to the two relations, which could be directly noticed from the figure. Embeddings used in this graph are from a trained TransE model. Dataset Construction We show the statistics of the dataset we use in tab:statistics, and the construction procedures will be introduced in this section. Wikidata In Wikidata BIBREF8 , facts can be described as (Head item/property, Property, Tail item/property). 
To construct a dataset suitable for our task, we only consider the facts whose head entity and tail entity are both items. We first choose the most common 202 relations and 120000 entities from Wikidata as our initial data. Considering that the facts containing the two most frequently appearing relations (P2860: cites, and P31: instance of) occupy half of the initial data, we drop the two relations to downsize the dataset and make the dataset more balanced. Finally, we keep the triples whose head and tail both come from the selected 120000 entities as well as its relation comes from the remaining 200 relations. ReVerb Extractions ReVerb BIBREF9 is a program that automatically identifies and extracts binary relationships from English sentences. We use the extractions from running ReVerb on Wikipedia. We only keep the relations appear more than 10 times and their corresponding triples to construct our dataset. FB15K and TACRED FB15K BIBREF3 is a subset of freebase. TACRED BIBREF10 is a large supervised relation extraction dataset obtained via crowdsourcing. We directly use these two dataset, no extra processing steps were applied. Human Judgments Following BIBREF11 , BIBREF12 and the vast amount of previous work on semantic similarity, we ask nine undergraduate subjects to assess the similarity of 360 pairs of relations from a subset of Wikidata BIBREF8 that are chosen to cover from high to low levels of similarity. In our experiment, subjects were asked to rate an integer similarity score from 0 (no similarity) to 4 (perfectly the same) for each pair. The inter-subject correlation, estimated by leaving-one-out method BIBREF13 , is r = INLINEFORM0 , standard deviation = INLINEFORM1 . This important reference value (marked in fig:correlation) could be seen as the highest expected performance for machines BIBREF12 . To get baselines for comparison, we consider other possible methods to define similarity functions, as shown in tab:other-similarity. We compute the correlation between these methods and human judgment scores. As the models we have chosen are the ones work best in knowledge base completion, we do expect the similarity quantification approaches based on them could measure some degree of similarity. As shown in fig:correlation, the three baseline models could achieve moderate ( INLINEFORM0 ) positive correlation. On the other hand, our model shows a stronger correlation ( INLINEFORM1 ) with human judgment, indicating that considering the probability over whole entity pair space helps to gain a similarity closer to human judgments. These results provide evidence for our claim raised in sec:defining-similarity. Redundant Relation Removal Open IE extracts concise token patterns from plain text to represent various relations between entities, e.g.,, (Mark Twain, was born in, Florida). As Open IE is significant for constructing KBs, many effective extractors have been proposed to extract triples, such as Text-Runner BIBREF14 , ReVerb BIBREF9 , and Standford Open IE BIBREF15 . However, these extractors only yield relation patterns between entities, without aggregating and clustering their results. Accordingly, there are a fair amount of redundant relation patterns after extracting those relation patterns. Furthermore, the redundant patterns lead to some redundant relations in KBs. 
Recently, some efforts are devoted to Open Relation Extraction (Open RE) BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , aiming to cluster relation patterns into several relation types instead of redundant relation patterns. Whenas, these Open RE methods adopt distantly supervised labels as golden relation types, suffering from both false positive and false negative problems on the one hand. On the other hand, these methods still rely on the conventional similarity metrics mentioned above. In this section, we will show that our defined similarity quantification could help Open IE by identifying redundant relations. To be specific, we set a toy experiment to remove redundant relations in KBs for a preliminary comparison (sec:toy-experiment). Then, we evaluate our model and baselines on the real-world dataset extracted by Open IE methods (sec:real-experiment). Considering the existing evaluation metric for Open IE and Open RE rely on either labor-intensive annotations or distantly supervised annotations, we propose a metric approximating recall and precision evaluation based on operable human annotations for balancing both efficiency and accuracy. Toy Experiment In this subsection, we propose a toy environment to verify our similarity-based method. Specifically, we construct a dataset from Wikidata and implement Chinese restaurant process to split every relation in the dataset into several sub-relations. Then, we filter out those sub-relations appearing less than 50 times to eventually get 1165 relations. All these split relations are regarded as different ones during training, and then different relation similarity metrics are adopted to merge those sub-relations into one relation. As Figure FIGREF26 shown that the matrices-based approach is less effective than other approaches, we leave this approach out of this experiment. The results are shown in Table TABREF37 . Real World Experiment In this subsection, we evaluate various relation similarity metrics on the real-world Open IE patterns. The dataset are constructed by ReVerb. Different patterns will be regarded as different relations during training, and we also adopt various relation similarity metrics to merge similar relation patterns. Because it is nearly impossible to annotate all pattern pairs for their merging or not, meanwhile it is also inappropriate to take distantly supervised annotations as golden results. Hence, we propose a novel metric approximating recall and precision evaluation based on minimal human annotations for evaluation in this experiment. Recall is defined as the yielding fraction of true positive instances over the total amount of real positive instances. However, we do not have annotations about which pairs of relations are synonymous. Crowdsourcing is a method to obtain a large number of high-quality annotations. Nevertheless, applying crowdsourcing is not trivial in our settings, because it is intractable to enumerate all synonymous pairs in the large space of relation (pattern) pairs INLINEFORM0 in Open IE. A promising method is to use rejection sampling by uniform sampling from the whole space, and only keep the synonymous ones judged by crowdworkers. However, this is not practical either, as the synonymous pairs are sparse in the whole space, resulting in low efficiency. Fortunately, we could use normalized importance sampling as an alternative to get an unbiased estimation of recall. 
Theorem 1 Suppose every sample INLINEFORM0 has a label INLINEFORM1 , and the model to be evaluated also gives its prediction INLINEFORM2 . The recall can be written as DISPLAYFORM0 where INLINEFORM0 is the uniform distribution over all samples with INLINEFORM1 . If we have a proposal distribution INLINEFORM2 satisfying INLINEFORM3 , we get an unbiased estimation of recall: DISPLAYFORM0 where INLINEFORM0 is a normalized version of INLINEFORM1 , where INLINEFORM2 is the unnormalized version of q, and INLINEFORM3 are i.i.d. drawn from INLINEFORM4 . Similar to eq:recall-expectation, we can write the expectation form of precision: DISPLAYFORM0 where INLINEFORM0 is the uniform distribution over all samples with INLINEFORM1 . As these samples could be found out by performing models on it. We can simply approximate precision by Monte Carlo Sampling: DISPLAYFORM0 where INLINEFORM0 . In our setting, INLINEFORM0 , INLINEFORM1 means INLINEFORM2 and INLINEFORM3 are the same relations, INLINEFORM4 means INLINEFORM5 is larger than a threshold INLINEFORM6 . The results on the ReVerb Extractions dataset that we constructed are described in fig:precision-recall-openie. To approximate recall, we use the similarity scores as the proposal distribution INLINEFORM0 . 500 relation pairs are then drawn from INLINEFORM1 . To approximate precision, we set thresholds at equal intervals. At each threshold, we uniformly sample 50 to 100 relation pairs whose similarity score given by the model is larger than the threshold. We ask 15 undergraduates to judge whether two relations in a relation pair have the same meaning. A relation pair is viewed valid only if 8 of the annotators annotate it as valid. We use the annotations to approximate recall and precision with eq:recall and eq:precision. Apart from the confidential interval of precision shown in the figure, the largest INLINEFORM2 confidential interval among thresholds for recall is INLINEFORM3 . From the result, we could see that our model performs much better than other models' similarity by a very large margin. Error Analysis for Relational Classification In this section, we consider two kinds of relational classification tasks: (1) relation prediction and (2) relation extraction. Relation prediction aims at predicting the relationship between entities with a given set of triples as training data; while relation extraction aims at extracting the relationship between two entities in a sentence. Relation Prediction We hope to design a simple and clear experiment setup to conduct error analysis for relational prediction. Therefore, we consider a typical method TransE BIBREF3 as the subject as well as FB15K BIBREF3 as the dataset. TransE embeds entities and relations as vectors, and train these embeddings by minimizing DISPLAYFORM0 where INLINEFORM0 is the set of training triples, INLINEFORM1 is the distance function, INLINEFORM2 is a negative sample with one element different from INLINEFORM4 uniformly sampled from INLINEFORM5 , and INLINEFORM6 is the margin. During testing, for each entity pair INLINEFORM0 , TransE rank relations according to INLINEFORM1 . For each INLINEFORM2 in the test set, we call the relations with higher rank scores than INLINEFORM3 distracting relations. We then compare the similarity between the golden relation and distracting relations. Note that some entity pairs could correspond to more than one relations, in which case we just do not see them as distracting relations. 
Relation Extraction For relation extraction, we consider the supervised relation extraction setting and TACRED dataset BIBREF10 . As for the subject model, we use the best model on TACRED dataset — position-aware neural sequence model. This method first passes the sentence into an LSTM and then calculate an attention sum of the hidden states in the LSTM by taking positional features into account. This simple and effective method achieves the best in TACRED dataset. Results fig:averank shows the distribution of similarity ranks of distracting relations of the above mentioned models' outputs on both relation prediction and relation extraction tasks. From fig:averankrp,fig:averankre, we could observe the most distracting relations are the most similar ones, which corroborate our hypothesis that even the best models on these tasks still make mistakes among the most similar relations. This result also highlights the importance of a heuristic method for guiding models to pay more attention to the boundary between similar relations. We also try to do the negative sampling with relation type constraints, but we see no improvement compared with uniform sampling. The details of negative sampling with relation type constraints are presented in sec:relation-type-constraints. Similarity and Negative Sampling Based on the observation presented in sec:erroranalysisresult, we find out that similar relations are often confusing for relation prediction models. Therefore, corrupted triples with similar relations can be used as high-quality negative samples. For a given valid triple INLINEFORM0 , we corrupt the triple by substituting INLINEFORM1 with INLINEFORM2 with the probability, DISPLAYFORM0 where INLINEFORM0 is the temperature of the exponential function, the bigger the INLINEFORM1 is, the flatter the probability distribution is. When the temperature approaches infinite, the sampling process reduces to uniform sampling. In training, we set the initial temperature to a high level and gradually reduce the temperature. Intuitively, it enables the model to distinguish among those obviously different relations in the early stage and gives more and more confusing negative triples as the training processes to help the model distinguish the similar relations. This can be also viewed as a process of curriculum learning BIBREF21 , the data fed to the model gradually changes from simple negative triples to hard ones. We perform relation prediction task on FB15K with TransE. Following BIBREF3 , we use the "Filtered" setting protocol, i.e., filtering out the corrupted triples that appear in the dataset. Our sampling method is shown to improve the model's performance, especially on Hit@1 (fig:relationprediction). Training details are described in sec:trainingdetail. Similarity and Softmax-Margin Loss Similar to sec:training-guidance-relation-prediction, we find out that relation extraction models often make wrong preditions on similar relations. In this section, we use similarity as an adaptive margin in softmax-margin loss to improve the performance of relation extraction models. As shown in BIBREF22 , Softmax-Margin Loss can be expressed as DISPLAYFORM0 where INLINEFORM0 denotes a structured output space for INLINEFORM1 , and INLINEFORM2 is INLINEFORM3 example in training data. We can easily incorporate similarity into cost function INLINEFORM0 . In this task, we define the cost function as INLINEFORM1 , where INLINEFORM2 is a hyperparameter. 
Intuitively, we give a larger margin between similar relations, forcing the model to distinguish among them, and thus making the model perform better. We apply our method to Position-aware Attention LSTM (PA-LSTM) BIBREF10 , and tab:relationextraction shows our method improves the performance of PA-LSTM. Training details are described in sec:trainingdetail. Related Works As many early works devoted to psychology and linguistics, especially those works exploring semantic similarity BIBREF11 , BIBREF12 , researchers have empirically found there are various different categorizations of semantic relations among words and contexts. For promoting research on these different semantic relations, bejar1991cognitive explicitly defining these relations and miller1995wordnet further systematically organize rich semantic relations between words via a database. For identifying correlation and distinction between different semantic relations so as to support learning semantic similarity, various methods have attempted to measure relational similarity BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 . With the ongoing development of information extraction and effective construction of KBs BIBREF0 , BIBREF1 , BIBREF30 , relations are further defined as various types of latent connections between objects more than semantic relations. These general relations play a core role in expressing relational facts in the real world. Hence, there are accordingly various methods proposed for discovering more relations and their facts, including open information extraction BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 and relation extraction BIBREF38 , BIBREF39 , BIBREF40 , BIBREF41 , BIBREF42 , BIBREF43 , and relation prediction BIBREF3 , BIBREF44 , BIBREF45 , BIBREF46 , BIBREF47 . For both semantic relations and general relations, identifying them is a crucial problem, requiring systems to provide a fine-grained relation similarity metric. However, the existing methods suffer from sparse data, which makes it difficult to achieve an effective and stable similarity metric. Motivated by this, we propose to measure relation similarity by leveraging their fact distribution so that we can identify nuances between similar relations, and merge those distant surface forms of the same relations, benefitting the tasks mentioned above. Conclusion and Future Work In this paper, we introduce an effective method to quantify the relation similarity and provide analysis and a survey of applications. We note that there are a wide range of future directions: (1) human prior knowledge could be incorporated into the similarity quantification; (2) similarity between relations could also be considered in multi-modal settings, e.g., extracting relations from images, videos, or even from audios; (3) by analyzing the distributions corresponding to different relations, one can also find some “meta-relations” between relations, such as hypernymy and hyponymy. Acknowledgements This work is supported by the National Natural Science Foundation of China (NSFC No. 61572273, 61532010), the National Key Research and Development Program of China (No. 2018YFB1004503). Chen and Zhu is supported by Tsinghua University Initiative Scientific Research Program, and Chen is also supported by DCST Student Academic Training Program. Han is also supported by 2018 Tencent Rhino-Bird Elite Training Program. 
Proofs to theorems in the paper If we have a proposal distribution INLINEFORM0 satisfying INLINEFORM1 , then eq:proofrecallfirstpart can be further written as DISPLAYFORM0 Sometimes, it's hard for us to compute normalized probability INLINEFORM0 . To tackle this problem, consider self-normalized importance sampling as an unbiased estimation BIBREF50 , DISPLAYFORM0 where INLINEFORM0 is the normalized version of INLINEFORM1 . Chinese Restaurant Process Specifically, for a relation INLINEFORM0 with currently INLINEFORM1 sub-relations, we turn it to a new sub-relation with probability DISPLAYFORM0 or to the INLINEFORM0 existing sub-relation with probability DISPLAYFORM0 where INLINEFORM0 is the size of INLINEFORM1 existing sub-relation, INLINEFORM2 is the sum of the number of all sub-relationships of INLINEFORM3 , and INLINEFORM4 is a hyperparameter, in which case we use INLINEFORM5 . Training Details In Wikidata and ReVerb Extractions dataset, we manually split a validation set, assuring every entity and relation appears in validation set also appears in training set. While minimizing loss on the training set, we observe the loss on the validation set and stop training as validation loss stops to decrease. Before training our model on any dataset, we use the entity embeddings and relation embeddings produced by TransE on the dataset as the pretrained embeddings for our model. Training Details on Negative Sampling The sampling is launched with an initial temperature of 8192. The temperature drops to half every 200 epochs and remains stable once it hits 16. Optimization is performed using SGD, with a learning rate of 1e-3. Training Details on Softmax-Margin Loss The sampling is launching with an initial temperature of 64. The temperature drops by 20% per epoch, and remains stable once it hits 16. The alpha we use is 9. Optimization is performed using SGD, with a learning rate of 1. Recall Standard Deviation As is shown in fig:recallstd, the max recall standard deviation for our model is 0.4, and 0.11 for TransE. Negative Samplilng with Relation Type Constraints In FB15K, if two relations have same prefix, we regard them as belonging to a same type, e.g., both /film/film/starring./film/performance/actor and /film/actor/film./film/performance/film have prefix film, they belong to same type. Similar to what is mentioned in sec:training-guidance-relation-prediction, we expect the model first to learn to distinguish among obviously different relations, and gradually learn to distinguish similar relations. Therefore, we conduct negative sampling with relation type constraints in two ways. Add Up Two Uniform Distribution For each triple INLINEFORM0 , we have two uniform distribution INLINEFORM1 and INLINEFORM2 . INLINEFORM3 is the uniform distribution over all the relations except for those appear with INLINEFORM4 in the knowledge base, and INLINEFORM5 is the uniform distribution over the relations of the same type as INLINEFORM6 . When corrupting the triple, we sample INLINEFORM7 from the distribution: DISPLAYFORM0 where INLINEFORM0 is a hyperparameter. We set INLINEFORM1 to 1 at the beginning of training, and every INLINEFORM2 epochs, INLINEFORM3 will be multiplied by decrease rate INLINEFORM4 . We do grid search for INLINEFORM5 and INLINEFORM6 , but no improvement is observed. 
Add Weight We speculate that the unsatisfactory result produced by adding up two uniform distribution is because that for those types with few relations in it, a small change of INLINEFORM0 will result in a significant change in INLINEFORM1 . Therefore, when sampling a negative INLINEFORM2 , we add weights to relations that are of the same type as INLINEFORM3 instead. Concretely, we substitute INLINEFORM4 with INLINEFORM5 with probability INLINEFORM6 , which can be calculated as: DISPLAYFORM0 where INLINEFORM0 denotes all the relations that are the same type as INLINEFORM1 , INLINEFORM2 is a hyperparameter and INLINEFORM3 is a normalizing constant. We set INLINEFORM4 to 0 at the beginning of training, and every INLINEFORM5 epochs, INLINEFORM6 will increase by INLINEFORM7 . We do grid search for INLINEFORM8 and INLINEFORM9 , still no improvement is observed. Wikidata annotation guidance We show the guidance provided for the annotators here.
By assessing similarity of 360 pairs of relations from a subset of Wikidata using an integer similarity score from 0 to 4
3513682d4ee2e64725b956c489cd5b5995a6acf2
3513682d4ee2e64725b956c489cd5b5995a6acf2_0
Q: Which sampling method do they use to approximate similarity between the conditional probability distributions over entity pairs? Text: Introduction Author contributions: Hao Zhu designed the research; Weize Chen prepared the data, and organized data annotation; Hao Zhu and Xu Han designed the experiments; Weize Chen performed the experiments; Hao Zhu, Weize Chen and Xu Han wrote the paper; Zhiyuan Liu and Maosong Sun proofread the paper. Zhiyuan Liu is the corresponding author. Relations, representing various types of connections between entities or arguments, are the core of expressing relational facts in most general knowledge bases (KBs) BIBREF0 , BIBREF1 . Hence, identifying relations is a crucial problem for several information extraction tasks. Although considerable effort has been devoted to these tasks, some nuances between similar relations are still overlooked, (tab:similarityexample shows an example); on the other hand, some distinct surface forms carrying the same relational semantics are mistaken as different relations. These severe problems motivate us to quantify the similarity between relations in a more effective and robust method. In this paper, we introduce an adaptive and general framework for measuring similarity of the pairs of relations. Suppose for each relation INLINEFORM0 , we have obtained a conditional distribution, INLINEFORM1 ( INLINEFORM2 are head and tail entities, and INLINEFORM3 is a relation), over all head-tail entity pairs given INLINEFORM4 . We could quantify similarity between a pair of relations by the divergence between the conditional probability distributions given these relations. In this paper, this conditional probability is given by a simple feed-forward neural network, which can capture the dependencies between entities conditioned on specific relations. Despite its simplicity, the proposed network is expected to cover various facts, even if the facts are not used for training, owing to the good generalizability of neural networks. For example, our network will assign a fact a higher probability if it is “logical”: e.g., the network might prefer an athlete has the same nationality as same as his/her national team rather than other nations. Intuitively, two similar relations should have similar conditional distributions over head-tail entity pairs INLINEFORM0 , e.g., the entity pairs associated with be trade to and play for are most likely to be athletes and their clubs, whereas those associated with live in are often people and locations. In this paper, we evaluate the similarity between relations based on their conditional distributions over entity pairs. Specifically, we adopt Kullback–Leibler (KL) divergence of both directions as the metric. However, computing exact KL requires iterating over the whole entity pair space INLINEFORM1 , which is quite intractable. Therefore, we further provide a sampling-based method to approximate the similarity score over the entity pair space for computational efficiency. Besides developing a framework for assessing the similarity between relations, our second contribution is that we have done a survey of applications. We present experiments and analysis aimed at answering five questions: (1) How well does the computed similarity score correlate with human judgment about the similarity between relations? How does our approach compare to other possible approaches based on other kinds of relation embeddings to define a similarity? 
(sec:relationship and sec:human-judgment) (2) Open IE models inevitably extract many redundant relations. How can our approach help reduce such redundancy? (sec:openie) (3) To which extent, quantitatively, does best relational classification models make errors among similar relations? (sec:error-analysis) (4) Could similarity be used in a heuristic method to enhance negative sampling for relation prediction? (sec:training-guidance-relation-prediction) (5) Could similarity be used as an adaptive margin in softmax-margin training method for relation extraction? (sec:training-guidance-relation-extraction) Finally, we conclude with a discussion of valid extensions to our method and other possible applications. Learning Head-Tail Distribution Just as introduced in sec:introduction, we quantify the similarity between relations by their corresponding head-tail entity pair distributions. Consider the typical case that we have got numbers of facts, but they are still sparse among all facts in the real world. How could we obtain a well-generalized distribution over the whole space of possible triples beyond the training facts? This section proposes a method to parameterize such a distribution. Formal Definition of Fact Distribution A fact is a triple INLINEFORM0 , where INLINEFORM1 and INLINEFORM2 are called head and tail entities, INLINEFORM3 is the relation connecting them, INLINEFORM4 and INLINEFORM5 are the sets of entities and relations respectively. We consider a score function INLINEFORM6 maps all triples to a scalar value. As a special case, the function can be factorized into the sum of two parts: INLINEFORM7 . We use INLINEFORM8 to define the unnormalized probability. DISPLAYFORM0 for every triple INLINEFORM0 . The real parameter INLINEFORM1 can be adjusted to obtain difference distributions over facts. In this paper, we only consider locally normalized version of INLINEFORM0 : DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are directly parameterized by feed-forward neural networks. Through local normalization, INLINEFORM2 is naturally a valid probability distribution, as the partition function INLINEFORM3 . Therefore, INLINEFORM4 . Neural architecture design Here we introduce our special design of neural networks. For the first part and the second part, we implement the scoring functions introduced in eq:local-normalization as DISPLAYFORM0 where each INLINEFORM0 represents a multi-layer perceptron composed of layers like INLINEFORM1 , INLINEFORM2 , INLINEFORM3 are embeddings of INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 includes weights and biases in all layers. Training Now we discuss the method to perform training. In this paper, we consider joint training. By minimizing the loss function, we compute the model parameters INLINEFORM0 : DISPLAYFORM0 where INLINEFORM0 is a set of triples. The whole set of parameters, INLINEFORM1 . We train these parameters by Adam optimizer BIBREF2 . Training details are shown in sec:trainingdetail. Quantifying Similarity So far, we have talked about how to use neural networks to approximate the natural distribution of facts. The center topic of our paper, quantifying similarity, will be discussed in detail in this section. Relations as Distributions In this paper, we provide a probability view of relations by representing relation INLINEFORM0 as a probability distribution INLINEFORM1 . After training the neural network on a given set of triples, the model is expected to generalize well on the whole INLINEFORM2 space. 
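As an illustration of the locally normalized parameterization just described, here is a minimal PyTorch sketch. The class and attribute names, layer sizes, and the two-layer MLPs are assumptions for concreteness rather than the paper's exact architecture; only the factorization into a head-given-relation term and a tail-given-head-and-relation term, each locally normalized, comes from the text.

```python
import torch
import torch.nn as nn

class FactDistribution(nn.Module):
    """Locally normalized fact distribution p(h, t | r) = p(h | r) * p(t | h, r)."""

    def __init__(self, n_entities, n_relations, dim=128):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        # first part: scores head entities given the relation
        self.head_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, n_entities))
        # second part: scores tail entities given the head entity and the relation
        self.tail_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, n_entities))

    def log_prob(self, h, r, t):
        r_emb = self.rel(r)
        log_p_h = torch.log_softmax(self.head_mlp(r_emb), dim=-1)             # log p(. | r)
        log_p_t = torch.log_softmax(
            self.tail_mlp(torch.cat([self.ent(h), r_emb], dim=-1)), dim=-1)   # log p(. | h, r)
        return (log_p_h.gather(-1, h.unsqueeze(-1)).squeeze(-1)
                + log_p_t.gather(-1, t.unsqueeze(-1)).squeeze(-1))

# Joint training sketch: minimize the negative log-likelihood of observed triples with Adam,
# e.g. loss = -model.log_prob(h, r, t).mean() over batches from the triple set.
```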
Note that it is very easy to calculate INLINEFORM0 in our model thanks to local normalization (eq:local-normalization). Therefore, we can compute it by DISPLAYFORM0 Defining Similarity As the basis of our definition, we hypothesize that the similarity between INLINEFORM0 reflects the similarity between relations. For example, if the conditional distributions of two relations put mass on similar entity pairs, the two relations should be quite similar. If they emphasize different ones, the two should have some differences in meaning. Formally, we define the similarity between two relations as a function of the divergence between the distributions of corresponding head-tail entity pairs: DISPLAYFORM0 where INLINEFORM0 denotes Kullback–Leibler divergence, DISPLAYFORM0 vice versa, and function INLINEFORM0 is a symmetrical function. To keep the coherence between semantic meaning of “similarity” and our definition, INLINEFORM1 should be a monotonically decreasing function. Through this paper, we choose to use an exponential family composed with max function, i.e., INLINEFORM2 . Note that by taking both sides of KL divergence into account, our definition incorporates both the entity pairs with high probability in INLINEFORM3 and INLINEFORM4 . Intuitively, if INLINEFORM5 mainly distributes on a proportion of entities pairs that INLINEFORM6 emphasizes, INLINEFORM7 is only hyponymy of INLINEFORM8 . Considering both sides of KL divergence could help model yield more comprehensive consideration. We will talk about the advantage of this method in detail in sec:relationship. Calculating Similarity Just as introduced in sec:introduction, it is intractable to compute similarity exactly, as involving INLINEFORM0 computation. Hence, we consider the monte-carlo approximation: DISPLAYFORM0 where INLINEFORM0 is a list of entity pairs sampled from INLINEFORM1 . We use sequential sampling to gain INLINEFORM6 , which means we first sample INLINEFORM7 given INLINEFORM8 from INLINEFORM9 , and then sample INLINEFORM10 given INLINEFORM11 and INLINEFORM12 from INLINEFORM13 . Relationship with other metrics Previous work proposed various methods for representing relations as vectors BIBREF3 , BIBREF4 , as matrices BIBREF5 , even as angles BIBREF6 , etc. Based on each of these representations, one could easily define various similarity quantification methods. We show in tab:other-similarity the best one of them in each category of relation presentation. Here we provide two intuitive reasons for using our proposed probability-based similarity: (1) the capacity of a single fixed-size representation is limited — some details about the fact distribution is lost during embedding; (2) directly comparing distributions yields a better interpretability — you can not know about how two relations are different given two relation embeddings, but our model helps you study the detailed differences between probabilities on every entity pair. fig:head-tail-distribution provides an example. Although the two relations talk about the same topic, they have different meanings. TransE embeds them as vectors the closest to each other, while our model can capture the distinction between the distributions corresponds to the two relations, which could be directly noticed from the figure. Embeddings used in this graph are from a trained TransE model. Dataset Construction We show the statistics of the dataset we use in tab:statistics, and the construction procedures will be introduced in this section. 
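Before moving on to the datasets, here is a minimal sketch of the sequential-sampling Monte Carlo estimate of the two KL terms described above, reusing the FactDistribution sketch from earlier. The sample size and the particular choice of the symmetric function, g(x, y) = exp(-max(x, y)), are assumptions consistent with the description ("an exponential family composed with max function") rather than a transcription of the elided formulas.

```python
import torch

@torch.no_grad()
def kl_mc(model, r1, r2, n_samples=256):
    """Monte Carlo estimate of KL(p(h, t | r1) || p(h, t | r2)) via sequential sampling:
    draw h ~ p(h | r1), then t ~ p(t | h, r1), and average the log-probability ratio."""
    r1_idx = torch.full((n_samples,), r1, dtype=torch.long)
    r2_idx = torch.full((n_samples,), r2, dtype=torch.long)
    h = torch.distributions.Categorical(logits=model.head_mlp(model.rel(r1_idx))).sample()
    t_logits = model.tail_mlp(torch.cat([model.ent(h), model.rel(r1_idx)], dim=-1))
    t = torch.distributions.Categorical(logits=t_logits).sample()
    return (model.log_prob(h, r1_idx, t) - model.log_prob(h, r2_idx, t)).mean()

def similarity(model, r1, r2):
    # One monotonically decreasing choice of g: exp(-max(KL(r1||r2), KL(r2||r1))).
    return torch.exp(-torch.max(kl_mc(model, r1, r2), kl_mc(model, r2, r1)))
```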
Wikidata In Wikidata BIBREF8 , facts can be described as (Head item/property, Property, Tail item/property). To construct a dataset suitable for our task, we only consider the facts whose head entity and tail entity are both items. We first choose the most common 202 relations and 120000 entities from Wikidata as our initial data. Considering that the facts containing the two most frequently appearing relations (P2860: cites, and P31: instance of) occupy half of the initial data, we drop the two relations to downsize the dataset and make the dataset more balanced. Finally, we keep the triples whose head and tail both come from the selected 120000 entities as well as its relation comes from the remaining 200 relations. ReVerb Extractions ReVerb BIBREF9 is a program that automatically identifies and extracts binary relationships from English sentences. We use the extractions from running ReVerb on Wikipedia. We only keep the relations appear more than 10 times and their corresponding triples to construct our dataset. FB15K and TACRED FB15K BIBREF3 is a subset of freebase. TACRED BIBREF10 is a large supervised relation extraction dataset obtained via crowdsourcing. We directly use these two dataset, no extra processing steps were applied. Human Judgments Following BIBREF11 , BIBREF12 and the vast amount of previous work on semantic similarity, we ask nine undergraduate subjects to assess the similarity of 360 pairs of relations from a subset of Wikidata BIBREF8 that are chosen to cover from high to low levels of similarity. In our experiment, subjects were asked to rate an integer similarity score from 0 (no similarity) to 4 (perfectly the same) for each pair. The inter-subject correlation, estimated by leaving-one-out method BIBREF13 , is r = INLINEFORM0 , standard deviation = INLINEFORM1 . This important reference value (marked in fig:correlation) could be seen as the highest expected performance for machines BIBREF12 . To get baselines for comparison, we consider other possible methods to define similarity functions, as shown in tab:other-similarity. We compute the correlation between these methods and human judgment scores. As the models we have chosen are the ones work best in knowledge base completion, we do expect the similarity quantification approaches based on them could measure some degree of similarity. As shown in fig:correlation, the three baseline models could achieve moderate ( INLINEFORM0 ) positive correlation. On the other hand, our model shows a stronger correlation ( INLINEFORM1 ) with human judgment, indicating that considering the probability over whole entity pair space helps to gain a similarity closer to human judgments. These results provide evidence for our claim raised in sec:defining-similarity. Redundant Relation Removal Open IE extracts concise token patterns from plain text to represent various relations between entities, e.g.,, (Mark Twain, was born in, Florida). As Open IE is significant for constructing KBs, many effective extractors have been proposed to extract triples, such as Text-Runner BIBREF14 , ReVerb BIBREF9 , and Standford Open IE BIBREF15 . However, these extractors only yield relation patterns between entities, without aggregating and clustering their results. Accordingly, there are a fair amount of redundant relation patterns after extracting those relation patterns. Furthermore, the redundant patterns lead to some redundant relations in KBs. 
Recently, some efforts have been devoted to Open Relation Extraction (Open RE) BIBREF16 , BIBREF17 , BIBREF18 , BIBREF19 , aiming to cluster relation patterns into several relation types instead of keeping redundant relation patterns. However, these Open RE methods adopt distantly supervised labels as golden relation types, suffering from both false positive and false negative problems on the one hand. On the other hand, these methods still rely on the conventional similarity metrics mentioned above. In this section, we will show that our defined similarity quantification could help Open IE by identifying redundant relations. To be specific, we set up a toy experiment to remove redundant relations in KBs for a preliminary comparison (sec:toy-experiment). Then, we evaluate our model and baselines on a real-world dataset extracted by Open IE methods (sec:real-experiment). Considering that the existing evaluation metrics for Open IE and Open RE rely on either labor-intensive annotations or distantly supervised annotations, we propose a metric approximating recall and precision based on operable human annotations, balancing both efficiency and accuracy. Toy Experiment In this subsection, we propose a toy environment to verify our similarity-based method. Specifically, we construct a dataset from Wikidata and implement a Chinese restaurant process to split every relation in the dataset into several sub-relations. Then, we filter out those sub-relations appearing fewer than 50 times to eventually get 1165 relations. All these split relations are regarded as different ones during training, and then different relation similarity metrics are adopted to merge those sub-relations back into one relation. As Figure FIGREF26 shows that the matrices-based approach is less effective than the other approaches, we leave this approach out of this experiment. The results are shown in Table TABREF37 . Real World Experiment In this subsection, we evaluate various relation similarity metrics on real-world Open IE patterns. The dataset is constructed by ReVerb. Different patterns are regarded as different relations during training, and we again adopt various relation similarity metrics to merge similar relation patterns. Because it is nearly impossible to annotate all pattern pairs for whether they should be merged, and it is also inappropriate to take distantly supervised annotations as golden results, we propose a novel metric approximating recall and precision based on minimal human annotations for evaluation in this experiment. Recall is defined as the fraction of true positive instances over the total number of real positive instances. However, we do not have annotations about which pairs of relations are synonymous. Crowdsourcing is a method to obtain a large number of high-quality annotations. Nevertheless, applying crowdsourcing is not trivial in our setting, because it is intractable to enumerate all synonymous pairs in the large space of relation (pattern) pairs INLINEFORM0 in Open IE. A promising method is rejection sampling: sample uniformly from the whole space and only keep the synonymous pairs judged by crowdworkers. However, this is not practical either, as synonymous pairs are sparse in the whole space, resulting in low efficiency. Fortunately, we can use normalized importance sampling as an alternative to get an unbiased estimate of recall. 
Theorem 1 Suppose every sample INLINEFORM0 has a label INLINEFORM1 , and the model to be evaluated also gives its prediction INLINEFORM2 . The recall can be written as DISPLAYFORM0 where INLINEFORM0 is the uniform distribution over all samples with INLINEFORM1 . If we have a proposal distribution INLINEFORM2 satisfying INLINEFORM3 , we get an unbiased estimation of recall: DISPLAYFORM0 where INLINEFORM0 is a normalized version of INLINEFORM1 , where INLINEFORM2 is the unnormalized version of q, and INLINEFORM3 are i.i.d. drawn from INLINEFORM4 . Similar to eq:recall-expectation, we can write the expectation form of precision: DISPLAYFORM0 where INLINEFORM0 is the uniform distribution over all samples with INLINEFORM1 . As these samples could be found out by performing models on it. We can simply approximate precision by Monte Carlo Sampling: DISPLAYFORM0 where INLINEFORM0 . In our setting, INLINEFORM0 , INLINEFORM1 means INLINEFORM2 and INLINEFORM3 are the same relations, INLINEFORM4 means INLINEFORM5 is larger than a threshold INLINEFORM6 . The results on the ReVerb Extractions dataset that we constructed are described in fig:precision-recall-openie. To approximate recall, we use the similarity scores as the proposal distribution INLINEFORM0 . 500 relation pairs are then drawn from INLINEFORM1 . To approximate precision, we set thresholds at equal intervals. At each threshold, we uniformly sample 50 to 100 relation pairs whose similarity score given by the model is larger than the threshold. We ask 15 undergraduates to judge whether two relations in a relation pair have the same meaning. A relation pair is viewed valid only if 8 of the annotators annotate it as valid. We use the annotations to approximate recall and precision with eq:recall and eq:precision. Apart from the confidential interval of precision shown in the figure, the largest INLINEFORM2 confidential interval among thresholds for recall is INLINEFORM3 . From the result, we could see that our model performs much better than other models' similarity by a very large margin. Error Analysis for Relational Classification In this section, we consider two kinds of relational classification tasks: (1) relation prediction and (2) relation extraction. Relation prediction aims at predicting the relationship between entities with a given set of triples as training data; while relation extraction aims at extracting the relationship between two entities in a sentence. Relation Prediction We hope to design a simple and clear experiment setup to conduct error analysis for relational prediction. Therefore, we consider a typical method TransE BIBREF3 as the subject as well as FB15K BIBREF3 as the dataset. TransE embeds entities and relations as vectors, and train these embeddings by minimizing DISPLAYFORM0 where INLINEFORM0 is the set of training triples, INLINEFORM1 is the distance function, INLINEFORM2 is a negative sample with one element different from INLINEFORM4 uniformly sampled from INLINEFORM5 , and INLINEFORM6 is the margin. During testing, for each entity pair INLINEFORM0 , TransE rank relations according to INLINEFORM1 . For each INLINEFORM2 in the test set, we call the relations with higher rank scores than INLINEFORM3 distracting relations. We then compare the similarity between the golden relation and distracting relations. Note that some entity pairs could correspond to more than one relations, in which case we just do not see them as distracting relations. 
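Returning to the recall and precision approximations above, here is a minimal NumPy sketch. The function names and the treatment of labels and predictions as 0/1 indicator functions are assumptions; the self-normalized importance-sampling form follows the stated theorem, with the similarity scores supplying the unnormalized proposal weights.

```python
import numpy as np

def recall_estimate(sampled_pairs, q_unnorm, human_label, model_pred):
    """Self-normalized importance-sampling estimate of recall.
    sampled_pairs: relation pairs drawn i.i.d. from the proposal q (proportional to the
                   model's similarity scores); q_unnorm(x) is the unnormalized weight,
    human_label(x) in {0, 1}: annotators judge the pair synonymous or not,
    model_pred(x)  in {0, 1}: similarity above the chosen threshold or not."""
    w = np.array([human_label(x) / q_unnorm(x) for x in sampled_pairs])
    f = np.array([model_pred(x) for x in sampled_pairs])
    return float((w * f).sum() / w.sum())   # the unknown normalizer of q cancels

def precision_estimate(positive_pairs, human_label):
    """Monte Carlo precision: fraction of annotated-valid pairs among pairs sampled
    uniformly from those the model labels positive at the current threshold."""
    return float(np.mean([human_label(x) for x in positive_pairs]))
```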
Relation Extraction For relation extraction, we consider the supervised relation extraction setting and TACRED dataset BIBREF10 . As for the subject model, we use the best model on TACRED dataset — position-aware neural sequence model. This method first passes the sentence into an LSTM and then calculate an attention sum of the hidden states in the LSTM by taking positional features into account. This simple and effective method achieves the best in TACRED dataset. Results fig:averank shows the distribution of similarity ranks of distracting relations of the above mentioned models' outputs on both relation prediction and relation extraction tasks. From fig:averankrp,fig:averankre, we could observe the most distracting relations are the most similar ones, which corroborate our hypothesis that even the best models on these tasks still make mistakes among the most similar relations. This result also highlights the importance of a heuristic method for guiding models to pay more attention to the boundary between similar relations. We also try to do the negative sampling with relation type constraints, but we see no improvement compared with uniform sampling. The details of negative sampling with relation type constraints are presented in sec:relation-type-constraints. Similarity and Negative Sampling Based on the observation presented in sec:erroranalysisresult, we find out that similar relations are often confusing for relation prediction models. Therefore, corrupted triples with similar relations can be used as high-quality negative samples. For a given valid triple INLINEFORM0 , we corrupt the triple by substituting INLINEFORM1 with INLINEFORM2 with the probability, DISPLAYFORM0 where INLINEFORM0 is the temperature of the exponential function, the bigger the INLINEFORM1 is, the flatter the probability distribution is. When the temperature approaches infinite, the sampling process reduces to uniform sampling. In training, we set the initial temperature to a high level and gradually reduce the temperature. Intuitively, it enables the model to distinguish among those obviously different relations in the early stage and gives more and more confusing negative triples as the training processes to help the model distinguish the similar relations. This can be also viewed as a process of curriculum learning BIBREF21 , the data fed to the model gradually changes from simple negative triples to hard ones. We perform relation prediction task on FB15K with TransE. Following BIBREF3 , we use the "Filtered" setting protocol, i.e., filtering out the corrupted triples that appear in the dataset. Our sampling method is shown to improve the model's performance, especially on Hit@1 (fig:relationprediction). Training details are described in sec:trainingdetail. Similarity and Softmax-Margin Loss Similar to sec:training-guidance-relation-prediction, we find out that relation extraction models often make wrong preditions on similar relations. In this section, we use similarity as an adaptive margin in softmax-margin loss to improve the performance of relation extraction models. As shown in BIBREF22 , Softmax-Margin Loss can be expressed as DISPLAYFORM0 where INLINEFORM0 denotes a structured output space for INLINEFORM1 , and INLINEFORM2 is INLINEFORM3 example in training data. We can easily incorporate similarity into cost function INLINEFORM0 . In this task, we define the cost function as INLINEFORM1 , where INLINEFORM2 is a hyperparameter. 
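Two short NumPy sketches of the training heuristics described above: similarity-weighted negative sampling with an annealed temperature, and the softmax-margin loss with an adaptive, similarity-based cost. The exponential form of the sampling probability, the function names, and the defaults are assumptions consistent with the description; the temperature schedule mirrors the training details given later in the paper.

```python
import numpy as np

def sample_negative_relation(r, sim_row, temperature, rng=np.random):
    """Corrupt the relation of a valid triple: sample r' != r with probability proportional
    to exp(similarity(r, r') / T). A large T approaches uniform sampling; a small T
    concentrates on relations similar to r."""
    logits = np.array(sim_row, dtype=float) / temperature
    logits[r] = -np.inf                             # never sample the golden relation itself
    probs = np.exp(logits - logits[np.isfinite(logits)].max())
    return int(rng.choice(len(probs), p=probs / probs.sum()))

def temperature_schedule(epoch, t0=8192.0, floor=16.0, halve_every=200):
    """Annealing used for relation prediction: halve every 200 epochs, never below 16."""
    return max(floor, t0 / 2 ** (epoch // halve_every))

def softmax_margin_loss(scores, gold, sim_to_gold, alpha):
    """Softmax-margin loss with adaptive cost alpha * similarity(r, gold) for r != gold:
    loss = -score(gold) + log sum_r exp(score(r) + cost(r))."""
    s = np.asarray(scores, dtype=float)
    cost = alpha * np.asarray(sim_to_gold, dtype=float)
    cost[gold] = 0.0
    z = s + cost
    return float(np.log(np.exp(z - z.max()).sum()) + z.max() - s[gold])
```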
Intuitively, we give a larger margin between similar relations, forcing the model to distinguish among them, and thus making the model perform better. We apply our method to Position-aware Attention LSTM (PA-LSTM) BIBREF10 , and tab:relationextraction shows our method improves the performance of PA-LSTM. Training details are described in sec:trainingdetail. Related Works As many early works devoted to psychology and linguistics, especially those works exploring semantic similarity BIBREF11 , BIBREF12 , researchers have empirically found there are various different categorizations of semantic relations among words and contexts. For promoting research on these different semantic relations, bejar1991cognitive explicitly defining these relations and miller1995wordnet further systematically organize rich semantic relations between words via a database. For identifying correlation and distinction between different semantic relations so as to support learning semantic similarity, various methods have attempted to measure relational similarity BIBREF23 , BIBREF24 , BIBREF25 , BIBREF26 , BIBREF27 , BIBREF28 , BIBREF29 . With the ongoing development of information extraction and effective construction of KBs BIBREF0 , BIBREF1 , BIBREF30 , relations are further defined as various types of latent connections between objects more than semantic relations. These general relations play a core role in expressing relational facts in the real world. Hence, there are accordingly various methods proposed for discovering more relations and their facts, including open information extraction BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 , BIBREF36 , BIBREF37 and relation extraction BIBREF38 , BIBREF39 , BIBREF40 , BIBREF41 , BIBREF42 , BIBREF43 , and relation prediction BIBREF3 , BIBREF44 , BIBREF45 , BIBREF46 , BIBREF47 . For both semantic relations and general relations, identifying them is a crucial problem, requiring systems to provide a fine-grained relation similarity metric. However, the existing methods suffer from sparse data, which makes it difficult to achieve an effective and stable similarity metric. Motivated by this, we propose to measure relation similarity by leveraging their fact distribution so that we can identify nuances between similar relations, and merge those distant surface forms of the same relations, benefitting the tasks mentioned above. Conclusion and Future Work In this paper, we introduce an effective method to quantify the relation similarity and provide analysis and a survey of applications. We note that there are a wide range of future directions: (1) human prior knowledge could be incorporated into the similarity quantification; (2) similarity between relations could also be considered in multi-modal settings, e.g., extracting relations from images, videos, or even from audios; (3) by analyzing the distributions corresponding to different relations, one can also find some “meta-relations” between relations, such as hypernymy and hyponymy. Acknowledgements This work is supported by the National Natural Science Foundation of China (NSFC No. 61572273, 61532010), the National Key Research and Development Program of China (No. 2018YFB1004503). Chen and Zhu is supported by Tsinghua University Initiative Scientific Research Program, and Chen is also supported by DCST Student Academic Training Program. Han is also supported by 2018 Tencent Rhino-Bird Elite Training Program. 
Proofs to theorems in the paper If we have a proposal distribution INLINEFORM0 satisfying INLINEFORM1 , then eq:proofrecallfirstpart can be further written as DISPLAYFORM0 Sometimes, it's hard for us to compute normalized probability INLINEFORM0 . To tackle this problem, consider self-normalized importance sampling as an unbiased estimation BIBREF50 , DISPLAYFORM0 where INLINEFORM0 is the normalized version of INLINEFORM1 . Chinese Restaurant Process Specifically, for a relation INLINEFORM0 with currently INLINEFORM1 sub-relations, we turn it to a new sub-relation with probability DISPLAYFORM0 or to the INLINEFORM0 existing sub-relation with probability DISPLAYFORM0 where INLINEFORM0 is the size of INLINEFORM1 existing sub-relation, INLINEFORM2 is the sum of the number of all sub-relationships of INLINEFORM3 , and INLINEFORM4 is a hyperparameter, in which case we use INLINEFORM5 . Training Details In Wikidata and ReVerb Extractions dataset, we manually split a validation set, assuring every entity and relation appears in validation set also appears in training set. While minimizing loss on the training set, we observe the loss on the validation set and stop training as validation loss stops to decrease. Before training our model on any dataset, we use the entity embeddings and relation embeddings produced by TransE on the dataset as the pretrained embeddings for our model. Training Details on Negative Sampling The sampling is launched with an initial temperature of 8192. The temperature drops to half every 200 epochs and remains stable once it hits 16. Optimization is performed using SGD, with a learning rate of 1e-3. Training Details on Softmax-Margin Loss The sampling is launching with an initial temperature of 64. The temperature drops by 20% per epoch, and remains stable once it hits 16. The alpha we use is 9. Optimization is performed using SGD, with a learning rate of 1. Recall Standard Deviation As is shown in fig:recallstd, the max recall standard deviation for our model is 0.4, and 0.11 for TransE. Negative Samplilng with Relation Type Constraints In FB15K, if two relations have same prefix, we regard them as belonging to a same type, e.g., both /film/film/starring./film/performance/actor and /film/actor/film./film/performance/film have prefix film, they belong to same type. Similar to what is mentioned in sec:training-guidance-relation-prediction, we expect the model first to learn to distinguish among obviously different relations, and gradually learn to distinguish similar relations. Therefore, we conduct negative sampling with relation type constraints in two ways. Add Up Two Uniform Distribution For each triple INLINEFORM0 , we have two uniform distribution INLINEFORM1 and INLINEFORM2 . INLINEFORM3 is the uniform distribution over all the relations except for those appear with INLINEFORM4 in the knowledge base, and INLINEFORM5 is the uniform distribution over the relations of the same type as INLINEFORM6 . When corrupting the triple, we sample INLINEFORM7 from the distribution: DISPLAYFORM0 where INLINEFORM0 is a hyperparameter. We set INLINEFORM1 to 1 at the beginning of training, and every INLINEFORM2 epochs, INLINEFORM3 will be multiplied by decrease rate INLINEFORM4 . We do grid search for INLINEFORM5 and INLINEFORM6 , but no improvement is observed. 
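For concreteness, here is a minimal sketch of the Chinese restaurant process used above to split one relation into sub-relations. The concentration parameter alpha and the seed are assumptions, since the paper's chosen value is elided in this rendering; only the seating probabilities come from the stated formulas.

```python
import random

def crp_split(n_facts, alpha=1.0, seed=0):
    """Chinese restaurant process: seat the facts of one relation into sub-relations.
    A new fact opens a new sub-relation with probability alpha / (n + alpha) and joins an
    existing sub-relation k with probability n_k / (n + alpha)."""
    rng = random.Random(seed)
    assignments, sizes = [], []
    for n in range(n_facts):                         # n facts already seated
        if rng.random() < alpha / (n + alpha):
            sizes.append(1)                          # open a new sub-relation
            assignments.append(len(sizes) - 1)
        else:
            k = rng.choices(range(len(sizes)), weights=sizes)[0]
            sizes[k] += 1
            assignments.append(k)
    return assignments                               # sub-relation index for each fact
```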
Add Weight We speculate that the unsatisfactory result produced by adding up two uniform distributions arises because, for those types with few relations in them, a small change of INLINEFORM0 will result in a significant change in INLINEFORM1 . Therefore, when sampling a negative INLINEFORM2 , we instead add weights to relations that are of the same type as INLINEFORM3 . Concretely, we substitute INLINEFORM4 with INLINEFORM5 with probability INLINEFORM6 , which can be calculated as: DISPLAYFORM0 where INLINEFORM0 denotes all the relations that are of the same type as INLINEFORM1 , INLINEFORM2 is a hyperparameter, and INLINEFORM3 is a normalizing constant. We set INLINEFORM4 to 0 at the beginning of training, and every INLINEFORM5 epochs, INLINEFORM6 increases by INLINEFORM7 . We do a grid search for INLINEFORM8 and INLINEFORM9 , but still no improvement is observed. Wikidata annotation guidance We show the guidance provided for the annotators here.
monte-carlo, sequential sampling
30b5e5293001f65d2fb9e4d1fdf4dc230e8cf320
30b5e5293001f65d2fb9e4d1fdf4dc230e8cf320_0
Q: What text classification task is considered? Text: Introduction Tremendous advances in natural language processing (NLP) have been enabled by novel deep neural network architectures and word embeddings. Historically, convolutional neural network (CNN) BIBREF0 , BIBREF1 and recurrent neural network (RNN) BIBREF2 , BIBREF3 topologies have competed to provide state-of-the-art results for NLP tasks, ranging from text classification to reading comprehension. CNNs identify and aggregate patterns with increasing feature sizes, reflecting our common practice of identifying patterns, literal or idiomatic, for understanding language; they are thus adept at tasks involving key phrase identification. RNNs instead construct a representation of sentences by successively updating their understanding of the sentence as they read new words, appealing to the formally sequential and rule-based construction of language. While both networks display great efficacy at certain tasks BIBREF4 , RNNs tend to be the more versatile, have emerged as the clear victor in, e.g., language translation BIBREF5 , BIBREF6 , BIBREF7 , and are typically more capable of identifying important contextual points through attention mechanisms for, e.g., reading comprehension BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . With an interest in NLP, we thus turn to RNNs. RNNs nominally aim to solve a general problem involving sequential inputs. For various more specified tasks, specialized and constrained implementations tend to perform better BIBREF12 , BIBREF13 , BIBREF14 , BIBREF7 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF10 , BIBREF11 , BIBREF8 , BIBREF9 . Often, the improvement simply mitigates the exploding/vanishing gradient problem BIBREF18 , BIBREF19 , but, for many tasks, the improvement is more capable of generalizing the network's training for that task. Understanding better how and why certain networks excel at certain NLP tasks can lead to more performant networks, and networks that solve new problems. Advances in word embeddings have furnished the remainder of recent progress in NLP BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 . Although it is possible to train word embeddings end-to-end with the rest of a network, this is often either prohibitive due to exploding/vanishing gradients for long corpora, or results in poor embeddings for rare words BIBREF26 . Embeddings are thus typically constructed using powerful, but heuristically motivated, procedures to provide pre-trained vectors on top of which a network can be trained. As with the RNNs themselves, understanding better how and why optimal embeddings are constructed in, e.g., end-to-end training can provide the necessary insight to forge better embedding algorithms that can be deployed pre-network training. Beyond improving technologies and ensuring deep learning advances at a breakneck pace, gaining a better understanding of how these systems function is crucial for allaying public concerns surrounding the often inscrutable nature of deep neural networks. This is particularly important for RNNs, since nothing comparable to DeepDream or Lucid exists for them BIBREF27 . To these ends, the goal of this work is two fold. First, we wish to understand any emergent algebraic structure RNNs and word embeddings, trained end-to-end, may exhibit. Many algebraic structures are well understood, so any hints of structure would provide us with new perspectives from which and tools with which deep learning can be approached. 
Second, we wish to propose novel networks and word embedding schemes by appealing to any emergent structure, should it appear. The paper is structured as follows. Methods and experimental results comprise the bulk of the paper, so, for faster reference, § SECREF2 provides a convenient summary and intrepretation of the results, and outlines a new class of neural network and new word embedding scheme leveraging the results. § SECREF3 motivates the investigation into algebraic structures and explains the experimental setup. § SECREF4 Discusses the findings from each of the experiments. § SECREF5 interprets the results, and motivates the proposed network class and word embeddings. § SECREF6 provides closing remarks and discusses followup work, and § SECREF7 gives acknowledgments. To make a matter of notation clear going forward, we begin by referring to the space of words as INLINEFORM0 , and transition to INLINEFORM1 after analyzing the results in order to be consistent with notation in the literature on algebraic spaces. Summary of results We embedded words as vectors and used a uni-directional GRU connected to a dense layer to classify the account from which tweets may have originated. The embeddings and simple network were trained end-to-end to avoid imposing any artificial or heuristic constraints on the system. There are two primary takeaways from the work presented herein: The first point follows since 1) words are embedded in a continuous space; 2) an identity word exists that causes the RNN to act trivially on a hidden state; 3) word inverses exist that cause the RNN to undo its action on a hidden state; 4) the successive action of the RNN using two words is equivalent to the action of the RNN with a single third word, implying the multiplicative closure of words; and 5) words are not manifestly closed under any other binary action. The second point follows given that words embed on a manifold, sentences traces out paths on the manifold, and the difference equation the RNN solves bears a striking resemble to the first order equation for parallel transport, DISPLAYFORM0 where INLINEFORM0 is the INLINEFORM1 -th hidden state encountered when reading over a sentence and INLINEFORM2 is the RNN conditioned by the INLINEFORM3 -th word, INLINEFORM4 , acting on the hidden state. Since sentences trace out a path on the word manifold, and parallel transport operators for representations of the word manifold take values in the group, the RNN must parallel transport hidden states either on the group itself or on a base space, INLINEFORM5 , equipped with some word field, INLINEFORM6 , that connects the path in the base space to the path on the word manifold. Leveraging these results, we propose two new technologies. First, we propose a class of recurrent-like neural networks for NLP tasks that satisfy the differential equation DISPLAYFORM0 where DISPLAYFORM0 and where INLINEFORM0 and INLINEFORM1 are learned functions. INLINEFORM2 corresponds to traditional RNNs, with INLINEFORM3 . For INLINEFORM4 , this takes the form of RNN cells with either nested internal memories or dependencies that extend temporally beyond the immediately previous hidden state. In particular, using INLINEFORM5 for sentence generation is the topic of a manuscript presently in preparation. Second, we propose embedding schemes that explicitly embed words as elements of a Lie group. 
In practice, these embedding schemes would involve representing words as constrained matrices, and optimizing the elements, subject to the constraints, according to a loss function constructed from invariants of the matrices, and then applying the matrix log to obtain Lie vectors. A prototypical implementation, in which the words are assumed to be in the fundamental representation of the special orthogonal group, INLINEFORM0 , and are conditioned on losses sensitive to the relative actions of words, is the subject of another manuscript presently in preparation. The proposals are only briefly discussed herein, as they are the focus of followup work; the focus of the present work is on the experimental evidence for the emergent algebraic structure of RNNs and embeddings in NLP. Intuition and motivation We provide two points to motivate examining the potential algebraic properties of RNNs and their space of inputs in the context of NLP. First, a RNN provides a function, INLINEFORM0 , that successively updates a hidden memory vector, INLINEFORM1 , characterizing the information contained in a sequence of input vectors, INLINEFORM2 , as it reads over elements of the sequence. Explicitly, INLINEFORM3 . At face value, INLINEFORM4 takes the same form as a (nonlinear) representation of some general algebraic structure, INLINEFORM5 , with at least a binary action, INLINEFORM6 , on the vector space INLINEFORM7 . While demanding much structure on INLINEFORM8 generally places a strong constraint on the network's behavior, it would be fortuitous for such structure to emerge. Generally, constrained systems still capable of performing a required task will perform the task better, or, at least, generalize more reliably BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 . To this end, the suggestive form RNNs assume invites further examination to determine if there exist any reasonable constraints that may be placed on the network. To highlight the suggestiveness of this form in what follows, we represent the INLINEFORM9 argument of INLINEFORM10 as a subscript and the INLINEFORM11 argument by treating INLINEFORM12 as a left action on INLINEFORM13 , adopting the notation INLINEFORM14 . Since, in this paper, we consider RNNs vis-à-vis NLP, we take INLINEFORM15 as the (continuous) set of words. Second, in the massive exploration of hyperparameters presented in BIBREF5 , it was noted that, for a given word embedding dimension, the network's performance on a seq2seq task was largely insensitive to the hidden dimension of the RNN above a threshold ( INLINEFORM0 128). The dimension of admissible representations of a given algebraic structure is generally discrete and spaced out. Interpreting neurons as basis functions and the output of layers as elements of the span of the functions BIBREF34 , BIBREF35 , BIBREF36 , we would expect a network's performance to improve until an admissible dimension for the representation is found, after which the addition of hidden neurons would simply contribute to better learning the components of the proper representation by appearing in linear combinations with other neurons, and contribute minimally to improving the overall performance. In their hyperparameter search, a marginal improvement was found at a hidden dimension of 2024, suggesting a potentially better representation may have been found. 
These motivating factors may hint at an underlying algebraic structure in language, at least when using RNNs, but they raise the question: what structures are worth investigating? Groups present themselves as a candidate for consideration since they naturally appear in a variety of applications. Unitary weight matrices have already enjoyed much success in mitigating the exploding/vanishing gradients problem BIBREF13 , BIBREF14 , and RNNs even further constrained to act explicitly as nonlinear representations of unitary groups offer competitive results BIBREF15 . Moreover, intuitively, RNNs in NLP could plausibly behave as a group since: 1) the RNN must learn to ignore padding words used to square batches of training data, indicating an identity element of INLINEFORM0 must exist; 2) the existence of contractions, portmanteaus, and the Germanic tradition of representing sentences as singular words suggest INLINEFORM1 might be closed; and 3) the ability to backtrack and undo statements suggests language may admit natural inverses - that is, active, controlled “forgetting" in language may be tied to inversion. Indeed, groups seem reasonably promising. It is also possible portmanteaus only make sense for a finite subset of pairs of words, so INLINEFORM0 may take on the structure of a groupoid instead; moreover, it is possible, at least in classification tasks, that information is lost through successive applications of INLINEFORM1 , suggesting an inverse may not actually exist, leaving INLINEFORM2 as either a monoid or category. INLINEFORM3 may also actually admit additional structure, or an additional binary operation, rendering it a ring or algebra. To determine what, if any, algebraic structure INLINEFORM0 possesses, we tested if the following axiomatic properties of faithful representations of INLINEFORM1 hold: (Identity) INLINEFORM0 such that INLINEFORM1 , INLINEFORM2 (Closure under multiplication) INLINEFORM0 , INLINEFORM1 such that INLINEFORM2 , INLINEFORM3 (Inverse) INLINEFORM0 , INLINEFORM1 such that INLINEFORM2 , INLINEFORM3 (Closure under Lie bracket) INLINEFORM0 , INLINEFORM1 such that INLINEFORM2 , INLINEFORM3 Closure under Lie bracket simultaneously checks for ring and Lie algebra structures. Whatever structure, if any, INLINEFORM0 possesses, it must additionally be continuous since words are typically embedded in continuous spaces. This implies Lie groups (manifolds), Lie semigroups with an identity (also manifolds), and Lie algebras (vector spaces with a Lie bracket) are all plausible algebraic candidates. Data and methods We trained word embeddings and a uni-directional GRU connected to a dense layer end-to-end for text classification on a set of scraped tweets using cross-entropy as the loss function. End-to-end training was selected to impose as few heuristic constraints on the system as possible. Each tweet was tokenized using NLTK TweetTokenizer and classified as one of 10 potential accounts from which it may have originated. The accounts were chosen based on the distinct topics each is known to typically tweet about. Tokens that occurred fewer than 5 times were disregarded in the model. The model was trained on 22106 tweets over 10 epochs, while 5526 were reserved for validation and testing sets (2763 each). The network demonstrated an insensitivity to the initialization of the hidden state, so, for algebraic considerations, INLINEFORM0 was chosen for hidden dimension of INLINEFORM1 . A graph of the network is shown in Fig.( FIGREF13 ). 
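A minimal PyTorch sketch of the classifier just described: trainable word embeddings feeding a uni-directional GRU, whose final hidden state is passed to a dense layer over the 10 candidate accounts. The default embedding and hidden dimensions are placeholders rather than the paper's values, since these are precisely the hyperparameters scanned below.

```python
import torch
import torch.nn as nn

class TweetClassifier(nn.Module):
    """Word embeddings + uni-directional GRU + dense output layer, trained end-to-end."""

    def __init__(self, vocab_size, embed_dim=200, hidden_dim=200, n_accounts=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)       # trained end-to-end, no pretraining
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.dense = nn.Linear(hidden_dim, n_accounts)

    def forward(self, token_ids, h0=None):
        x = self.embed(token_ids)                              # (batch, seq_len, embed_dim)
        _, h_n = self.gru(x, h0)                               # final hidden state
        return self.dense(h_n.squeeze(0))                      # logits over the 10 accounts

# Trained with cross-entropy; for the algebraic tests the GRU and embeddings are then
# frozen, and a fixed non-zero initial hidden state h0 is used.
```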
Algebraic structures typically exhibit some relationship between the dimension of the structure and the dimension of admissible representations, so exploring the embedding and hidden dimensions for which certain algebraic properties hold is of interest. Additionally, beyond the present interest in algebraic properties, the network's insensitivity to the hidden dimension invites an investigation into its sensitivity to the word embedding dimension. To address both points of interest, we extend the hyperparameter search of BIBREF5 , and perform a comparative search over embedding dimensions and hidden dimensions to determine the impact of each on the network's performance and algebraic properties. Each dimension in the hyperparameter pair, INLINEFORM0 , runs from 20 to 280 by increments of 20. After training the network for each hyperparameter pair, the GRU model parameters and embedding matrix were frozen to begin testing for emergent algebraic structure. To satisfy the common “ INLINEFORM0 " requirement stated in § SECREF6 , real hidden states encountered in the testing data were saved to be randomly sampled when testing the actions of the GRU on states. 7 tests were conducted for each hyperparameter pair with randomly selected states: Identity (“arbitrary identity") Inverse of all words in corpus (“arbitrary inverse") Closure under multiplication of arbitrary pairs of words in total corpus (“arbitrary closure") Closure under commutation of arbitrary pairs of words in total corpus (“arbitrary commutativity") Closure under multiplication of random pairs of words from within each tweet (“intra-sentence closure") Closure of composition of long sequences of words in each tweet (“composite closure") Inverse of composition of long sequences of words in each tweet (“composite inverse") Tests 6 and 7 were performed since, if closure is upheld, the composition of multiple words must also be upheld. These tests were done to ensure mathematical consistency. To test for the existence of “words" that satisfy these conditions, vectors were searched for that, when inserted into the GRU, minimized the ratio of the Euclidean norms of the difference between the “searched" hidden vector and the correct hidden vector. For concreteness, the loss function for each algebraic property from § SECREF6 were defined as follows: (Identity) DISPLAYFORM0 (Closure under multiplication) DISPLAYFORM0 (Inverse) DISPLAYFORM0 (Closure under Lie bracket) DISPLAYFORM0 where INLINEFORM0 are random, learned word vectors, INLINEFORM1 is a hidden state, and INLINEFORM2 is the model parameter trained to minimize the loss. We refer to Eqs.( SECREF12 ) as the “axiomatic losses." It is worth noting that the non-zero hidden state initialization was chosen to prevent the denominators from vanishing when the initial state is selected as a candidate INLINEFORM3 in Eqs.( EQREF22 )&( EQREF26 ). The reported losses below are the average across all INLINEFORM4 's and INLINEFORM5 's that were examined. Optimization over the losses in Eqs.( SECREF12 ) was performed over 5000 epochs. For the associated condition to be satisfied, there must exist a word vector INLINEFORM6 that sufficiently minimizes the axiomatic losses. If it is indeed the case that the GRU attempts to learn a representation of an algebraic structure and each neuron serves as a basis function, it is not necessary that each neuron individually satisfies the above constraints. 
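As a concrete illustration of the per-word search just described, before turning to linear combinations of neurons, here is a minimal PyTorch sketch for the identity and inverse cases. The optimizer, learning rate, and zero initialization of the candidate vector are assumptions; only the ratio-of-norms form of the loss and the 5000-epoch budget come from the text.

```python
import torch

def search_axiomatic_word(step, embed_dim, hidden_states, kind="identity", w=None,
                          epochs=5000, lr=1e-2):
    """Gradient search for a single word vector e minimizing an axiomatic loss, e.g.
      identity:  || step(e, h) - h || / || h ||
      inverse :  || step(e, step(w, h)) - h || / || h ||   for a fixed word vector w.
    `step(e, h)` applies the frozen GRU cell conditioned on word vector e to state h."""
    e = torch.zeros(embed_dim, requires_grad=True)
    opt = torch.optim.Adam([e], lr=lr)              # optimizer and learning rate are assumptions
    for _ in range(epochs):
        opt.zero_grad()
        terms = []
        for h in hidden_states:                     # real hidden states saved from the test data
            out = step(e, h) if kind == "identity" else step(e, step(w, h))
            terms.append(torch.norm(out - h) / torch.norm(h))
        loss = torch.stack(terms).mean()
        loss.backward()
        opt.step()
    return e.detach(), float(loss)

# Usage with a frozen nn.GRUCell `cell`:
#   step = lambda e, h: cell(e.unsqueeze(0), h.unsqueeze(0)).squeeze(0)
# A condition is declared satisfied when the final loss falls below the chosen threshold.
```

The linear-combination variant discussed next simply inserts a trainable projection of the GRU outputs into the latent dimension before the same losses are computed.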
For clarity, recall the second motivating point that the addition of neurons, once a representation is found, simply contributes to learning the representation better. Instead, only a linear combination of the neurons must. We consider this possibility for the most task-performant hyperparameter pair, and two other capricious pairs. The target dimension of the linear combination, INLINEFORM0 , which we refer to as the “latent dimension," could generally be smaller than the hidden dimension, INLINEFORM1 . To compute the linear combination of the neurons, the outputs of the GRU were right-multiplied by a INLINEFORM2 matrix, INLINEFORM3 : DISPLAYFORM0 Since the linear combination is not à priori known, INLINEFORM0 is treated as a model parameter. The minimization task previously described was repeated with this combinatorial modification while scanning over latent dimensions, INLINEFORM0 , in steps of 20. The test was performed 10 times and the reported results averaged for each value of INLINEFORM1 to reduce fluctuations in the loss from differing local minima. INLINEFORM2 was trained to optimize various combinations of the algebraic axioms, the results of which were largely found to be redundant. In § SECREF4 , we address the case in which INLINEFORM3 was only trained to assist in optimizing a single condition, and frozen in other axiomatic tests; the commutative closure condition, however, was given a separate linear combination matrix for reasons that will be discussed later. Finally, the geometric structure of the resulting word vectors was explored, naively using the Euclidean metric. Sentences trace out (discrete) paths in the word embedding space, so it was natural to consider relationships between both word vectors and vectors “tangent" to the sentences' paths. Explicitly, the angles and distances between random pairs of words all words and the global average word vector random pairs of co-occurring words all words with a co-occurring word vector average adjacent tangent vectors tangent vectors with a co-occurring tangent vector average were computed to determine how word vectors are geometrically distributed. Intuitively, similar words are expected to affect hidden states similarly. To test this, and to gain insight into possible algebraic interpretations of word embeddings, the ratio of the Euclidean norm of the difference between hidden states produced by acting on a hidden state with two different words to the Euclidean norm of the original hidden state was computed as a function of the popular cosine similarity metric and distance between embeddings. This fractional difference, cosine similarity, and word distance were computed as, DISPLAYFORM0 where Einstein summation is applied to the (contravariant) vector indices. High-level descriptions of the methods will be briefly revisited in each subsection of § SECREF4 so that they are more self-contained and pedagogical. Hyperparameters and model accuracy We performed hyperparameter tuning over the word embedding dimension and the GRU hidden dimension to optimize the classifier's accuracy. Each dimension ran from 20 to 280 in increments of 20. A contour plot of the hyperparameter search is shown in Fig.( FIGREF39 ). For comparison, using pretrained, 50 dimensional GloVe vectors with this network architecture typically yielded accuracies on the order of INLINEFORM0 on this data set, even for more performant hidden dimensions. Thus, training the embeddings end-to-end is clearly advantageous for short text classification. 
It is worth noting that training them end-to-end is viable primarily because of the short length of tweets; for longer documents, exploding/vanishing gradients typically prohibits such training. The average Fisher information of each hyperparameter dimension over the searched region was computed to determine the relative sensitivities of the model to the hyperparameters. The Fisher information for the hidden dimension was INLINEFORM0 ; the Fisher information for the embedding dimension was INLINEFORM1 . Evidently, by this metric, the model was, on average in this region of parameter space, 1.76 times more sensitive to the hidden dimension than the embedding dimension. Nevertheless, a larger word embedding dimension was critical for the network to realize its full potential. The model performance generally behaved as expected across the hyperparameter search. Indeed, higher embedding and hidden dimensions tended to yield better results. Given time and resource constraints, the results are not averaged over many search attempts. Consequently, it is unclear if the pockets of entropy are indicative of anything deeper, or merely incidental fluctuations. It would be worthwhile to revisit this search in future work. Algebraic properties Seven tests were conducted for each hyperparameter pair to explore any emergent algebraic structure the GRU and word embeddings may exhibit. Specifically, the tests searched for 1) the existence of an identity element, 2) existence of an inverse word for each word, 3) multiplicative closure for arbitrary pairs of words, 4) commutative closure for arbitrary pairs of words, 5) multiplicative closure of pairs of words that co-occur within a tweet, 6) multiplicative closure of all sequences of words that appear in tweets, and 7) the existence of an inverse for all sequences of words that appear in tweets. The tests optimized the axiomatic losses defined in Eqs.( SECREF12 ). In what follows, we have chosen INLINEFORM0 (or, INLINEFORM1 error) as the criterion by which we declare a condition “satisfied." The tests can be broken roughly into two classes: 1) arbitrary solitary words and pairs of words, and 2) pairs and sequences of words co-occurring within a tweet. The results for class 1 are shown in Fig.( FIGREF41 ); the results for class 2 are shown in Fig.( FIGREF42 ). The identity condition was clearly satisfied for virtually all embedding and hidden dimensions, with possible exceptions for small embedding dimensions and large hidden dimensions. Although we did not explicitly check, it is likely that even the possible exceptions would be viable in the linear combination search. Arbitrary pairs of words were evidently not closed under multiplication without performing a linear combination search, with a minimum error of INLINEFORM0 across all dimensions. Moreover, the large entropy across the search does not suggest any fundamentally interesting or notable behavior, or any connections between the embedding dimension, hidden dimension, and closure property. Arbitrary pairs of words were very badly not closed under commutation, and it is unfathomable that even a linear combination search could rescue the property. One might consider the possibility that specific pairs of words might have still closed under commutation, and that the exceptional error was due to a handful of words that commute outright since this would push the loss up with a near-vanishing denominator. 
As previously stated, the hidden states were not initialized to be zero states, and separate experiments confirm that the zero state was not in the orbit of any non-zero state, so there would have been no hope to negate the vanishing denominator. Thus, this concern is in principle possible. However, explicitly removing examples with exploding denominators (norm INLINEFORM0 ) from the loss when performing linear combination searches still resulted in unacceptable errors ( INLINEFORM1 ), so this possibility is not actually realized. We did not explicitly check for this closure in class 2 tests since class 2 is a subset of class 1, and such a flagrant violation of the condition would not be possible if successful closure in class 2 were averaged into class 1 results. Even though commutative closure is not satisfied, it is curious to note that the error exhibited a mostly well-behaved stratification. The most interesting class 1 result was the arbitrary inverse. For embedding dimensions sufficiently large compared to the hidden dimension, inverses clearly existed even without a linear combination search. Even more remarkable was the well-behaved stratification of the axiomatic error, implying a very clear relationship between the embedding dimension, hidden dimension, and emergent algebraic structure of the model. It is not unreasonable to expect the inverse condition to be trivially satisfied in a linear combination search for a broad range of hyperparameter pairs. The same behavior of the inverse property is immediately apparent in all class 2 results. The stratification of the error was virtually identical, and all of the tested properties have acceptable errors for sufficiently large embedding dimensions for given hidden dimensions, even without a linear combination search. Linear combination search The optimal hyperparameter pair for this single pass of tuning was INLINEFORM0 , which resulted in a model accuracy of INLINEFORM1 . This was not a statistically significant result since multiple searches were not averaged, so random variations in validation sets and optimization running to differing local minima may have lead to fluctuations in the test accuracies. However, the selection provided a reasonable injection point to investigate the algebraic properties of linear combinations of the output of the GRU's neurons. For comparison, we also considered INLINEFORM2 and INLINEFORM3 . The tests were run with the linear combination matrix, INLINEFORM0 , trained to assist in optimizing the composite inverse. The learned INLINEFORM1 was then applied to the output hidden states for the other properties except for commutative closure, which was given its own linear combination matrix to determine if any existed that would render it an emergent property. The combination was trained to optimize a single condition because, if there exists an optimal linear combination for one condition, and there indeed exists an underlying algebraic structure incorporating other conditions, the linear combination would be optimal for all other conditions. Initial results for the INLINEFORM0 search is shown in Figs.( FIGREF45 )&( FIGREF46 ). Well-optimized properties are shown in Fig.( FIGREF45 ), while the expected poorly-optimized properties are shown in Fig.( FIGREF46 ). The four conditions examined in Fig.( FIGREF45 ) are clearly satisfied for all latent dimensions. They all also reach a minimum error in the same region. 
Composite closure, intra-sentence closure, and arbitrary inverse are all optimized for INLINEFORM0 ; composite inverse is optimized for INLINEFORM1 , though the variation in the range INLINEFORM2 is small ( INLINEFORM3 variation around the mean, or an absolute variation of INLINEFORM4 in the error). Arbitrary multiplicative closure and commutative closure are highly anti-correlated, and both conditions are badly violated. It is worth noting that the results in Fig.( FIGREF46 )(b) did not remove commutative pairs of words from the error, and yet the scale of the error in the linear combination search is virtually identical to what was separately observed with the commutative pairs removed. They both also exhibit a monotonic dependence on the latent dimension. Despite their violation, this dependence is well-behaved, and potentially indicative of some other structure. Before discussing the linear combination searches for the other selected hyperparameter pairs, it is worthwhile noting that retraining the network and performing the linear combination search again can yield differing results. Figs.( FIGREF47 )&( FIGREF48 ) show the linear combination results after retraining the model for the same hyperparameter pair, with a different network performance of INLINEFORM0 . Qualitatively, the results are mostly the same: there is a common minimizing region of INLINEFORM0 , and conditions are satisfied, at least in the common minimal region. However, the minimizing region starkly shifted down, and became sharper for composite closure, intra-sentence closure, and arbitrary inverse. Once more, the results are mostly the same. Arbitrary closure error drastically increased, but both are still highly anti-correlated, and mostly monotonic, despite the erratic fluctuations in the arbitrary closure error. Figs.( FIGREF49 )&( FIGREF50 ) show the linear combination search for INLINEFORM0 . The model was retrained, and achieved INLINEFORM1 for the displayed results. Interestingly, the optimal latent dimension occurs significantly higher than for the other reported hyperparameter pairs. This result, however, is not true for all retrainings at this INLINEFORM0 pair. The entropy in the arbitrary closure loss increased, and the commutative closure loss seemed to asymptote at higher latent dimension. Figs.( FIGREF51 )&( FIGREF52 ) show the linear combination search for INLINEFORM0 . The model was retrained, and achieved INLINEFORM1 for the displayed results. At lower dimensions, the optimal latent dimension was no longer shared between the satisfied conditions. The unsatisfied conditions displayed mostly the same behavior at lower dimensions. Embedding structure To explore the geometric distribution of word vectors, the angles and distances between 1) random pairs of words, 2) all words and the global average word vector, 3) random pairs of co-occurring words, 4) all words with a co-occurring word vector average, 5) adjacent tangent vectors, 6) tangent vectors with a co-occurring tangent vector average were computed. The magnitudes of the average word vectors, average co-occurring word vectors, and average tangent vectors were also computed. Additionally, the relative effect of words on states is computed verses their cosine similarities and relative distances, measured by Eqs.( EQREF37 )-(). 
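A minimal NumPy sketch of the quantities referred to in Eqs.( EQREF37 )-(): cosine similarity and distance between word vectors, tangent vectors along a tweet's path, and the fractional difference between the actions of two words on a hidden state. The names are illustrative, and `step(w, h)` stands for the frozen GRU applied to state h conditioned on word vector w.

```python
import numpy as np

def cosine_similarity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def embedding_distance(u, v):
    return float(np.linalg.norm(u - v))

def tangent_vectors(sentence_vectors):
    """Difference vectors between adjacent word embeddings along a tweet's path."""
    return [w2 - w1 for w1, w2 in zip(sentence_vectors, sentence_vectors[1:])]

def fractional_difference(step, w1, w2, h):
    """Relative effect of two words on the same hidden state,
    || step(w1, h) - step(w2, h) || / || h ||, compared against the cosine similarity
    and Euclidean distance of the two word vectors."""
    return float(np.linalg.norm(step(w1, h) - step(w2, h)) / np.linalg.norm(h))
```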
In the figures that follow, there are, generally, three categories of word vectors explored: 1) random word vectors from the pool of all word vectors, 2) co-occurring word vectors, and 3) tangent vectors (the difference vector between adjacent words). Fig.( FIGREF54 ) shows the distribution of the Euclidean norms of the average vectors that were investigated. The tangent vectors and average word vectors had comparable norms. The non-zero value of the average word vector indicates that words do not perfectly distribute throughout space. The non-zero value of the average tangent vectors indicates that tweets in general progress in a preferred direction relative to the origin in embedding space; albeit, since the magnitudes are the smallest of the categories investigated, the preference is only slight. The norm of the average of co-occurring word vectors is significantly larger than the norms of the other categories of vectors, indicating that the words in tweets typically occupy a more strongly preferred region of embedding space (e.g. in a cone, thus preventing component-wise cancellations when computing the average). Fig.( FIGREF55 ) shows the distribution of the Euclidean cosine similarities of both pairs of vectors and vectors relative to the categorical averages. The cosine similarities of pairs of random words and of co-occurring words shared a very similar distribution, albeit with notable spikes at specific angles and a prominent spike at INLINEFORM0 for co-occurring pairs. The prominent spike could potentially be explained by the re-occurrence of punctuation within tweets, so it may not indicate anything of importance; the potential origin of the smaller spikes throughout the co-occurring distribution is unclear. Generally, the pairs strongly preferred to be orthogonal, which is unsurprising given recent investigations into the efficacy of orthogonal embeddings BIBREF37 . Adjacent pairs of tangent vectors, however, exhibited a very strong preference for obtuse relative angles, with a spike at INLINEFORM1 . Words tended to have at most a very slightly positive cosine similarity to the global average, which is again indicative of the fact that words did not spread out uniformly. Co-occurring words tended to form acute angles with respect to the co-occurring average. Meanwhile, tangent vectors strongly preferred to be orthogonal to the average. The strong negative cosine similarity of adjacent tangent vectors, and the strong positive cosine similarity of words with their co-occurring average, indicate that co-occurring words tended to form a grid structure in a cone. That is, adjacent words tended to be perpendicular to each other in the positive span of some set of word basis vectors. Of course, this was not strictly adhered to, but the preferred geometry is apparent. Fig.( FIGREF56 ) shows the distribution of the Euclidean distances of both pairs of vectors and vectors relative to the categorical averages. Distributions of random pairs of words and co-occurring words were virtually identical in both plots, indicating that most of the variation is attributable to the relative orientations of the vectors rather than the distances between them. Fig.( FIGREF57 ) shows the correlation of the similarity of the action of pairs of words to their cosine similarity and distances apart. Both plots confirm that the more similar words are, the more similar their actions on the hidden states are.
The strongly linear, bi-modal dependence of the fractional difference on the distance between words indicates that word distance is a stronger predictor of the relative meaning of words than the popular cosine similarity. Interpretation of results The important take-aways from the results are: The GRU trivially learned an identity `word'. The action of the GRU for any individual word admits an inverse for sufficiently large embedding dimension relative to the hidden dimension. The successive action of the GRU for any arbitrary pair of words is not, generally, equivalent to the action of the GRU for any equivalent third `word'. The commutation of successive actions of the GRU for any arbitrary pair of words is not equivalent to the action of the GRU for any equivalent third `word'. The successive action of the GRU for any co-occurring pair of words is equivalent to the action of the GRU for an equivalent third `word' for sufficiently large embedding dimension relative to the hidden dimension. The successive action of the GRU for any series of co-occuring words is equivalent to the action of the GRU for an equivalent `word' for sufficiently large embedding dimension relative to the hidden dimension. The action of the GRU for any series of co-occurring words admits an inverse for sufficiently large embedding dimension relative to the hidden dimension. Any condition satisfied for a sufficiently large embedding dimension relative to the hidden dimension is true for any pair of dimensions given an appropriate linear combination of the outputs of the GRU projected into an appropriate lower dimension (latent dimension). The axiomatic errors for all satisfied conditions for the most performant models are minimized for specific, shared latent dimensions, and increases away from these latent dimensions; the optimal latent dimension is not shared for sufficiently small embedding dimensions. Models with lower test performance tend to optimally satisfy these conditions for lower latent dimensions. Co-occurring word vectors tend to be perpendicular to each other and occupy a cone in embedding space. The difference of the action of two word vectors on a hidden state increases linearly with the distance between the two words, and follows a generally bi-modal trend. Although there are still several outstanding points to consider, we offer an attempt to interpret these results in this section. Identity, inverse, and closure properties for co-occurring words are satisfied, and in such a way that they are all related under some algebraic structure. Since closure is not satisfied for arbitrary pairs of words, there are, essentially, two possible explanations for the observed structure: The union of all sets of co-occurring words is the Cartesian product of multiple Lie groups: DISPLAYFORM0 where INLINEFORM0 is the space of words, and INLINEFORM1 is a Lie group. Since multiplication between groups is not defined, the closure of arbitrary pairs of words is unsatisfied. The GRU's inability to properly close pairs of words it has never encountered together is the result of the generalization problem, and all words consequently embed in a larger Lie group: DISPLAYFORM0 In either case, words can be considered elements of a Lie group. Since Lie groups are also manifolds, the word vector components can be interpreted as coordinates on this Lie group. Traditionally, Lie groups are practically handled by considering the Lie algebra that generates them, INLINEFORM0 . 
The components of the Lie vectors in INLINEFORM1 are then typically taken to be the coordinates on the Lie group. This hints at a connection between INLINEFORM2 and the word vectors, but this connection was not made clear by any of the experiments. Furthermore, RNNs learn a nonlinear representation of the group on some latent space spanned by the hidden layer. Since sentences form paths on the embedding group, it's reasonable to attempt to form a more precise interpretation of the action of RNNs. We begin by considering their explicit action on hidden states as the path is traversed: DISPLAYFORM0 Eq.() takes the form of a difference equation. In particular, it looks very similar to the finite form of the differential equation governing the nonlinear parallel transport along a path, INLINEFORM0 , on a principal fibre bundle with base space INLINEFORM1 and group INLINEFORM2 . If the tangent vector at INLINEFORM3 is INLINEFORM4 , and the vector being transported at INLINEFORM5 is INLINEFORM6 then we have DISPLAYFORM0 where INLINEFORM0 is the (nonlinear) connection at INLINEFORM1 . If INLINEFORM2 were explicitly a function of INLINEFORM3 , Eq.( EQREF76 ) would take a more familiar form: DISPLAYFORM0 Given the striking resemblance between Eqs.( EQREF77 )&(), is it natural to consider either The word embedding group serving as the base space, INLINEFORM0 , so that the path INLINEFORM1 corresponds explicitly to the sentence path. A word field on the base space, INLINEFORM0 , so that there exists a mapping between INLINEFORM1 and the sentence path. The second option is more general, but requires both a candidate for INLINEFORM0 and a compelling way to connect INLINEFORM1 and INLINEFORM2 . This is also more challenging, since, generally, parallel transport operators, while taking values in the group, are not closed. If the path were on INLINEFORM3 itself, closure would be guaranteed, since any parallel transport operator would be an element of the co-occurring subgroup, and closure arises from an equivalence class of paths. To recapitulate the final interpretations of word embeddings and RNNs in NLP: Words naturally embed as elements in a Lie group, INLINEFORM0 , and end-to-end word vectors may be related to the generating Lie algebra. RNNs learn to parallel transport nonlinear representations of INLINEFORM0 either on the Lie group itself, or on a principal INLINEFORM1 -bundle. Proposal for class of recurrent-like networks The geometric derivative along a path parameterized by INLINEFORM0 is defined as: DISPLAYFORM0 where INLINEFORM0 is the tangent vector at INLINEFORM1 , and INLINEFORM2 is the connection. This implies RNNs learn the solution of the first-order geometric differential equation: DISPLAYFORM0 It is natural, then, to consider neural network solutions to higher-order generalizations: DISPLAYFORM0 Networks that solve Eq.( EQREF85 ) are recurrent-like. Updates to a hidden state will generally depend on states beyond the immediately preceding one; often, this dependence can be captured by evolving on the phase space of the hidden states, rather than on the sequences of the hidden states themselves. The latter results in a nested RNN structure for the recurrent-like cell, similar to the structure proposed in BIBREF12 . Applications of Eq.( EQREF85 ) are currently being explored. 
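As a purely illustrative example of the proposed recurrent-like class, the sketch below shows one possible discrete, second-order member, in which the update of the hidden state depends on the two preceding states. This is an assumption-laden toy realization of the idea, not the author's architecture; the particular choice of a gated first-order piece combined with a linear dependence on the next-to-last state is made only for concreteness.

import torch
import torch.nn as nn

class SecondOrderRecurrentCell(nn.Module):
    # Toy member of the proposed recurrent-like class: the update depends on the two
    # preceding hidden states, a discrete stand-in for a second-order geometric equation.
    def __init__(self, emb_dim, hidden_dim):
        super().__init__()
        self.first_order = nn.GRUCell(emb_dim, hidden_dim)          # word-conditioned first-order piece
        self.second_order = nn.Linear(2 * hidden_dim, hidden_dim)   # mixes in the next-to-last state

    def forward(self, word_vecs):
        # word_vecs: (seq_len, batch, emb_dim)
        seq_len, batch, _ = word_vecs.shape
        h_prev = word_vecs.new_zeros(batch, self.first_order.hidden_size)
        h_prev2 = word_vecs.new_zeros(batch, self.first_order.hidden_size)
        for t in range(seq_len):
            h_first = self.first_order(word_vecs[t], h_prev)
            h_t = torch.tanh(self.second_order(torch.cat([h_first, h_prev2], dim=-1)))
            h_prev2, h_prev = h_prev, h_t
        return h_prev   # final hidden state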
In particular, if no additional structure exists and RNNs parallel transport states along paths on the word embedding group itself (the first RNN interpretation), geodesics emerge as a natural candidate for sentence paths to lie on. Thus, sentence generation could potentially be modeled using the geodesic equation and a nonlinear adjoint representation: INLINEFORM0 , INLINEFORM1 in Eq.( EQREF85 ). This geodesic neural network (GeoNN) is the topic of a manuscript presently in preparation. Proposal for new word embeddings The embeddings trained end-to-end in this work provided highly performant results. Unfortunately, training embeddings on end-tasks with longer documents is challenging, and the resulting embeddings are often poor for rare words. However, it would seem constructing pre-trained word embeddings by leveraging the emergent Lie group structure observed herein could provide competitive results without the need for end-to-end training. Intuitively, it is unsurprising groups appear as a candidate to construct word embeddings. Evidently, the proximity of words is governed by their actions on hidden states, and groups are often the natural language to describe actions on vectors. Since groups are generally non-commutative, embedding words in a Lie group can additionally capture their order- and context-dependence. Lie groups are also generated by Lie algebras, so one group can act on the algebra of another group, and recursively form a hierarchical tower. Such an arrangement can explicitly capture the hierarchical structure language is expected to exhibit. E.g., the group structure in the first interpretation given by Eq.( EQREF72 ), DISPLAYFORM0 admits, for appropriately selected INLINEFORM0 , hierarchical representations of the form DISPLAYFORM0 where INLINEFORM0 . Such embedding schemes have the potential to generalize current attempts at capturing hierarchy, such as Poincaré embeddings BIBREF22 . Indeed, hyperbolic geometries, such as the Poincaré ball, owe their structure to their isometry groups. Indeed, it is well known that the hyperbolic INLINEFORM1 dimensional Minkowski space arises as a representation of INLINEFORM2 + translation symmetries. In practice, Lie group embedding schemes would involve representing words as constrained matrices and optimizing the elements, subject to the constraints, according to a loss function constructed from invariants of the matrices, and then applying the matrix log to obtain Lie vectors. A prototypical implementation, dubbed “LieGr," in which the words are assumed to be in the fundamental representation of the special orthogonal group, INLINEFORM0 , and are conditioned on losses sensitive to the relative actions of words, is the subject of another manuscript presently in preparation. Closing remarks The results presented herein offer insight into how RNNs and word embeddings naturally tend to structure themselves for text classification. Beyond elucidating the inner machinations of deep NLP, such results can be used to help construct novel network architectures and embeddings. There is, however, much immediate followup work worth pursuing. In particular, the uniqueness of identities, inverses, and multiplicative closure was not addressed in this work, which is critical to better understand the observed emergent algebraic structure. The cause for the hyperparameter stratification of the error in, and a more complete exploration of, commutative closure remains outstanding. 
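A minimal sketch of how such an embedding scheme could be realized is given below, under the stated assumption that words live in the fundamental representation of the special orthogonal group: each word is parameterized as the matrix exponential of a skew-symmetric generator, a placeholder loss sensitive to the relative actions of word pairs on reference vectors is minimized, and the Lie vectors are read off from the generators, which are already the matrix logarithms of the word matrices. All names, the specific loss, and the reference states are assumptions for illustration, not the "LieGr" implementation itself.

import torch

vocab_size, n = 1000, 8                                   # assumed sizes for illustration
gen_params = torch.nn.Parameter(0.01 * torch.randn(vocab_size, n, n))

def word_matrices(gen_params):
    # Constrain every word to the special orthogonal group by exponentiating a
    # skew-symmetric generator: exp(X) with X = -X^T is orthogonal with det +1.
    skew = gen_params - gen_params.transpose(-1, -2)
    return torch.matrix_exp(skew)

def relative_action_penalty(gen_params, word_pairs, states):
    # Placeholder loss comparing the relative actions of word pairs on a set of
    # reference vectors ('states', shape (k, n)); the actual training signal used
    # by the author is not specified here.
    W = word_matrices(gen_params)
    loss = 0.0
    for i, j in word_pairs:
        loss = loss + torch.norm((W[i] - W[j]) @ states.T) / torch.norm(states)
    return loss / len(word_pairs)

def lie_vectors(gen_params):
    # Because each word matrix is exp(X) with X skew-symmetric, X is already its
    # matrix logarithm; the Lie vector is the set of independent components of X.
    skew = gen_params - gen_params.transpose(-1, -2)
    iu = torch.triu_indices(n, n, offset=1)
    return skew[:, iu[0], iu[1]]                          # shape (vocab_size, n*(n-1)/2)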
Additionally, the cause of the breakdown of the common optimal latent dimension for low embedding dimensions is unclear, and the bi-modal, linear relationship between the action of words on hidden states and the Euclidean distance between end-to-end word embeddings invites much investigation. As a less critical, but still curious inquiry: is the additive relationship between words, e.g. “king - man + woman = queen," preserved, or is it replaced by something new? In light of the Lie group structure words trained on end tasks seem to exhibit, it would not be surprising if a new relationship, such as the Baker-Campbell-Hausdorff formula, applied. Acknowledgements The author would like to thank Robin Tully, Dr. John H. Cantrell, and Mark Laczin for providing useful discussions, of both linguistic and mathematical natures, as the work unfolded. Robin in particular provided essential feedback throughout the work, and helped explore the potential use of free groups in computational linguistics at the outset. John furnished many essential conversations that ensured the scientific and mathematical consistency of the experiments, and provided useful insights into the results. Mark prompted the investigation into potential emergent monoid structures since they appear frequently in state machines.
To classify a text as belonging to one of the ten possible classes.
993b896771c31f3478f28112a7335e7be9d03f21
993b896771c31f3478f28112a7335e7be9d03f21_0
Q: What novel class of recurrent-like networks is proposed? Text: Introduction Tremendous advances in natural language processing (NLP) have been enabled by novel deep neural network architectures and word embeddings. Historically, convolutional neural network (CNN) BIBREF0 , BIBREF1 and recurrent neural network (RNN) BIBREF2 , BIBREF3 topologies have competed to provide state-of-the-art results for NLP tasks, ranging from text classification to reading comprehension. CNNs identify and aggregate patterns with increasing feature sizes, reflecting our common practice of identifying patterns, literal or idiomatic, for understanding language; they are thus adept at tasks involving key phrase identification. RNNs instead construct a representation of sentences by successively updating their understanding of the sentence as they read new words, appealing to the formally sequential and rule-based construction of language. While both networks display great efficacy at certain tasks BIBREF4 , RNNs tend to be the more versatile, have emerged as the clear victor in, e.g., language translation BIBREF5 , BIBREF6 , BIBREF7 , and are typically more capable of identifying important contextual points through attention mechanisms for, e.g., reading comprehension BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . With an interest in NLP, we thus turn to RNNs. RNNs nominally aim to solve a general problem involving sequential inputs. For various more specified tasks, specialized and constrained implementations tend to perform better BIBREF12 , BIBREF13 , BIBREF14 , BIBREF7 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF10 , BIBREF11 , BIBREF8 , BIBREF9 . Often, the improvement simply mitigates the exploding/vanishing gradient problem BIBREF18 , BIBREF19 , but, for many tasks, the improvement is more capable of generalizing the network's training for that task. Understanding better how and why certain networks excel at certain NLP tasks can lead to more performant networks, and networks that solve new problems. Advances in word embeddings have furnished the remainder of recent progress in NLP BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 . Although it is possible to train word embeddings end-to-end with the rest of a network, this is often either prohibitive due to exploding/vanishing gradients for long corpora, or results in poor embeddings for rare words BIBREF26 . Embeddings are thus typically constructed using powerful, but heuristically motivated, procedures to provide pre-trained vectors on top of which a network can be trained. As with the RNNs themselves, understanding better how and why optimal embeddings are constructed in, e.g., end-to-end training can provide the necessary insight to forge better embedding algorithms that can be deployed pre-network training. Beyond improving technologies and ensuring deep learning advances at a breakneck pace, gaining a better understanding of how these systems function is crucial for allaying public concerns surrounding the often inscrutable nature of deep neural networks. This is particularly important for RNNs, since nothing comparable to DeepDream or Lucid exists for them BIBREF27 . To these ends, the goal of this work is two fold. First, we wish to understand any emergent algebraic structure RNNs and word embeddings, trained end-to-end, may exhibit. Many algebraic structures are well understood, so any hints of structure would provide us with new perspectives from which and tools with which deep learning can be approached. 
Second, we wish to propose novel networks and word embedding schemes by appealing to any emergent structure, should it appear. The paper is structured as follows. Methods and experimental results comprise the bulk of the paper, so, for faster reference, § SECREF2 provides a convenient summary and intrepretation of the results, and outlines a new class of neural network and new word embedding scheme leveraging the results. § SECREF3 motivates the investigation into algebraic structures and explains the experimental setup. § SECREF4 Discusses the findings from each of the experiments. § SECREF5 interprets the results, and motivates the proposed network class and word embeddings. § SECREF6 provides closing remarks and discusses followup work, and § SECREF7 gives acknowledgments. To make a matter of notation clear going forward, we begin by referring to the space of words as INLINEFORM0 , and transition to INLINEFORM1 after analyzing the results in order to be consistent with notation in the literature on algebraic spaces. Summary of results We embedded words as vectors and used a uni-directional GRU connected to a dense layer to classify the account from which tweets may have originated. The embeddings and simple network were trained end-to-end to avoid imposing any artificial or heuristic constraints on the system. There are two primary takeaways from the work presented herein: The first point follows since 1) words are embedded in a continuous space; 2) an identity word exists that causes the RNN to act trivially on a hidden state; 3) word inverses exist that cause the RNN to undo its action on a hidden state; 4) the successive action of the RNN using two words is equivalent to the action of the RNN with a single third word, implying the multiplicative closure of words; and 5) words are not manifestly closed under any other binary action. The second point follows given that words embed on a manifold, sentences traces out paths on the manifold, and the difference equation the RNN solves bears a striking resemble to the first order equation for parallel transport, DISPLAYFORM0 where INLINEFORM0 is the INLINEFORM1 -th hidden state encountered when reading over a sentence and INLINEFORM2 is the RNN conditioned by the INLINEFORM3 -th word, INLINEFORM4 , acting on the hidden state. Since sentences trace out a path on the word manifold, and parallel transport operators for representations of the word manifold take values in the group, the RNN must parallel transport hidden states either on the group itself or on a base space, INLINEFORM5 , equipped with some word field, INLINEFORM6 , that connects the path in the base space to the path on the word manifold. Leveraging these results, we propose two new technologies. First, we propose a class of recurrent-like neural networks for NLP tasks that satisfy the differential equation DISPLAYFORM0 where DISPLAYFORM0 and where INLINEFORM0 and INLINEFORM1 are learned functions. INLINEFORM2 corresponds to traditional RNNs, with INLINEFORM3 . For INLINEFORM4 , this takes the form of RNN cells with either nested internal memories or dependencies that extend temporally beyond the immediately previous hidden state. In particular, using INLINEFORM5 for sentence generation is the topic of a manuscript presently in preparation. Second, we propose embedding schemes that explicitly embed words as elements of a Lie group. 
In practice, these embedding schemes would involve representing words as constrained matrices, and optimizing the elements, subject to the constraints, according to a loss function constructed from invariants of the matrices, and then applying the matrix log to obtain Lie vectors. A prototypical implementation, in which the words are assumed to be in the fundamental representation of the special orthogonal group, INLINEFORM0 , and are conditioned on losses sensitive to the relative actions of words, is the subject of another manuscript presently in preparation. The proposals are only briefly discussed herein, as they are the focus of followup work; the focus of the present work is on the experimental evidence for the emergent algebraic structure of RNNs and embeddings in NLP. Intuition and motivation We provide two points to motivate examining the potential algebraic properties of RNNs and their space of inputs in the context of NLP. First, a RNN provides a function, INLINEFORM0 , that successively updates a hidden memory vector, INLINEFORM1 , characterizing the information contained in a sequence of input vectors, INLINEFORM2 , as it reads over elements of the sequence. Explicitly, INLINEFORM3 . At face value, INLINEFORM4 takes the same form as a (nonlinear) representation of some general algebraic structure, INLINEFORM5 , with at least a binary action, INLINEFORM6 , on the vector space INLINEFORM7 . While demanding much structure on INLINEFORM8 generally places a strong constraint on the network's behavior, it would be fortuitous for such structure to emerge. Generally, constrained systems still capable of performing a required task will perform the task better, or, at least, generalize more reliably BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 . To this end, the suggestive form RNNs assume invites further examination to determine if there exist any reasonable constraints that may be placed on the network. To highlight the suggestiveness of this form in what follows, we represent the INLINEFORM9 argument of INLINEFORM10 as a subscript and the INLINEFORM11 argument by treating INLINEFORM12 as a left action on INLINEFORM13 , adopting the notation INLINEFORM14 . Since, in this paper, we consider RNNs vis-à-vis NLP, we take INLINEFORM15 as the (continuous) set of words. Second, in the massive exploration of hyperparameters presented in BIBREF5 , it was noted that, for a given word embedding dimension, the network's performance on a seq2seq task was largely insensitive to the hidden dimension of the RNN above a threshold ( INLINEFORM0 128). The dimension of admissible representations of a given algebraic structure is generally discrete and spaced out. Interpreting neurons as basis functions and the output of layers as elements of the span of the functions BIBREF34 , BIBREF35 , BIBREF36 , we would expect a network's performance to improve until an admissible dimension for the representation is found, after which the addition of hidden neurons would simply contribute to better learning the components of the proper representation by appearing in linear combinations with other neurons, and contribute minimally to improving the overall performance. In their hyperparameter search, a marginal improvement was found at a hidden dimension of 2024, suggesting a potentially better representation may have been found. 
These motivating factors may hint at an underlying algebraic structure in language, at least when using RNNs, but they raise the question: what structures are worth investigating? Groups present themselves as a candidate for consideration since they naturally appear in a variety of applications. Unitary weight matrices have already enjoyed much success in mitigating the exploding/vanishing gradients problem BIBREF13 , BIBREF14 , and RNNs even further constrained to act explicitly as nonlinear representations of unitary groups offer competitive results BIBREF15 . Moreover, intuitively, RNNs in NLP could plausibly behave as a group since: 1) the RNN must learn to ignore padding words used to square batches of training data, indicating an identity element of INLINEFORM0 must exist; 2) the existence of contractions, portmanteaus, and the Germanic tradition of representing sentences as singular words suggest INLINEFORM1 might be closed; and 3) the ability to backtrack and undo statements suggests language may admit natural inverses - that is, active, controlled “forgetting" in language may be tied to inversion. Indeed, groups seem reasonably promising. It is also possible portmanteaus only make sense for a finite subset of pairs of words, so INLINEFORM0 may take on the structure of a groupoid instead; moreover, it is possible, at least in classification tasks, that information is lost through successive applications of INLINEFORM1 , suggesting an inverse may not actually exist, leaving INLINEFORM2 as either a monoid or category. INLINEFORM3 may also actually admit additional structure, or an additional binary operation, rendering it a ring or algebra. To determine what, if any, algebraic structure INLINEFORM0 possesses, we tested if the following axiomatic properties of faithful representations of INLINEFORM1 hold: (Identity) INLINEFORM0 such that INLINEFORM1 , INLINEFORM2 (Closure under multiplication) INLINEFORM0 , INLINEFORM1 such that INLINEFORM2 , INLINEFORM3 (Inverse) INLINEFORM0 , INLINEFORM1 such that INLINEFORM2 , INLINEFORM3 (Closure under Lie bracket) INLINEFORM0 , INLINEFORM1 such that INLINEFORM2 , INLINEFORM3 Closure under Lie bracket simultaneously checks for ring and Lie algebra structures. Whatever structure, if any, INLINEFORM0 possesses, it must additionally be continuous since words are typically embedded in continuous spaces. This implies Lie groups (manifolds), Lie semigroups with an identity (also manifolds), and Lie algebras (vector spaces with a Lie bracket) are all plausible algebraic candidates. Data and methods We trained word embeddings and a uni-directional GRU connected to a dense layer end-to-end for text classification on a set of scraped tweets using cross-entropy as the loss function. End-to-end training was selected to impose as few heuristic constraints on the system as possible. Each tweet was tokenized using NLTK TweetTokenizer and classified as one of 10 potential accounts from which it may have originated. The accounts were chosen based on the distinct topics each is known to typically tweet about. Tokens that occurred fewer than 5 times were disregarded in the model. The model was trained on 22106 tweets over 10 epochs, while 5526 were reserved for validation and testing sets (2763 each). The network demonstrated an insensitivity to the initialization of the hidden state, so, for algebraic considerations, INLINEFORM0 was chosen for hidden dimension of INLINEFORM1 . A graph of the network is shown in Fig.( FIGREF13 ). 
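For concreteness, a minimal sketch of the classification model just described (word embeddings feeding a uni-directional GRU and a dense layer, trained end-to-end with cross-entropy over the 10 accounts) is given below. The specific embedding and hidden dimensions, the optimizer, and the batching are placeholders; only the overall topology follows the description above.

import torch
import torch.nn as nn

class TweetClassifier(nn.Module):
    # Embedding -> uni-directional GRU -> dense layer, trained end-to-end.
    def __init__(self, vocab_size, emb_dim=200, hidden_dim=200, num_classes=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.dense = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids, h0=None):
        # h0 can be a fixed, non-zero initial state, as used in the text; None defaults to zeros.
        x = self.embed(token_ids)            # (batch, seq_len, emb_dim)
        _, h = self.gru(x, h0)               # final hidden state, (1, batch, hidden_dim)
        return self.dense(h.squeeze(0))      # class logits

# Training sketch (names are assumptions):
# model = TweetClassifier(vocab_size)
# loss_fn = nn.CrossEntropyLoss()
# optimizer = torch.optim.Adam(model.parameters())
# for tokens, labels in loader:                # tokenized tweets and account labels
#     optimizer.zero_grad()
#     loss = loss_fn(model(tokens), labels)
#     loss.backward()
#     optimizer.step()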
Algebraic structures typically exhibit some relationship between the dimension of the structure and the dimension of admissible representations, so exploring the embedding and hidden dimensions for which certain algebraic properties hold is of interest. Additionally, beyond the present interest in algebraic properties, the network's insensitivity to the hidden dimension invites an investigation into its sensitivity to the word embedding dimension. To address both points of interest, we extend the hyperparameter search of BIBREF5 , and perform a comparative search over embedding dimensions and hidden dimensions to determine the impact of each on the network's performance and algebraic properties. Each dimension in the hyperparameter pair, INLINEFORM0 , runs from 20 to 280 by increments of 20. After training the network for each hyperparameter pair, the GRU model parameters and embedding matrix were frozen to begin testing for emergent algebraic structure. To satisfy the common “ INLINEFORM0 " requirement stated in § SECREF6 , real hidden states encountered in the testing data were saved to be randomly sampled when testing the actions of the GRU on states. 7 tests were conducted for each hyperparameter pair with randomly selected states: Identity (“arbitrary identity") Inverse of all words in corpus (“arbitrary inverse") Closure under multiplication of arbitrary pairs of words in total corpus (“arbitrary closure") Closure under commutation of arbitrary pairs of words in total corpus (“arbitrary commutativity") Closure under multiplication of random pairs of words from within each tweet (“intra-sentence closure") Closure of composition of long sequences of words in each tweet (“composite closure") Inverse of composition of long sequences of words in each tweet (“composite inverse") Tests 6 and 7 were performed since, if closure is upheld, the composition of multiple words must also be upheld. These tests were done to ensure mathematical consistency. To test for the existence of “words" that satisfy these conditions, vectors were searched for that, when inserted into the GRU, minimized the ratio of the Euclidean norms of the difference between the “searched" hidden vector and the correct hidden vector. For concreteness, the loss function for each algebraic property from § SECREF6 were defined as follows: (Identity) DISPLAYFORM0 (Closure under multiplication) DISPLAYFORM0 (Inverse) DISPLAYFORM0 (Closure under Lie bracket) DISPLAYFORM0 where INLINEFORM0 are random, learned word vectors, INLINEFORM1 is a hidden state, and INLINEFORM2 is the model parameter trained to minimize the loss. We refer to Eqs.( SECREF12 ) as the “axiomatic losses." It is worth noting that the non-zero hidden state initialization was chosen to prevent the denominators from vanishing when the initial state is selected as a candidate INLINEFORM3 in Eqs.( EQREF22 )&( EQREF26 ). The reported losses below are the average across all INLINEFORM4 's and INLINEFORM5 's that were examined. Optimization over the losses in Eqs.( SECREF12 ) was performed over 5000 epochs. For the associated condition to be satisfied, there must exist a word vector INLINEFORM6 that sufficiently minimizes the axiomatic losses. If it is indeed the case that the GRU attempts to learn a representation of an algebraic structure and each neuron serves as a basis function, it is not necessary that each neuron individually satisfies the above constraints. 
For clarity, recall the second motivating point that the addition of neurons, once a representation is found, simply contributes to learning the representation better. Instead, only a linear combination of the neurons must. We consider this possibility for the most task-performant hyperparameter pair, and two other capricious pairs. The target dimension of the linear combination, INLINEFORM0 , which we refer to as the “latent dimension," could generally be smaller than the hidden dimension, INLINEFORM1 . To compute the linear combination of the neurons, the outputs of the GRU were right-multiplied by a INLINEFORM2 matrix, INLINEFORM3 : DISPLAYFORM0 Since the linear combination is not à priori known, INLINEFORM0 is treated as a model parameter. The minimization task previously described was repeated with this combinatorial modification while scanning over latent dimensions, INLINEFORM0 , in steps of 20. The test was performed 10 times and the reported results averaged for each value of INLINEFORM1 to reduce fluctuations in the loss from differing local minima. INLINEFORM2 was trained to optimize various combinations of the algebraic axioms, the results of which were largely found to be redundant. In § SECREF4 , we address the case in which INLINEFORM3 was only trained to assist in optimizing a single condition, and frozen in other axiomatic tests; the commutative closure condition, however, was given a separate linear combination matrix for reasons that will be discussed later. Finally, the geometric structure of the resulting word vectors was explored, naively using the Euclidean metric. Sentences trace out (discrete) paths in the word embedding space, so it was natural to consider relationships between both word vectors and vectors “tangent" to the sentences' paths. Explicitly, the angles and distances between random pairs of words all words and the global average word vector random pairs of co-occurring words all words with a co-occurring word vector average adjacent tangent vectors tangent vectors with a co-occurring tangent vector average were computed to determine how word vectors are geometrically distributed. Intuitively, similar words are expected to affect hidden states similarly. To test this, and to gain insight into possible algebraic interpretations of word embeddings, the ratio of the Euclidean norm of the difference between hidden states produced by acting on a hidden state with two different words to the Euclidean norm of the original hidden state was computed as a function of the popular cosine similarity metric and distance between embeddings. This fractional difference, cosine similarity, and word distance were computed as, DISPLAYFORM0 where Einstein summation is applied to the (contravariant) vector indices. High-level descriptions of the methods will be briefly revisited in each subsection of § SECREF4 so that they are more self-contained and pedagogical. Hyperparameters and model accuracy We performed hyperparameter tuning over the word embedding dimension and the GRU hidden dimension to optimize the classifier's accuracy. Each dimension ran from 20 to 280 in increments of 20. A contour plot of the hyperparameter search is shown in Fig.( FIGREF39 ). For comparison, using pretrained, 50 dimensional GloVe vectors with this network architecture typically yielded accuracies on the order of INLINEFORM0 on this data set, even for more performant hidden dimensions. Thus, training the embeddings end-to-end is clearly advantageous for short text classification. 
It is worth noting that training them end-to-end is viable primarily because of the short length of tweets; for longer documents, exploding/vanishing gradients typically prohibits such training. The average Fisher information of each hyperparameter dimension over the searched region was computed to determine the relative sensitivities of the model to the hyperparameters. The Fisher information for the hidden dimension was INLINEFORM0 ; the Fisher information for the embedding dimension was INLINEFORM1 . Evidently, by this metric, the model was, on average in this region of parameter space, 1.76 times more sensitive to the hidden dimension than the embedding dimension. Nevertheless, a larger word embedding dimension was critical for the network to realize its full potential. The model performance generally behaved as expected across the hyperparameter search. Indeed, higher embedding and hidden dimensions tended to yield better results. Given time and resource constraints, the results are not averaged over many search attempts. Consequently, it is unclear if the pockets of entropy are indicative of anything deeper, or merely incidental fluctuations. It would be worthwhile to revisit this search in future work. Algebraic properties Seven tests were conducted for each hyperparameter pair to explore any emergent algebraic structure the GRU and word embeddings may exhibit. Specifically, the tests searched for 1) the existence of an identity element, 2) existence of an inverse word for each word, 3) multiplicative closure for arbitrary pairs of words, 4) commutative closure for arbitrary pairs of words, 5) multiplicative closure of pairs of words that co-occur within a tweet, 6) multiplicative closure of all sequences of words that appear in tweets, and 7) the existence of an inverse for all sequences of words that appear in tweets. The tests optimized the axiomatic losses defined in Eqs.( SECREF12 ). In what follows, we have chosen INLINEFORM0 (or, INLINEFORM1 error) as the criterion by which we declare a condition “satisfied." The tests can be broken roughly into two classes: 1) arbitrary solitary words and pairs of words, and 2) pairs and sequences of words co-occurring within a tweet. The results for class 1 are shown in Fig.( FIGREF41 ); the results for class 2 are shown in Fig.( FIGREF42 ). The identity condition was clearly satisfied for virtually all embedding and hidden dimensions, with possible exceptions for small embedding dimensions and large hidden dimensions. Although we did not explicitly check, it is likely that even the possible exceptions would be viable in the linear combination search. Arbitrary pairs of words were evidently not closed under multiplication without performing a linear combination search, with a minimum error of INLINEFORM0 across all dimensions. Moreover, the large entropy across the search does not suggest any fundamentally interesting or notable behavior, or any connections between the embedding dimension, hidden dimension, and closure property. Arbitrary pairs of words were very badly not closed under commutation, and it is unfathomable that even a linear combination search could rescue the property. One might consider the possibility that specific pairs of words might have still closed under commutation, and that the exceptional error was due to a handful of words that commute outright since this would push the loss up with a near-vanishing denominator. 
As previously stated, the hidden states were not initialized to be zero states, and separate experiments confirm that the zero state was not in the orbit of any non-zero state, so there would have been no hope to negate the vanishing denominator. Thus, this concern is in principle possible. However, explicitly removing examples with exploding denominators (norm INLINEFORM0 ) from the loss when performing linear combination searches still resulted in unacceptable errors ( INLINEFORM1 ), so this possibility is not actually realized. We did not explicitly check for this closure in class 2 tests since class 2 is a subset of class 1, and such a flagrant violation of the condition would not be possible if successful closure in class 2 were averaged into class 1 results. Even though commutative closure is not satisfied, it is curious to note that the error exhibited a mostly well-behaved stratification. The most interesting class 1 result was the arbitrary inverse. For embedding dimensions sufficiently large compared to the hidden dimension, inverses clearly existed even without a linear combination search. Even more remarkable was the well-behaved stratification of the axiomatic error, implying a very clear relationship between the embedding dimension, hidden dimension, and emergent algebraic structure of the model. It is not unreasonable to expect the inverse condition to be trivially satisfied in a linear combination search for a broad range of hyperparameter pairs. The same behavior of the inverse property is immediately apparent in all class 2 results. The stratification of the error was virtually identical, and all of the tested properties have acceptable errors for sufficiently large embedding dimensions for given hidden dimensions, even without a linear combination search. Linear combination search The optimal hyperparameter pair for this single pass of tuning was INLINEFORM0 , which resulted in a model accuracy of INLINEFORM1 . This was not a statistically significant result since multiple searches were not averaged, so random variations in validation sets and optimization running to differing local minima may have lead to fluctuations in the test accuracies. However, the selection provided a reasonable injection point to investigate the algebraic properties of linear combinations of the output of the GRU's neurons. For comparison, we also considered INLINEFORM2 and INLINEFORM3 . The tests were run with the linear combination matrix, INLINEFORM0 , trained to assist in optimizing the composite inverse. The learned INLINEFORM1 was then applied to the output hidden states for the other properties except for commutative closure, which was given its own linear combination matrix to determine if any existed that would render it an emergent property. The combination was trained to optimize a single condition because, if there exists an optimal linear combination for one condition, and there indeed exists an underlying algebraic structure incorporating other conditions, the linear combination would be optimal for all other conditions. Initial results for the INLINEFORM0 search is shown in Figs.( FIGREF45 )&( FIGREF46 ). Well-optimized properties are shown in Fig.( FIGREF45 ), while the expected poorly-optimized properties are shown in Fig.( FIGREF46 ). The four conditions examined in Fig.( FIGREF45 ) are clearly satisfied for all latent dimensions. They all also reach a minimum error in the same region. 
Composite closure, intra-sentence closure, and arbitrary inverse are all optimized for INLINEFORM0 ; composite inverse is optimized for INLINEFORM1 , though the variation in the range INLINEFORM2 is small ( INLINEFORM3 variation around the mean, or an absolute variation of INLINEFORM4 in the error). Arbitrary multiplicative closure and commutative closure are highly anti-correlated, and both conditions are badly violated. It is worth noting that the results in Fig.( FIGREF46 )(b) did not remove commutative pairs of words from the error, and yet the scale of the error in the linear combination search is virtually identical to what was separately observed with the commutative pairs removed. They both also exhibit a monotonic dependence on the latent dimension. Despite their violation, this dependence is well-behaved, and potentially indicative of some other structure. Before discussing the linear combination searches for the other selected hyperparameter pairs, it is worthwhile noting that retraining the network and performing the linear combination search again can yield differing results. Figs.( FIGREF47 )&( FIGREF48 ) show the linear combination results after retraining the model for the same hyperparameter pair, with a different network performance of INLINEFORM0 . Qualitatively, the results are mostly the same: there is a common minimizing region of INLINEFORM0 , and conditions are satisfied, at least in the common minimal region. However, the minimizing region starkly shifted down, and became sharper for composite closure, intra-sentence closure, and arbitrary inverse. Once more, the results are mostly the same. Arbitrary closure error drastically increased, but both are still highly anti-correlated, and mostly monotonic, despite the erratic fluctuations in the arbitrary closure error. Figs.( FIGREF49 )&( FIGREF50 ) show the linear combination search for INLINEFORM0 . The model was retrained, and achieved INLINEFORM1 for the displayed results. Interestingly, the optimal latent dimension occurs significantly higher than for the other reported hyperparameter pairs. This result, however, is not true for all retrainings at this INLINEFORM0 pair. The entropy in the arbitrary closure loss increased, and the commutative closure loss seemed to asymptote at higher latent dimension. Figs.( FIGREF51 )&( FIGREF52 ) show the linear combination search for INLINEFORM0 . The model was retrained, and achieved INLINEFORM1 for the displayed results. At lower dimensions, the optimal latent dimension was no longer shared between the satisfied conditions. The unsatisfied conditions displayed mostly the same behavior at lower dimensions. Embedding structure To explore the geometric distribution of word vectors, the angles and distances between 1) random pairs of words, 2) all words and the global average word vector, 3) random pairs of co-occurring words, 4) all words with a co-occurring word vector average, 5) adjacent tangent vectors, 6) tangent vectors with a co-occurring tangent vector average were computed. The magnitudes of the average word vectors, average co-occurring word vectors, and average tangent vectors were also computed. Additionally, the relative effect of words on states is computed verses their cosine similarities and relative distances, measured by Eqs.( EQREF37 )-(). 
In the figures that follow, there are, generally, three categories of word vectors explored: 1) random word vectors from the pool of all word vectors, 2) co-occurring word vectors, and 3) tangent vectors (the difference vector between adjacent words). Fig.( FIGREF54 ) shows the distribution in the Euclidean norms of the average vectors that were investigated. The tangent vectors and average word vectors had comparable norms. The non-zero value of the average word vector indicates that words do not perfectly distribute throughout space. The non-zero value of the average tangent vectors indicates that tweets in general progress in a preferred direction relative to the origin in embedding space; albeit, since the magnitudes are the smallest of the categories investigated, the preference is only slight. The norm of the average of co-occurring word vectors is significantly larger than the norms of others categories of vectors, indicating that the words in tweets typically occupy a more strongly preferred region of embedding space (e.g. in a cone, thus preventing component-wise cancellations when computing the average). Fig.( FIGREF55 ) shows the distribution of the Euclidean cosine similarities of both pairs of vectors and vectors relative to the categorical averages. The cosine similarity of pairs of random words and co-occurring words shared a very common distribution, albeit with the notable spikes are specific angles and a prominent spike at INLINEFORM0 for co-occurring pairs. The prominent spike could potentially be explained by the re-occurrence of punctuation within tweets, so it may not indicate anything of importance; the potential origin of the smaller spikes throughout the co-occurring distribution is unclear. Generally, the pairs strongly preferred to be orthogonal, which is unsurprising given recent investigations into the efficacy of orthogonal embeddings BIBREF37 . Adjacent pairs of tangent vectors, however, exhibited a very strong preference for obtuse relative angles, with a spike at INLINEFORM1 . Words tended to have at most a very slightly positive cosine similarity to the global average, which is again indicative of the fact words did not spread out uniformly. Co-occurring words tended to form acute angles with respect to the co-occurring average. Meanwhile, tangent vectors strongly preferred to be orthogonal to the average. The strong negative cosine similarity of adjacent tangent vectors, and the strong positive cosine similarity of words with their co-occurring average, indicate co-occurring words tended to form a grid structure in a cone. That is, adjacent words tended to be perpendicular to each other in the positive span of some set of word basis vectors. Of course, this was not strictly adhered to, but the preferred geometry is apparent. Fig.( FIGREF56 ) shows the distribution of the Euclidean distances of both pairs of vectors and vectors relative to the categorical averages. Distributions of random pairs of words and co-occurring words were virtually identical in both plots, indicating that most of the variation is attributable to the relative orientations of the vectors rather than the distances between them. Fig.( FIGREF57 ) shows the correlation of the similarity of the action of pairs of words to their cosine similarity and distances apart. Both plots confirm that the more similar words are, the more similar their actions on the hidden states are. 
The strongly linear, bi-modal dependence of the fractional difference on the distance between words indicates that word distance is a stronger predictor of the relative meaning of words than the popular cosine similarity. Interpretation of results The important take-aways from the results are: The GRU trivially learned an identity `word'. The action of the GRU for any individual word admits an inverse for sufficiently large embedding dimension relative to the hidden dimension. The successive action of the GRU for any arbitrary pair of words is not, generally, equivalent to the action of the GRU for any equivalent third `word'. The commutation of successive actions of the GRU for any arbitrary pair of words is not equivalent to the action of the GRU for any equivalent third `word'. The successive action of the GRU for any co-occurring pair of words is equivalent to the action of the GRU for an equivalent third `word' for sufficiently large embedding dimension relative to the hidden dimension. The successive action of the GRU for any series of co-occuring words is equivalent to the action of the GRU for an equivalent `word' for sufficiently large embedding dimension relative to the hidden dimension. The action of the GRU for any series of co-occurring words admits an inverse for sufficiently large embedding dimension relative to the hidden dimension. Any condition satisfied for a sufficiently large embedding dimension relative to the hidden dimension is true for any pair of dimensions given an appropriate linear combination of the outputs of the GRU projected into an appropriate lower dimension (latent dimension). The axiomatic errors for all satisfied conditions for the most performant models are minimized for specific, shared latent dimensions, and increases away from these latent dimensions; the optimal latent dimension is not shared for sufficiently small embedding dimensions. Models with lower test performance tend to optimally satisfy these conditions for lower latent dimensions. Co-occurring word vectors tend to be perpendicular to each other and occupy a cone in embedding space. The difference of the action of two word vectors on a hidden state increases linearly with the distance between the two words, and follows a generally bi-modal trend. Although there are still several outstanding points to consider, we offer an attempt to interpret these results in this section. Identity, inverse, and closure properties for co-occurring words are satisfied, and in such a way that they are all related under some algebraic structure. Since closure is not satisfied for arbitrary pairs of words, there are, essentially, two possible explanations for the observed structure: The union of all sets of co-occurring words is the Cartesian product of multiple Lie groups: DISPLAYFORM0 where INLINEFORM0 is the space of words, and INLINEFORM1 is a Lie group. Since multiplication between groups is not defined, the closure of arbitrary pairs of words is unsatisfied. The GRU's inability to properly close pairs of words it has never encountered together is the result of the generalization problem, and all words consequently embed in a larger Lie group: DISPLAYFORM0 In either case, words can be considered elements of a Lie group. Since Lie groups are also manifolds, the word vector components can be interpreted as coordinates on this Lie group. Traditionally, Lie groups are practically handled by considering the Lie algebra that generates them, INLINEFORM0 . 
The components of the Lie vectors in INLINEFORM1 are then typically taken to be the coordinates on the Lie group. This hints at a connection between INLINEFORM2 and the word vectors, but this connection was not made clear by any of the experiments. Furthermore, RNNs learn a nonlinear representation of the group on some latent space spanned by the hidden layer. Since sentences form paths on the embedding group, it's reasonable to attempt to form a more precise interpretation of the action of RNNs. We begin by considering their explicit action on hidden states as the path is traversed: DISPLAYFORM0 Eq.() takes the form of a difference equation. In particular, it looks very similar to the finite form of the differential equation governing the nonlinear parallel transport along a path, INLINEFORM0 , on a principal fibre bundle with base space INLINEFORM1 and group INLINEFORM2 . If the tangent vector at INLINEFORM3 is INLINEFORM4 , and the vector being transported at INLINEFORM5 is INLINEFORM6 then we have DISPLAYFORM0 where INLINEFORM0 is the (nonlinear) connection at INLINEFORM1 . If INLINEFORM2 were explicitly a function of INLINEFORM3 , Eq.( EQREF76 ) would take a more familiar form: DISPLAYFORM0 Given the striking resemblance between Eqs.( EQREF77 )&(), is it natural to consider either The word embedding group serving as the base space, INLINEFORM0 , so that the path INLINEFORM1 corresponds explicitly to the sentence path. A word field on the base space, INLINEFORM0 , so that there exists a mapping between INLINEFORM1 and the sentence path. The second option is more general, but requires both a candidate for INLINEFORM0 and a compelling way to connect INLINEFORM1 and INLINEFORM2 . This is also more challenging, since, generally, parallel transport operators, while taking values in the group, are not closed. If the path were on INLINEFORM3 itself, closure would be guaranteed, since any parallel transport operator would be an element of the co-occurring subgroup, and closure arises from an equivalence class of paths. To recapitulate the final interpretations of word embeddings and RNNs in NLP: Words naturally embed as elements in a Lie group, INLINEFORM0 , and end-to-end word vectors may be related to the generating Lie algebra. RNNs learn to parallel transport nonlinear representations of INLINEFORM0 either on the Lie group itself, or on a principal INLINEFORM1 -bundle. Proposal for class of recurrent-like networks The geometric derivative along a path parameterized by INLINEFORM0 is defined as: DISPLAYFORM0 where INLINEFORM0 is the tangent vector at INLINEFORM1 , and INLINEFORM2 is the connection. This implies RNNs learn the solution of the first-order geometric differential equation: DISPLAYFORM0 It is natural, then, to consider neural network solutions to higher-order generalizations: DISPLAYFORM0 Networks that solve Eq.( EQREF85 ) are recurrent-like. Updates to a hidden state will generally depend on states beyond the immediately preceding one; often, this dependence can be captured by evolving on the phase space of the hidden states, rather than on the sequences of the hidden states themselves. The latter results in a nested RNN structure for the recurrent-like cell, similar to the structure proposed in BIBREF12 . Applications of Eq.( EQREF85 ) are currently being explored. 
In particular, if no additional structure exists and RNNs parallel transport states along paths on the word embedding group itself (the first RNN interpretation), geodesics emerge as a natural candidate for sentence paths to lie on. Thus, sentence generation could potentially be modeled using the geodesic equation and a nonlinear adjoint representation: INLINEFORM0 , INLINEFORM1 in Eq.( EQREF85 ). This geodesic neural network (GeoNN) is the topic of a manuscript presently in preparation. Proposal for new word embeddings The embeddings trained end-to-end in this work provided highly performant results. Unfortunately, training embeddings on end-tasks with longer documents is challenging, and the resulting embeddings are often poor for rare words. However, it would seem constructing pre-trained word embeddings by leveraging the emergent Lie group structure observed herein could provide competitive results without the need for end-to-end training. Intuitively, it is unsurprising groups appear as a candidate to construct word embeddings. Evidently, the proximity of words is governed by their actions on hidden states, and groups are often the natural language to describe actions on vectors. Since groups are generally non-commutative, embedding words in a Lie group can additionally capture their order- and context-dependence. Lie groups are also generated by Lie algebras, so one group can act on the algebra of another group, and recursively form a hierarchical tower. Such an arrangement can explicitly capture the hierarchical structure language is expected to exhibit. E.g., the group structure in the first interpretation given by Eq.( EQREF72 ), DISPLAYFORM0 admits, for appropriately selected INLINEFORM0 , hierarchical representations of the form DISPLAYFORM0 where INLINEFORM0 . Such embedding schemes have the potential to generalize current attempts at capturing hierarchy, such as Poincaré embeddings BIBREF22 . Indeed, hyperbolic geometries, such as the Poincaré ball, owe their structure to their isometry groups. Indeed, it is well known that the hyperbolic INLINEFORM1 dimensional Minkowski space arises as a representation of INLINEFORM2 + translation symmetries. In practice, Lie group embedding schemes would involve representing words as constrained matrices and optimizing the elements, subject to the constraints, according to a loss function constructed from invariants of the matrices, and then applying the matrix log to obtain Lie vectors. A prototypical implementation, dubbed “LieGr," in which the words are assumed to be in the fundamental representation of the special orthogonal group, INLINEFORM0 , and are conditioned on losses sensitive to the relative actions of words, is the subject of another manuscript presently in preparation. Closing remarks The results presented herein offer insight into how RNNs and word embeddings naturally tend to structure themselves for text classification. Beyond elucidating the inner machinations of deep NLP, such results can be used to help construct novel network architectures and embeddings. There is, however, much immediate followup work worth pursuing. In particular, the uniqueness of identities, inverses, and multiplicative closure was not addressed in this work, which is critical to better understand the observed emergent algebraic structure. The cause for the hyperparameter stratification of the error in, and a more complete exploration of, commutative closure remains outstanding. 
Additionally, the cause of the breakdown of the common optimal latent dimension for low embedding dimension is unclear, and the bi-modal, linear relationship between the action of words on hidden states and the Euclidean distance between end-to-end word embeddings invites much investigation. As a less critical, but still curious, inquiry: is the additive relationship between words, e.g. “king - man + woman = queen,” preserved, or is it replaced by something new? In light of the Lie group structure words trained on end tasks seem to exhibit, it would not be surprising if a new relationship, such as the Baker-Campbell-Hausdorff formula, applied. Acknowledgements The author would like to thank Robin Tully, Dr. John H. Cantrell, and Mark Laczin for providing useful discussions, of both linguistic and mathematical natures, as the work unfolded. Robin in particular provided essential feedback throughout the work, and helped explore the potential use of free groups in computational linguistics at the outset. John furnished many essential conversations that ensured the scientific and mathematical consistency of the experiments, and provided useful insights into the results. Mark prompted the investigation into potential emergent monoid structures since they appear frequently in state machines.
A network whose learned functions satisfy a certain higher-order geometric differential equation. The network contains RNN cells with either nested internal memories or dependencies that extend temporally beyond the immediately previous hidden state.
dee116df92f9f92d9a67ac4d30e32822c22158a6
dee116df92f9f92d9a67ac4d30e32822c22158a6_0
Q: Is there a formal proof that the RNNs form a representation of the group? Text: Introduction Tremendous advances in natural language processing (NLP) have been enabled by novel deep neural network architectures and word embeddings. Historically, convolutional neural network (CNN) BIBREF0 , BIBREF1 and recurrent neural network (RNN) BIBREF2 , BIBREF3 topologies have competed to provide state-of-the-art results for NLP tasks, ranging from text classification to reading comprehension. CNNs identify and aggregate patterns with increasing feature sizes, reflecting our common practice of identifying patterns, literal or idiomatic, for understanding language; they are thus adept at tasks involving key phrase identification. RNNs instead construct a representation of sentences by successively updating their understanding of the sentence as they read new words, appealing to the formally sequential and rule-based construction of language. While both networks display great efficacy at certain tasks BIBREF4 , RNNs tend to be the more versatile, have emerged as the clear victor in, e.g., language translation BIBREF5 , BIBREF6 , BIBREF7 , and are typically more capable of identifying important contextual points through attention mechanisms for, e.g., reading comprehension BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . With an interest in NLP, we thus turn to RNNs. RNNs nominally aim to solve a general problem involving sequential inputs. For various more specified tasks, specialized and constrained implementations tend to perform better BIBREF12 , BIBREF13 , BIBREF14 , BIBREF7 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF10 , BIBREF11 , BIBREF8 , BIBREF9 . Often, the improvement simply mitigates the exploding/vanishing gradient problem BIBREF18 , BIBREF19 , but, for many tasks, the improvement is more capable of generalizing the network's training for that task. Understanding better how and why certain networks excel at certain NLP tasks can lead to more performant networks, and networks that solve new problems. Advances in word embeddings have furnished the remainder of recent progress in NLP BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 . Although it is possible to train word embeddings end-to-end with the rest of a network, this is often either prohibitive due to exploding/vanishing gradients for long corpora, or results in poor embeddings for rare words BIBREF26 . Embeddings are thus typically constructed using powerful, but heuristically motivated, procedures to provide pre-trained vectors on top of which a network can be trained. As with the RNNs themselves, understanding better how and why optimal embeddings are constructed in, e.g., end-to-end training can provide the necessary insight to forge better embedding algorithms that can be deployed pre-network training. Beyond improving technologies and ensuring deep learning advances at a breakneck pace, gaining a better understanding of how these systems function is crucial for allaying public concerns surrounding the often inscrutable nature of deep neural networks. This is particularly important for RNNs, since nothing comparable to DeepDream or Lucid exists for them BIBREF27 . To these ends, the goal of this work is two fold. First, we wish to understand any emergent algebraic structure RNNs and word embeddings, trained end-to-end, may exhibit. Many algebraic structures are well understood, so any hints of structure would provide us with new perspectives from which and tools with which deep learning can be approached. 
Second, we wish to propose novel networks and word embedding schemes by appealing to any emergent structure, should it appear. The paper is structured as follows. Methods and experimental results comprise the bulk of the paper, so, for faster reference, § SECREF2 provides a convenient summary and interpretation of the results, and outlines a new class of neural network and new word embedding scheme leveraging the results. § SECREF3 motivates the investigation into algebraic structures and explains the experimental setup. § SECREF4 discusses the findings from each of the experiments. § SECREF5 interprets the results, and motivates the proposed network class and word embeddings. § SECREF6 provides closing remarks and discusses followup work, and § SECREF7 gives acknowledgments. To make a matter of notation clear going forward, we begin by referring to the space of words as INLINEFORM0 , and transition to INLINEFORM1 after analyzing the results in order to be consistent with notation in the literature on algebraic spaces. Summary of results We embedded words as vectors and used a uni-directional GRU connected to a dense layer to classify the account from which tweets may have originated. The embeddings and simple network were trained end-to-end to avoid imposing any artificial or heuristic constraints on the system. There are two primary takeaways from the work presented herein: The first point follows since 1) words are embedded in a continuous space; 2) an identity word exists that causes the RNN to act trivially on a hidden state; 3) word inverses exist that cause the RNN to undo its action on a hidden state; 4) the successive action of the RNN using two words is equivalent to the action of the RNN with a single third word, implying the multiplicative closure of words; and 5) words are not manifestly closed under any other binary action. The second point follows given that words embed on a manifold, sentences trace out paths on the manifold, and the difference equation the RNN solves bears a striking resemblance to the first-order equation for parallel transport, DISPLAYFORM0 where INLINEFORM0 is the INLINEFORM1 -th hidden state encountered when reading over a sentence and INLINEFORM2 is the RNN conditioned by the INLINEFORM3 -th word, INLINEFORM4 , acting on the hidden state. Since sentences trace out a path on the word manifold, and parallel transport operators for representations of the word manifold take values in the group, the RNN must parallel transport hidden states either on the group itself or on a base space, INLINEFORM5 , equipped with some word field, INLINEFORM6 , that connects the path in the base space to the path on the word manifold. Leveraging these results, we propose two new technologies. First, we propose a class of recurrent-like neural networks for NLP tasks that satisfy the differential equation DISPLAYFORM0 where DISPLAYFORM0 and where INLINEFORM0 and INLINEFORM1 are learned functions. INLINEFORM2 corresponds to traditional RNNs, with INLINEFORM3 . For INLINEFORM4 , this takes the form of RNN cells with either nested internal memories or dependencies that extend temporally beyond the immediately previous hidden state. In particular, using INLINEFORM5 for sentence generation is the topic of a manuscript presently in preparation. Second, we propose embedding schemes that explicitly embed words as elements of a Lie group.
In practice, these embedding schemes would involve representing words as constrained matrices, and optimizing the elements, subject to the constraints, according to a loss function constructed from invariants of the matrices, and then applying the matrix log to obtain Lie vectors. A prototypical implementation, in which the words are assumed to be in the fundamental representation of the special orthogonal group, INLINEFORM0 , and are conditioned on losses sensitive to the relative actions of words, is the subject of another manuscript presently in preparation. The proposals are only briefly discussed herein, as they are the focus of followup work; the focus of the present work is on the experimental evidence for the emergent algebraic structure of RNNs and embeddings in NLP. Intuition and motivation We provide two points to motivate examining the potential algebraic properties of RNNs and their space of inputs in the context of NLP. First, a RNN provides a function, INLINEFORM0 , that successively updates a hidden memory vector, INLINEFORM1 , characterizing the information contained in a sequence of input vectors, INLINEFORM2 , as it reads over elements of the sequence. Explicitly, INLINEFORM3 . At face value, INLINEFORM4 takes the same form as a (nonlinear) representation of some general algebraic structure, INLINEFORM5 , with at least a binary action, INLINEFORM6 , on the vector space INLINEFORM7 . While demanding much structure on INLINEFORM8 generally places a strong constraint on the network's behavior, it would be fortuitous for such structure to emerge. Generally, constrained systems still capable of performing a required task will perform the task better, or, at least, generalize more reliably BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 . To this end, the suggestive form RNNs assume invites further examination to determine if there exist any reasonable constraints that may be placed on the network. To highlight the suggestiveness of this form in what follows, we represent the INLINEFORM9 argument of INLINEFORM10 as a subscript and the INLINEFORM11 argument by treating INLINEFORM12 as a left action on INLINEFORM13 , adopting the notation INLINEFORM14 . Since, in this paper, we consider RNNs vis-à-vis NLP, we take INLINEFORM15 as the (continuous) set of words. Second, in the massive exploration of hyperparameters presented in BIBREF5 , it was noted that, for a given word embedding dimension, the network's performance on a seq2seq task was largely insensitive to the hidden dimension of the RNN above a threshold ( INLINEFORM0 128). The dimension of admissible representations of a given algebraic structure is generally discrete and spaced out. Interpreting neurons as basis functions and the output of layers as elements of the span of the functions BIBREF34 , BIBREF35 , BIBREF36 , we would expect a network's performance to improve until an admissible dimension for the representation is found, after which the addition of hidden neurons would simply contribute to better learning the components of the proper representation by appearing in linear combinations with other neurons, and contribute minimally to improving the overall performance. In their hyperparameter search, a marginal improvement was found at a hidden dimension of 2024, suggesting a potentially better representation may have been found. 
These motivating factors may hint at an underlying algebraic structure in language, at least when using RNNs, but they raise the question: what structures are worth investigating? Groups present themselves as a candidate for consideration since they naturally appear in a variety of applications. Unitary weight matrices have already enjoyed much success in mitigating the exploding/vanishing gradients problem BIBREF13 , BIBREF14 , and RNNs even further constrained to act explicitly as nonlinear representations of unitary groups offer competitive results BIBREF15 . Moreover, intuitively, RNNs in NLP could plausibly behave as a group since: 1) the RNN must learn to ignore padding words used to square batches of training data, indicating an identity element of INLINEFORM0 must exist; 2) the existence of contractions, portmanteaus, and the Germanic tradition of representing sentences as singular words suggest INLINEFORM1 might be closed; and 3) the ability to backtrack and undo statements suggests language may admit natural inverses - that is, active, controlled “forgetting" in language may be tied to inversion. Indeed, groups seem reasonably promising. It is also possible portmanteaus only make sense for a finite subset of pairs of words, so INLINEFORM0 may take on the structure of a groupoid instead; moreover, it is possible, at least in classification tasks, that information is lost through successive applications of INLINEFORM1 , suggesting an inverse may not actually exist, leaving INLINEFORM2 as either a monoid or category. INLINEFORM3 may also actually admit additional structure, or an additional binary operation, rendering it a ring or algebra. To determine what, if any, algebraic structure INLINEFORM0 possesses, we tested if the following axiomatic properties of faithful representations of INLINEFORM1 hold: (Identity) INLINEFORM0 such that INLINEFORM1 , INLINEFORM2 (Closure under multiplication) INLINEFORM0 , INLINEFORM1 such that INLINEFORM2 , INLINEFORM3 (Inverse) INLINEFORM0 , INLINEFORM1 such that INLINEFORM2 , INLINEFORM3 (Closure under Lie bracket) INLINEFORM0 , INLINEFORM1 such that INLINEFORM2 , INLINEFORM3 Closure under Lie bracket simultaneously checks for ring and Lie algebra structures. Whatever structure, if any, INLINEFORM0 possesses, it must additionally be continuous since words are typically embedded in continuous spaces. This implies Lie groups (manifolds), Lie semigroups with an identity (also manifolds), and Lie algebras (vector spaces with a Lie bracket) are all plausible algebraic candidates. Data and methods We trained word embeddings and a uni-directional GRU connected to a dense layer end-to-end for text classification on a set of scraped tweets using cross-entropy as the loss function. End-to-end training was selected to impose as few heuristic constraints on the system as possible. Each tweet was tokenized using NLTK TweetTokenizer and classified as one of 10 potential accounts from which it may have originated. The accounts were chosen based on the distinct topics each is known to typically tweet about. Tokens that occurred fewer than 5 times were disregarded in the model. The model was trained on 22106 tweets over 10 epochs, while 5526 were reserved for validation and testing sets (2763 each). The network demonstrated an insensitivity to the initialization of the hidden state, so, for algebraic considerations, INLINEFORM0 was chosen for hidden dimension of INLINEFORM1 . A graph of the network is shown in Fig.( FIGREF13 ). 
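A minimal PyTorch sketch of the kind of model just described — learned embeddings feeding a uni-directional GRU whose final hidden state is read out by a dense layer over the 10 accounts, trained end-to-end with cross-entropy — is given below. The vocabulary size, dimensions, the value of the fixed non-zero initial state, and the training loop are placeholders; the distributed training setup is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TweetClassifier(nn.Module):
    """Embedding -> uni-directional GRU -> dense layer over 10 accounts."""
    def __init__(self, vocab_size, emb_dim=80, hid_dim=160, n_classes=10):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, n_classes)
        # fixed non-zero initial hidden state (placeholder value)
        self.h0 = nn.Parameter(0.1 * torch.ones(1, 1, hid_dim), requires_grad=False)

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        h0 = self.h0.expand(1, tokens.size(0), -1).contiguous()
        _, h_T = self.gru(self.emb(tokens), h0)     # h_T: (1, batch, hid_dim)
        return self.out(h_T.squeeze(0))             # logits: (batch, n_classes)

# one end-to-end training step on dummy data (embeddings train with the GRU)
model = TweetClassifier(vocab_size=5000)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
tokens = torch.randint(1, 5000, (16, 30))           # 16 tweets, 30 tokens each
labels = torch.randint(0, 10, (16,))
loss = F.cross_entropy(model(tokens), labels)
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```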
Algebraic structures typically exhibit some relationship between the dimension of the structure and the dimension of admissible representations, so exploring the embedding and hidden dimensions for which certain algebraic properties hold is of interest. Additionally, beyond the present interest in algebraic properties, the network's insensitivity to the hidden dimension invites an investigation into its sensitivity to the word embedding dimension. To address both points of interest, we extend the hyperparameter search of BIBREF5 , and perform a comparative search over embedding dimensions and hidden dimensions to determine the impact of each on the network's performance and algebraic properties. Each dimension in the hyperparameter pair, INLINEFORM0 , runs from 20 to 280 by increments of 20. After training the network for each hyperparameter pair, the GRU model parameters and embedding matrix were frozen to begin testing for emergent algebraic structure. To satisfy the common “ INLINEFORM0 " requirement stated in § SECREF6 , real hidden states encountered in the testing data were saved to be randomly sampled when testing the actions of the GRU on states. 7 tests were conducted for each hyperparameter pair with randomly selected states: Identity (“arbitrary identity") Inverse of all words in corpus (“arbitrary inverse") Closure under multiplication of arbitrary pairs of words in total corpus (“arbitrary closure") Closure under commutation of arbitrary pairs of words in total corpus (“arbitrary commutativity") Closure under multiplication of random pairs of words from within each tweet (“intra-sentence closure") Closure of composition of long sequences of words in each tweet (“composite closure") Inverse of composition of long sequences of words in each tweet (“composite inverse") Tests 6 and 7 were performed since, if closure is upheld, the composition of multiple words must also be upheld. These tests were done to ensure mathematical consistency. To test for the existence of “words" that satisfy these conditions, vectors were searched for that, when inserted into the GRU, minimized the ratio of the Euclidean norms of the difference between the “searched" hidden vector and the correct hidden vector. For concreteness, the loss function for each algebraic property from § SECREF6 were defined as follows: (Identity) DISPLAYFORM0 (Closure under multiplication) DISPLAYFORM0 (Inverse) DISPLAYFORM0 (Closure under Lie bracket) DISPLAYFORM0 where INLINEFORM0 are random, learned word vectors, INLINEFORM1 is a hidden state, and INLINEFORM2 is the model parameter trained to minimize the loss. We refer to Eqs.( SECREF12 ) as the “axiomatic losses." It is worth noting that the non-zero hidden state initialization was chosen to prevent the denominators from vanishing when the initial state is selected as a candidate INLINEFORM3 in Eqs.( EQREF22 )&( EQREF26 ). The reported losses below are the average across all INLINEFORM4 's and INLINEFORM5 's that were examined. Optimization over the losses in Eqs.( SECREF12 ) was performed over 5000 epochs. For the associated condition to be satisfied, there must exist a word vector INLINEFORM6 that sufficiently minimizes the axiomatic losses. If it is indeed the case that the GRU attempts to learn a representation of an algebraic structure and each neuron serves as a basis function, it is not necessary that each neuron individually satisfies the above constraints. 
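To make the optimization concrete, the sketch below carries out the “arbitrary identity" search against a frozen GRU cell: a candidate word vector is trained to minimize the ratio of the norm of the change it induces on sampled hidden states to the norm of those states, i.e. the identity loss among the axiomatic losses of Eqs.( SECREF12 ). The inverse and closure searches follow the same pattern with the corresponding hidden-state targets. The cell and the sampled states here are random stand-ins; in the experiments they come from the trained classifier and from real hidden states collected on the test data.

```python
import torch
import torch.nn as nn

emb_dim, hid_dim = 80, 160
gru = nn.GRUCell(emb_dim, hid_dim)           # stand-in for the trained, frozen GRU
for p in gru.parameters():
    p.requires_grad_(False)

hidden_states = torch.randn(256, hid_dim)    # stand-in for real states from the test data

def identity_loss(e):
    """Mean over states of ||GRU_e(h) - h|| / ||h||  (the 'arbitrary identity' test)."""
    h_new = gru(e.expand(hidden_states.size(0), -1), hidden_states)
    return (torch.norm(h_new - hidden_states, dim=1)
            / torch.norm(hidden_states, dim=1)).mean()

e = torch.zeros(1, emb_dim, requires_grad=True)   # candidate identity 'word'
opt = torch.optim.Adam([e], lr=1e-2)
for step in range(5000):
    opt.zero_grad()
    loss = identity_loss(e)
    loss.backward()
    opt.step()

# the condition counts as satisfied when this error is sufficiently small
print("axiomatic identity error:", float(identity_loss(e)))
```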
For clarity, recall the second motivating point that the addition of neurons, once a representation is found, simply contributes to learning the representation better. Instead, only a linear combination of the neurons must. We consider this possibility for the most task-performant hyperparameter pair, and two other capricious pairs. The target dimension of the linear combination, INLINEFORM0 , which we refer to as the “latent dimension," could generally be smaller than the hidden dimension, INLINEFORM1 . To compute the linear combination of the neurons, the outputs of the GRU were right-multiplied by a INLINEFORM2 matrix, INLINEFORM3 : DISPLAYFORM0 Since the linear combination is not à priori known, INLINEFORM0 is treated as a model parameter. The minimization task previously described was repeated with this combinatorial modification while scanning over latent dimensions, INLINEFORM0 , in steps of 20. The test was performed 10 times and the reported results averaged for each value of INLINEFORM1 to reduce fluctuations in the loss from differing local minima. INLINEFORM2 was trained to optimize various combinations of the algebraic axioms, the results of which were largely found to be redundant. In § SECREF4 , we address the case in which INLINEFORM3 was only trained to assist in optimizing a single condition, and frozen in other axiomatic tests; the commutative closure condition, however, was given a separate linear combination matrix for reasons that will be discussed later. Finally, the geometric structure of the resulting word vectors was explored, naively using the Euclidean metric. Sentences trace out (discrete) paths in the word embedding space, so it was natural to consider relationships between both word vectors and vectors “tangent" to the sentences' paths. Explicitly, the angles and distances between random pairs of words all words and the global average word vector random pairs of co-occurring words all words with a co-occurring word vector average adjacent tangent vectors tangent vectors with a co-occurring tangent vector average were computed to determine how word vectors are geometrically distributed. Intuitively, similar words are expected to affect hidden states similarly. To test this, and to gain insight into possible algebraic interpretations of word embeddings, the ratio of the Euclidean norm of the difference between hidden states produced by acting on a hidden state with two different words to the Euclidean norm of the original hidden state was computed as a function of the popular cosine similarity metric and distance between embeddings. This fractional difference, cosine similarity, and word distance were computed as, DISPLAYFORM0 where Einstein summation is applied to the (contravariant) vector indices. High-level descriptions of the methods will be briefly revisited in each subsection of § SECREF4 so that they are more self-contained and pedagogical. Hyperparameters and model accuracy We performed hyperparameter tuning over the word embedding dimension and the GRU hidden dimension to optimize the classifier's accuracy. Each dimension ran from 20 to 280 in increments of 20. A contour plot of the hyperparameter search is shown in Fig.( FIGREF39 ). For comparison, using pretrained, 50 dimensional GloVe vectors with this network architecture typically yielded accuracies on the order of INLINEFORM0 on this data set, even for more performant hidden dimensions. Thus, training the embeddings end-to-end is clearly advantageous for short text classification. 
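The fractional difference, cosine similarity, and word distance just defined can be computed directly from the frozen model; below is a sketch with a stand-in GRU cell and a random embedding matrix in place of the trained ones.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

emb_dim, hid_dim = 80, 160
gru = nn.GRUCell(emb_dim, hid_dim)       # stand-in for the frozen, trained GRU
E = torch.randn(5000, emb_dim)           # stand-in for the learned embedding matrix
h = torch.randn(1, hid_dim)              # a sampled hidden state

def action_metrics(i, j, h):
    """Fractional difference of the actions of words i and j on h,
    their cosine similarity, and the Euclidean distance between them."""
    wi, wj = E[i:i + 1], E[j:j + 1]
    frac = torch.norm(gru(wi, h) - gru(wj, h)) / torch.norm(h)
    cos = F.cosine_similarity(wi, wj).item()
    dist = torch.norm(wi - wj).item()
    return float(frac), cos, dist

# scatter these over many random pairs to reproduce the correlation plots
pairs = torch.randint(0, 5000, (1000, 2))
stats = [action_metrics(int(i), int(j), h) for i, j in pairs]
print(stats[0])
```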
It is worth noting that training them end-to-end is viable primarily because of the short length of tweets; for longer documents, exploding/vanishing gradients typically prohibits such training. The average Fisher information of each hyperparameter dimension over the searched region was computed to determine the relative sensitivities of the model to the hyperparameters. The Fisher information for the hidden dimension was INLINEFORM0 ; the Fisher information for the embedding dimension was INLINEFORM1 . Evidently, by this metric, the model was, on average in this region of parameter space, 1.76 times more sensitive to the hidden dimension than the embedding dimension. Nevertheless, a larger word embedding dimension was critical for the network to realize its full potential. The model performance generally behaved as expected across the hyperparameter search. Indeed, higher embedding and hidden dimensions tended to yield better results. Given time and resource constraints, the results are not averaged over many search attempts. Consequently, it is unclear if the pockets of entropy are indicative of anything deeper, or merely incidental fluctuations. It would be worthwhile to revisit this search in future work. Algebraic properties Seven tests were conducted for each hyperparameter pair to explore any emergent algebraic structure the GRU and word embeddings may exhibit. Specifically, the tests searched for 1) the existence of an identity element, 2) existence of an inverse word for each word, 3) multiplicative closure for arbitrary pairs of words, 4) commutative closure for arbitrary pairs of words, 5) multiplicative closure of pairs of words that co-occur within a tweet, 6) multiplicative closure of all sequences of words that appear in tweets, and 7) the existence of an inverse for all sequences of words that appear in tweets. The tests optimized the axiomatic losses defined in Eqs.( SECREF12 ). In what follows, we have chosen INLINEFORM0 (or, INLINEFORM1 error) as the criterion by which we declare a condition “satisfied." The tests can be broken roughly into two classes: 1) arbitrary solitary words and pairs of words, and 2) pairs and sequences of words co-occurring within a tweet. The results for class 1 are shown in Fig.( FIGREF41 ); the results for class 2 are shown in Fig.( FIGREF42 ). The identity condition was clearly satisfied for virtually all embedding and hidden dimensions, with possible exceptions for small embedding dimensions and large hidden dimensions. Although we did not explicitly check, it is likely that even the possible exceptions would be viable in the linear combination search. Arbitrary pairs of words were evidently not closed under multiplication without performing a linear combination search, with a minimum error of INLINEFORM0 across all dimensions. Moreover, the large entropy across the search does not suggest any fundamentally interesting or notable behavior, or any connections between the embedding dimension, hidden dimension, and closure property. Arbitrary pairs of words were very badly not closed under commutation, and it is unfathomable that even a linear combination search could rescue the property. One might consider the possibility that specific pairs of words might have still closed under commutation, and that the exceptional error was due to a handful of words that commute outright since this would push the loss up with a near-vanishing denominator. 
As previously stated, the hidden states were not initialized to be zero states, and separate experiments confirm that the zero state was not in the orbit of any non-zero state, so there would have been no hope to negate the vanishing denominator. Thus, this concern is in principle possible. However, explicitly removing examples with exploding denominators (norm INLINEFORM0 ) from the loss when performing linear combination searches still resulted in unacceptable errors ( INLINEFORM1 ), so this possibility is not actually realized. We did not explicitly check for this closure in class 2 tests since class 2 is a subset of class 1, and such a flagrant violation of the condition would not be possible if successful closure in class 2 were averaged into class 1 results. Even though commutative closure is not satisfied, it is curious to note that the error exhibited a mostly well-behaved stratification. The most interesting class 1 result was the arbitrary inverse. For embedding dimensions sufficiently large compared to the hidden dimension, inverses clearly existed even without a linear combination search. Even more remarkable was the well-behaved stratification of the axiomatic error, implying a very clear relationship between the embedding dimension, hidden dimension, and emergent algebraic structure of the model. It is not unreasonable to expect the inverse condition to be trivially satisfied in a linear combination search for a broad range of hyperparameter pairs. The same behavior of the inverse property is immediately apparent in all class 2 results. The stratification of the error was virtually identical, and all of the tested properties have acceptable errors for sufficiently large embedding dimensions for given hidden dimensions, even without a linear combination search. Linear combination search The optimal hyperparameter pair for this single pass of tuning was INLINEFORM0 , which resulted in a model accuracy of INLINEFORM1 . This was not a statistically significant result since multiple searches were not averaged, so random variations in validation sets and optimization running to differing local minima may have lead to fluctuations in the test accuracies. However, the selection provided a reasonable injection point to investigate the algebraic properties of linear combinations of the output of the GRU's neurons. For comparison, we also considered INLINEFORM2 and INLINEFORM3 . The tests were run with the linear combination matrix, INLINEFORM0 , trained to assist in optimizing the composite inverse. The learned INLINEFORM1 was then applied to the output hidden states for the other properties except for commutative closure, which was given its own linear combination matrix to determine if any existed that would render it an emergent property. The combination was trained to optimize a single condition because, if there exists an optimal linear combination for one condition, and there indeed exists an underlying algebraic structure incorporating other conditions, the linear combination would be optimal for all other conditions. Initial results for the INLINEFORM0 search is shown in Figs.( FIGREF45 )&( FIGREF46 ). Well-optimized properties are shown in Fig.( FIGREF45 ), while the expected poorly-optimized properties are shown in Fig.( FIGREF46 ). The four conditions examined in Fig.( FIGREF45 ) are clearly satisfied for all latent dimensions. They all also reach a minimum error in the same region. 
Composite closure, intra-sentence closure, and arbitrary inverse are all optimized for INLINEFORM0 ; composite inverse is optimized for INLINEFORM1 , though the variation in the range INLINEFORM2 is small ( INLINEFORM3 variation around the mean, or an absolute variation of INLINEFORM4 in the error). Arbitrary multiplicative closure and commutative closure are highly anti-correlated, and both conditions are badly violated. It is worth noting that the results in Fig.( FIGREF46 )(b) did not remove commutative pairs of words from the error, and yet the scale of the error in the linear combination search is virtually identical to what was separately observed with the commutative pairs removed. They both also exhibit a monotonic dependence on the latent dimension. Despite their violation, this dependence is well-behaved, and potentially indicative of some other structure. Before discussing the linear combination searches for the other selected hyperparameter pairs, it is worthwhile noting that retraining the network and performing the linear combination search again can yield differing results. Figs.( FIGREF47 )&( FIGREF48 ) show the linear combination results after retraining the model for the same hyperparameter pair, with a different network performance of INLINEFORM0 . Qualitatively, the results are mostly the same: there is a common minimizing region of INLINEFORM0 , and conditions are satisfied, at least in the common minimal region. However, the minimizing region starkly shifted down, and became sharper for composite closure, intra-sentence closure, and arbitrary inverse. Once more, the results are mostly the same. Arbitrary closure error drastically increased, but both are still highly anti-correlated, and mostly monotonic, despite the erratic fluctuations in the arbitrary closure error. Figs.( FIGREF49 )&( FIGREF50 ) show the linear combination search for INLINEFORM0 . The model was retrained, and achieved INLINEFORM1 for the displayed results. Interestingly, the optimal latent dimension occurs significantly higher than for the other reported hyperparameter pairs. This result, however, is not true for all retrainings at this INLINEFORM0 pair. The entropy in the arbitrary closure loss increased, and the commutative closure loss seemed to asymptote at higher latent dimension. Figs.( FIGREF51 )&( FIGREF52 ) show the linear combination search for INLINEFORM0 . The model was retrained, and achieved INLINEFORM1 for the displayed results. At lower dimensions, the optimal latent dimension was no longer shared between the satisfied conditions. The unsatisfied conditions displayed mostly the same behavior at lower dimensions. Embedding structure To explore the geometric distribution of word vectors, the angles and distances between 1) random pairs of words, 2) all words and the global average word vector, 3) random pairs of co-occurring words, 4) all words with a co-occurring word vector average, 5) adjacent tangent vectors, 6) tangent vectors with a co-occurring tangent vector average were computed. The magnitudes of the average word vectors, average co-occurring word vectors, and average tangent vectors were also computed. Additionally, the relative effect of words on states is computed verses their cosine similarities and relative distances, measured by Eqs.( EQREF37 )-(). 
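A sketch of how the tangent-vector and angle statistics listed above might be assembled from a learned embedding matrix, assuming tweets are available as lists of token ids and using the Euclidean metric naively, as in the text:

```python
import numpy as np

E = np.random.randn(5000, 80)                 # stand-in for learned word embeddings
tweets = [np.random.randint(0, 5000, np.random.randint(5, 25)) for _ in range(200)]

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

adj_tangent_cos, word_to_cooc_cos = [], []
for toks in tweets:
    vecs = E[toks]                            # sentence path in embedding space
    tangents = np.diff(vecs, axis=0)          # "tangent" vectors along the path
    cooc_avg = vecs.mean(axis=0)              # co-occurring word vector average
    # angles between adjacent tangent vectors
    adj_tangent_cos += [cos(tangents[k], tangents[k + 1])
                        for k in range(len(tangents) - 1)]
    # angles between each word and its tweet's co-occurring average
    word_to_cooc_cos += [cos(v, cooc_avg) for v in vecs]

print("mean adjacent-tangent cosine:", np.mean(adj_tangent_cos))
print("mean word-to-co-occurring-average cosine:", np.mean(word_to_cooc_cos))
```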
In the figures that follow, there are, generally, three categories of word vectors explored: 1) random word vectors from the pool of all word vectors, 2) co-occurring word vectors, and 3) tangent vectors (the difference vector between adjacent words). Fig.( FIGREF54 ) shows the distribution in the Euclidean norms of the average vectors that were investigated. The tangent vectors and average word vectors had comparable norms. The non-zero value of the average word vector indicates that words do not perfectly distribute throughout space. The non-zero value of the average tangent vectors indicates that tweets in general progress in a preferred direction relative to the origin in embedding space; albeit, since the magnitudes are the smallest of the categories investigated, the preference is only slight. The norm of the average of co-occurring word vectors is significantly larger than the norms of others categories of vectors, indicating that the words in tweets typically occupy a more strongly preferred region of embedding space (e.g. in a cone, thus preventing component-wise cancellations when computing the average). Fig.( FIGREF55 ) shows the distribution of the Euclidean cosine similarities of both pairs of vectors and vectors relative to the categorical averages. The cosine similarity of pairs of random words and co-occurring words shared a very common distribution, albeit with the notable spikes are specific angles and a prominent spike at INLINEFORM0 for co-occurring pairs. The prominent spike could potentially be explained by the re-occurrence of punctuation within tweets, so it may not indicate anything of importance; the potential origin of the smaller spikes throughout the co-occurring distribution is unclear. Generally, the pairs strongly preferred to be orthogonal, which is unsurprising given recent investigations into the efficacy of orthogonal embeddings BIBREF37 . Adjacent pairs of tangent vectors, however, exhibited a very strong preference for obtuse relative angles, with a spike at INLINEFORM1 . Words tended to have at most a very slightly positive cosine similarity to the global average, which is again indicative of the fact words did not spread out uniformly. Co-occurring words tended to form acute angles with respect to the co-occurring average. Meanwhile, tangent vectors strongly preferred to be orthogonal to the average. The strong negative cosine similarity of adjacent tangent vectors, and the strong positive cosine similarity of words with their co-occurring average, indicate co-occurring words tended to form a grid structure in a cone. That is, adjacent words tended to be perpendicular to each other in the positive span of some set of word basis vectors. Of course, this was not strictly adhered to, but the preferred geometry is apparent. Fig.( FIGREF56 ) shows the distribution of the Euclidean distances of both pairs of vectors and vectors relative to the categorical averages. Distributions of random pairs of words and co-occurring words were virtually identical in both plots, indicating that most of the variation is attributable to the relative orientations of the vectors rather than the distances between them. Fig.( FIGREF57 ) shows the correlation of the similarity of the action of pairs of words to their cosine similarity and distances apart. Both plots confirm that the more similar words are, the more similar their actions on the hidden states are. 
The strongly linear, bi-modal dependence of the fractional difference on the distance between words indicates that word distance is a stronger predictor of the relative meaning of words than the popular cosine similarity. Interpretation of results The important take-aways from the results are: The GRU trivially learned an identity `word'. The action of the GRU for any individual word admits an inverse for sufficiently large embedding dimension relative to the hidden dimension. The successive action of the GRU for any arbitrary pair of words is not, generally, equivalent to the action of the GRU for any equivalent third `word'. The commutation of successive actions of the GRU for any arbitrary pair of words is not equivalent to the action of the GRU for any equivalent third `word'. The successive action of the GRU for any co-occurring pair of words is equivalent to the action of the GRU for an equivalent third `word' for sufficiently large embedding dimension relative to the hidden dimension. The successive action of the GRU for any series of co-occuring words is equivalent to the action of the GRU for an equivalent `word' for sufficiently large embedding dimension relative to the hidden dimension. The action of the GRU for any series of co-occurring words admits an inverse for sufficiently large embedding dimension relative to the hidden dimension. Any condition satisfied for a sufficiently large embedding dimension relative to the hidden dimension is true for any pair of dimensions given an appropriate linear combination of the outputs of the GRU projected into an appropriate lower dimension (latent dimension). The axiomatic errors for all satisfied conditions for the most performant models are minimized for specific, shared latent dimensions, and increases away from these latent dimensions; the optimal latent dimension is not shared for sufficiently small embedding dimensions. Models with lower test performance tend to optimally satisfy these conditions for lower latent dimensions. Co-occurring word vectors tend to be perpendicular to each other and occupy a cone in embedding space. The difference of the action of two word vectors on a hidden state increases linearly with the distance between the two words, and follows a generally bi-modal trend. Although there are still several outstanding points to consider, we offer an attempt to interpret these results in this section. Identity, inverse, and closure properties for co-occurring words are satisfied, and in such a way that they are all related under some algebraic structure. Since closure is not satisfied for arbitrary pairs of words, there are, essentially, two possible explanations for the observed structure: The union of all sets of co-occurring words is the Cartesian product of multiple Lie groups: DISPLAYFORM0 where INLINEFORM0 is the space of words, and INLINEFORM1 is a Lie group. Since multiplication between groups is not defined, the closure of arbitrary pairs of words is unsatisfied. The GRU's inability to properly close pairs of words it has never encountered together is the result of the generalization problem, and all words consequently embed in a larger Lie group: DISPLAYFORM0 In either case, words can be considered elements of a Lie group. Since Lie groups are also manifolds, the word vector components can be interpreted as coordinates on this Lie group. Traditionally, Lie groups are practically handled by considering the Lie algebra that generates them, INLINEFORM0 . 
The components of the Lie vectors in INLINEFORM1 are then typically taken to be the coordinates on the Lie group. This hints at a connection between INLINEFORM2 and the word vectors, but this connection was not made clear by any of the experiments. Furthermore, RNNs learn a nonlinear representation of the group on some latent space spanned by the hidden layer. Since sentences form paths on the embedding group, it's reasonable to attempt to form a more precise interpretation of the action of RNNs. We begin by considering their explicit action on hidden states as the path is traversed: DISPLAYFORM0 Eq.() takes the form of a difference equation. In particular, it looks very similar to the finite form of the differential equation governing the nonlinear parallel transport along a path, INLINEFORM0 , on a principal fibre bundle with base space INLINEFORM1 and group INLINEFORM2 . If the tangent vector at INLINEFORM3 is INLINEFORM4 , and the vector being transported at INLINEFORM5 is INLINEFORM6 then we have DISPLAYFORM0 where INLINEFORM0 is the (nonlinear) connection at INLINEFORM1 . If INLINEFORM2 were explicitly a function of INLINEFORM3 , Eq.( EQREF76 ) would take a more familiar form: DISPLAYFORM0 Given the striking resemblance between Eqs.( EQREF77 )&(), is it natural to consider either The word embedding group serving as the base space, INLINEFORM0 , so that the path INLINEFORM1 corresponds explicitly to the sentence path. A word field on the base space, INLINEFORM0 , so that there exists a mapping between INLINEFORM1 and the sentence path. The second option is more general, but requires both a candidate for INLINEFORM0 and a compelling way to connect INLINEFORM1 and INLINEFORM2 . This is also more challenging, since, generally, parallel transport operators, while taking values in the group, are not closed. If the path were on INLINEFORM3 itself, closure would be guaranteed, since any parallel transport operator would be an element of the co-occurring subgroup, and closure arises from an equivalence class of paths. To recapitulate the final interpretations of word embeddings and RNNs in NLP: Words naturally embed as elements in a Lie group, INLINEFORM0 , and end-to-end word vectors may be related to the generating Lie algebra. RNNs learn to parallel transport nonlinear representations of INLINEFORM0 either on the Lie group itself, or on a principal INLINEFORM1 -bundle. Proposal for class of recurrent-like networks The geometric derivative along a path parameterized by INLINEFORM0 is defined as: DISPLAYFORM0 where INLINEFORM0 is the tangent vector at INLINEFORM1 , and INLINEFORM2 is the connection. This implies RNNs learn the solution of the first-order geometric differential equation: DISPLAYFORM0 It is natural, then, to consider neural network solutions to higher-order generalizations: DISPLAYFORM0 Networks that solve Eq.( EQREF85 ) are recurrent-like. Updates to a hidden state will generally depend on states beyond the immediately preceding one; often, this dependence can be captured by evolving on the phase space of the hidden states, rather than on the sequences of the hidden states themselves. The latter results in a nested RNN structure for the recurrent-like cell, similar to the structure proposed in BIBREF12 . Applications of Eq.( EQREF85 ) are currently being explored. 
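Under this interpretation, the trained recurrent cell conditioned on a word acts as a (nonlinear) transport operator on hidden states, and a sentence induces a composition of such operators along its path. The sketch below only illustrates this bookkeeping — the notational picture h -> T_w(h) and its composition along a sentence — with a stand-in GRU cell and placeholder embeddings; it is not a parallel-transport computation on an actual bundle.

```python
import torch
import torch.nn as nn

emb_dim, hid_dim = 80, 160
gru = nn.GRUCell(emb_dim, hid_dim)            # stand-in for a trained, frozen cell
E = torch.randn(5000, emb_dim)                # stand-in word embeddings

def T(word_id):
    """The cell conditioned on a word: a map on hidden states, h -> T_w(h)."""
    w = E[word_id:word_id + 1]
    return lambda h: gru(w, h)

def transport(sentence, h0):
    """Compose the word-conditioned operators along the sentence path,
    i.e. iterate the difference equation h_t = T_{w_t}(h_{t-1})."""
    h = h0
    for w in sentence:
        h = T(w)(h)
    return h

h0 = 0.1 * torch.ones(1, hid_dim)             # non-zero initial state (placeholder value)
sentence = [17, 42, 356, 9]
h_final = transport(sentence, h0)
# closure would mean some single word acts like the whole composition:
# find w* minimizing ||T_{w*}(h0) - h_final|| / ||h0||, as in the composite-closure test
print(h_final.shape)                          # torch.Size([1, 160])
```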
In particular, if no additional structure exists and RNNs parallel transport states along paths on the word embedding group itself (the first RNN interpretation), geodesics emerge as a natural candidate for sentence paths to lie on. Thus, sentence generation could potentially be modeled using the geodesic equation and a nonlinear adjoint representation: INLINEFORM0 , INLINEFORM1 in Eq.( EQREF85 ). This geodesic neural network (GeoNN) is the topic of a manuscript presently in preparation. Proposal for new word embeddings The embeddings trained end-to-end in this work provided highly performant results. Unfortunately, training embeddings on end-tasks with longer documents is challenging, and the resulting embeddings are often poor for rare words. However, it would seem constructing pre-trained word embeddings by leveraging the emergent Lie group structure observed herein could provide competitive results without the need for end-to-end training. Intuitively, it is unsurprising groups appear as a candidate to construct word embeddings. Evidently, the proximity of words is governed by their actions on hidden states, and groups are often the natural language to describe actions on vectors. Since groups are generally non-commutative, embedding words in a Lie group can additionally capture their order- and context-dependence. Lie groups are also generated by Lie algebras, so one group can act on the algebra of another group, and recursively form a hierarchical tower. Such an arrangement can explicitly capture the hierarchical structure language is expected to exhibit. E.g., the group structure in the first interpretation given by Eq.( EQREF72 ), DISPLAYFORM0 admits, for appropriately selected INLINEFORM0 , hierarchical representations of the form DISPLAYFORM0 where INLINEFORM0 . Such embedding schemes have the potential to generalize current attempts at capturing hierarchy, such as Poincaré embeddings BIBREF22 . Indeed, hyperbolic geometries, such as the Poincaré ball, owe their structure to their isometry groups. Indeed, it is well known that the hyperbolic INLINEFORM1 dimensional Minkowski space arises as a representation of INLINEFORM2 + translation symmetries. In practice, Lie group embedding schemes would involve representing words as constrained matrices and optimizing the elements, subject to the constraints, according to a loss function constructed from invariants of the matrices, and then applying the matrix log to obtain Lie vectors. A prototypical implementation, dubbed “LieGr," in which the words are assumed to be in the fundamental representation of the special orthogonal group, INLINEFORM0 , and are conditioned on losses sensitive to the relative actions of words, is the subject of another manuscript presently in preparation. Closing remarks The results presented herein offer insight into how RNNs and word embeddings naturally tend to structure themselves for text classification. Beyond elucidating the inner machinations of deep NLP, such results can be used to help construct novel network architectures and embeddings. There is, however, much immediate followup work worth pursuing. In particular, the uniqueness of identities, inverses, and multiplicative closure was not addressed in this work, which is critical to better understand the observed emergent algebraic structure. The cause for the hyperparameter stratification of the error in, and a more complete exploration of, commutative closure remains outstanding. 
Additionally, the cause of the breakdown of the common optimal latent dimension for low embedding dimension is unclear, and the bi-modal, linear relationship between the action of words on hidden states and the Euclidean distance between end-to-end word embeddings invites much investigation. As a less critical, but still curious, inquiry: is the additive relationship between words, e.g. “king - man + woman = queen,” preserved, or is it replaced by something new? In light of the Lie group structure words trained on end tasks seem to exhibit, it would not be surprising if a new relationship, such as the Baker-Campbell-Hausdorff formula, applied. Acknowledgements The author would like to thank Robin Tully, Dr. John H. Cantrell, and Mark Laczin for providing useful discussions, of both linguistic and mathematical natures, as the work unfolded. Robin in particular provided essential feedback throughout the work, and helped explore the potential use of free groups in computational linguistics at the outset. John furnished many essential conversations that ensured the scientific and mathematical consistency of the experiments, and provided useful insights into the results. Mark prompted the investigation into potential emergent monoid structures since they appear frequently in state machines.
No
94bee0c58976b58b4fef9e0adf6856fe917232e5
94bee0c58976b58b4fef9e0adf6856fe917232e5_0
Q: How much bigger is Switchboard-2000 than Switchboard-300 database? Text: Introduction Powerful neural networks have enabled the use of “end-to-end” speech recognition models that directly map a sequence of acoustic features to a sequence of words without conditional independence assumptions. Typical examples are attention based encoder-decoder BIBREF0 and recurrent neural network transducer models BIBREF1. Due to training on full sequences, an utterance corresponds to a single observation from the view point of these models; thus, data sparsity is a general challenge for such approaches, and it is believed that these models are effective only when sufficient training data is available. Indeed, many end-to-end speech recognition papers focus on LibriSpeech, which has 960 hours of training audio. Nevertheless, the best performing systems follow the traditional hybrid approach BIBREF2, outperforming attention based encoder-decoder models BIBREF3, BIBREF4, BIBREF5, BIBREF6, and when less training data is used, the gap between “end-to-end” and hybrid models is more prominent BIBREF3, BIBREF7. Several methods have been proposed to tackle data sparsity and overfitting problems; a detailed list can be found in Sec. SECREF2. Recently, increasingly complex attention mechanisms have been proposed to improve seq2seq model performance, including stacking self and regular attention layers and using multiple attention heads in the encoder and decoder BIBREF4, BIBREF8. We show that consistent application of various regularization techniques brings a simple, single-head LSTM attention based encoder-decoder model to state-of-the-art performance on Switchboard-300, a task where data sparsity is more severe than LibriSpeech. We also note that remarkable performance has been achieved with single-head LSTM models in a recent study on language modeling BIBREF9. Methods to improve seq2seq models In contrast to traditional hybrid models, where even recurrent networks are trained on randomized, aligned chunks of labels and features BIBREF10, BIBREF11, whole sequence models are more prone to memorizing the training samples. In order to improve generalization, many of the methods we investigate introduce additional noise, either directly or indirectly, to stochastic gradient descent (SGD) training to avoid narrow, local optima. The other techniques we study address the highly non-convex nature of training neural networks, ease the optimization process, and speed up convergence. Weight decay adds the $l_2$ norm of the trainable parameters to the loss function, which encourages the weights to stay small unless necessary, and is one of the oldest techniques to improve neural network generalization. As shown in BIBREF12, weight decay can improve generalization by suppressing some of the effects of static noise on the targets. Dropout randomly deactivates neurons with a predefined probability in every training step BIBREF13 to reduce co-adaptation of neurons. DropConnect, which is similar in spirit to dropout, randomly deactivates connections between neurons by temporarily zeroing out weights BIBREF14. Zoneout, which is also inspired by dropout and was especially developed for recurrent models BIBREF15, stochastically forces some hidden units to maintain their previous values. In LSTMs, the method is applied on the cell state or on the recurrent feedback of the output. 
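As a small illustration of how two of the recurrent-specific regularizers above enter a single recurrent step, the sketch below applies DropConnect to the hidden-to-hidden weight matrix of a toy tanh-RNN (with the mask sampled once, as it would be per batch of sequences) and zoneout to the state update. The rates and the toy cell are placeholders, not the configuration used later in this paper.

```python
import torch

def dropconnect(W, p=0.3):
    """Zero a random subset of weights; in recurrent use, the mask is fixed per batch."""
    mask = (torch.rand_like(W) > p).float()
    return W * mask / (1.0 - p)

def zoneout(h_prev, h_new, p=0.15, training=True):
    """Randomly force units to keep their previous value (per-unit Bernoulli mask)."""
    if not training:
        return (1 - p) * h_new + p * h_prev      # expectation at test time
    keep_prev = (torch.rand_like(h_new) < p).float()
    return keep_prev * h_prev + (1 - keep_prev) * h_new

# one step of a toy tanh-RNN with both regularizers
hid, emb = 8, 6
W_hh, W_xh = torch.randn(hid, hid), torch.randn(hid, emb)
h, x = torch.zeros(1, hid), torch.randn(1, emb)

W_hh_drop = dropconnect(W_hh)                    # sampled once for the batch
h_candidate = torch.tanh(x @ W_xh.T + h @ W_hh_drop.T)
h = zoneout(h, h_candidate)
print(h)
```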
Label smoothing interpolates the hard label targets with a uniform distribution over targets, and improves generalization in many classification tasks BIBREF16. Batch normalization (BN) accelerates training by standardizing the distribution of each layer's input BIBREF17. In order to reduce the normalization mismatch between training and testing, we modify the original approach by freezing the batch normalization layers in the middle of the training when the magnitude of parameter updates is small. After freezing, the running statistics are not updated, batch statistics are ignored, and BN layers approximately operate as global normalization. Scheduled sampling stochastically uses the token produced by a sequence model instead of the true previous token during training to mitigate the effects of exposure bias BIBREF18. Residual networks address the problem of vanishing and exploding gradients by including skip connections BIBREF19 in the model that force the neural network to learn a residual mapping function using a stack of layers. Optimization of this residual mapping is easier, allowing the use of much deeper structures. Curriculum learning simplifies deep neural network training by presenting training examples in a meaningful order, usually by increasing order of difficulty BIBREF20. In seq2seq models, the input acoustic sequences are frequently sorted in order of increasing length BIBREF21. Speed and tempo perturbation changes the rate of speech, typically by $\pm $10%, with or without altering the pitch and timbre of the speech signal BIBREF22, BIBREF23. The goal of these methods is to increase the amount of training data for the model. Sequence noise injection adds structured sequence level noise generated from speech utterances to training examples to improve the generalization of seq2seq models BIBREF24. As previously shown, input noise during neural network training encourages convergence to a local optimum with lower curvature, which indicates better generalization BIBREF25. Weight noise adds noise directly to the network parameters to improve generalization BIBREF26. This form of noise can be interpreted as a simplified form of Bayesian inference that optimizes a minimum description length loss BIBREF27. SpecAugment masks blocks of frequency channels and blocks of time steps BIBREF3 and also warps the spectrogram along the time axis to perform data augmentation. It is closely related to BIBREF28. Experimental setup This study focuses on Switchboard-300, a standard 300-hour English conversational speech recognition task. Our acoustic and text data preparation follows the Kaldi BIBREF29 s5c recipe. Our attention based seq2seq model is similar to BIBREF30, BIBREF31 and follows the structure of BIBREF32. We extract 80-dimensional log-Mel filterbank features over 25ms frames every 10ms from the input speech signal. The input audio is speed and/or tempo perturbed with 56 probability. Following BIBREF24, sequence noise mixed from up to 4 utterances is injected with 40% probability and 0.3 weight. The filterbank output is mean-and-variance normalized at the speaker level, and first ($\Delta $) and second ($\Delta \Delta $) derivatives are also calculated. The final features presented to the network are also processed through a SpecAugment block that uses the SM policy BIBREF3 with $p=0.3$ and no time warping. The encoder network comprises 8 bidirectional LSTM layers with 1536 nodes per direction per layer BIBREF33, BIBREF34. As shown in Fig. 
FIGREF1, each LSTM block in the encoder includes a residual connection with a linear transformation that bypasses the LSTM, a 1024-dimensional linear reduction layer on the LSTM output, and batch-normalization (BN) of the block output. A pyramidal structure BIBREF31 in the first two LSTM layers reduces the frame rate by a factor of 4. The final dimension of the encoder output is 256, enforced by a linear bottleneck. We apply 30% dropout to the LSTM outputs and 30% drop-connect to the hidden-to-hidden matrices BIBREF14, BIBREF35. As suggested by BIBREF36, the weight dropout is fixed for a batch of sequences. The attention based decoder model is illustrated in Fig. FIGREF1. The decoder models the sequence of 600 BPE units estimated on characters BIBREF37, where the BPE units are embedded in 256 dimensions. We use additive, location aware attention, without key/value transformations, and the attention is smoothed by 256, 5-dimensional kernels BIBREF38. The decoder block consists of 2 unidirectional LSTM layers: one is a dedicated language-model-like component with 512 nodes that operates only on the embedded predicted symbol sequence, and the other is a 768 unit layer processing acoustic and symbol information. The output of both LSTMs is reduced to 256 dimensions by a linear bottleneck BIBREF39. Fixed sequence-level weight dropout of 15% is applied in the decoder LSTMs, a dropout of 5% is applied to the embeddings, and a dropout of 15% is applied to the decoder LSTM outputs. The second LSTM in the decoder also uses zoneout, where the cell state update is deactivated with 15% probability and the recurrent feedback from the output maintains its previous value with 5% probability. Overall, the model has 280M parameters, of which only 5.4M are in the decoder. Aiming at the best word error rate, this design choice is based on our observation that an external language model has significantly larger effect if the decoder is not over-parametrized BIBREF32. The model is trained for 250 epochs on 32 P100 GPUs in less than 4 days using a PyTorch BIBREF40 implementation of distributed synchronous SGD with up to 32 sequences per GPU per batch. Training uses a learning rate of 0.03 and Nesterov momentum BIBREF41 of 0.9. The weight decay parameter is 4e-6, the label smoothing parameter is 0.35, and teacher forcing is fixed to 0.8 throughout training. In the first 3 epochs the learning rate is warmed up and batch size is gradually increased from 8 to 32 BIBREF42. In the first 35 epochs, the neural network is trained on sequences sorted in ascending order of length of the input. Afterwards, batches are randomized within length buckets, ensuring that a batch always contains sequences with similar length. Weight noise from a normal distribution with mean 0.0 and variance 0.015 is switched on after 70 epochs. After 110 epochs, the updates of sufficient statistics in the batch-normalization layers are turned off, converting them into fixed affine transformations. The learning rate is annealed by 0.9 per epoch after 180 epochs of training, and simultaneously label smoothing is also switched off. The external language model (LM) is built on the BPE segmentation of 24M words from the Switchboard and Fisher corpora. It is trained for 40 epochs using label smoothing of 0.15 in the first 20 epochs. The baseline LM has 57M parameters and consists of 2 unidirectional LSTM layers with 2048 nodes BIBREF43 trained with drop-connect and dropout probabilities of 15%. 
The embedding layer has 512 nodes, and the output of the last LSTM is projected to 128 dimensions. When the LM is trained and evaluated across utterances, consecutive segments of a single-channel recording are grouped together up to 40 seconds. Perplexities (PPL) are measured at the word level on the concatenation of ground truth transcripts, while the WER is obtained by retaining the LM state of the single-best hypothesis of the preceding utterance. Decoding uses simple beam search with a beam width of 60 hypotheses and no lexical prefix tree constraint BIBREF44. The search performs shallow fusion of the encoder-decoder score, the external language model score, a length normalization term, and a coverage term BIBREF45, BIBREF46, BIBREF47. For more details, please refer to BIBREF32. Hub5'00 is used as a development set to optimize decoding hyperparameters, while Hub5'01 and RT03 are used as final test sets. Experimental results Our current setup is the result of incremental development. Keeping in mind that several other equally powerful setups probably exist, the focus of the following experiments is to investigate ours around the current optimum. Experimental results ::: Effect of data preparation We first investigate the importance of different data processing steps. The s5c Kaldi recipe includes a duplicate filtering step, in which the maximum number of occurrences of utterances with the same content is limited. We measure the impact of duplicate filtering and also the effect of filtering out word fragments and noise tokens from the training transcripts. Since the LM is trained on identically filtered transcripts from Fisher+Switchboard data, word fragment and noise token filters were applied consistently. The results are summarized in Table TABREF5. Deactivating the duplicate filter is never harmful when an external LM is used, and the gains on CallHome can be substantial. Considering performance on the complete Hub5'00 data, the best systems either explicitly handle both word fragments and noise tokens or filter them all out. When an external LM is used, the best results are obtained when word fragment and noise token filters are activated and the duplicate filter is deactivated. This setting is also appealing in cases where the external LM may be trained on text data that will not contain word fragments or noise; thus, the remaining experiments are carried out with this system setting. Experimental results ::: Ablation study In a second set of experiments, we characterize the importance of each of the regularization methods described in Sec. SECREF2 for our model performance by switching off one training method at a time without re-optimizing the remaining settings. In these experiments, decoding is performed without an external language model. Curriculum learning is evaluated by either switching to randomized batches after 35 epochs or leaving the sorting on throughout training. We also test the importance of $\Delta $ and $\Delta \Delta $ features BIBREF48. Sorting the results by decreasing number of absolute errors on Hub5'00, Table TABREF7 indicates that each regularization method contributes to the improved WER. SpecAugment is by far the most important method, while using $\Delta $ and $\Delta \Delta $ features or switching off the curriculum learning in the later stage of training have marginal but positive effects. 
Other direct input level perturbation steps (speed/tempo perturbation and sequence noise injection) are also key techniques that can be found in the upper half of the table. If we compare the worst and baseline models, we find that the relative performance difference between them is nearly unchanged by including the external LM in decoding. Without the LM, the gap is 18% relative, while with the LM the gap is 17% relative. This clearly underlines the importance of the regularization techniques. Experimental results ::: Optimizing the language model The following experiments summarize our optimization of the LM. Compared to our previous LM BIBREF24, we measure better perplexity and WER if no bottleneck is used before the softmax layer (rows 1 and 3 in Table TABREF9). Increasing the model capacity to 122M parameters results in a significant gain in PPL only after the dropout rates are tuned (rows 3, 5 and 6). Similar to BIBREF49, BIBREF50, significant PPL gain is observed if the LM was trained across utterances. However, this PPL improvement does not translate into reduced WER with a bigger model when cross utterance modeling is used (rows 4 and 7). Thus, in all other experiments we use the smaller, 57M-parameter model. Experimental results ::: Effect of beam size and number of parameters A 280M-parameter model may be larger than is practical in many applications. Thus, we also conduct experiments to see if this model size is necessary for reasonable ASR performance. Models are trained without changing the training configuration, except that the size or number of LSTM layers is reduced. As Table TABREF11 shows, although our smallest attention based model achieves reasonable results on this task, a significant loss is indeed observed with decreasing model size, especially on CallHome. Nevertheless, an external language model reduces the performance gap. A small, 57M-parameter model together with a similar size language model is only 5% relative worse than our largest model. We note that this model already outperforms the best published attention based seq2seq model BIBREF3, with roughly 66% fewer parameters. Additional experiments are carried out to characterize the search and modeling errors in decoding. The results of tuning the beam size and keeping the other search hyperparameters unchanged are shown in Fig. FIGREF12. “Small” denotes the 57M model, while “large” denotes the 280M model. When greedy search (beam 1) is used, the external language model increases WER, an effect that might be mitigated with re-optimized hyperparameters. Nevertheless, if a beam of at least 2 hypotheses is used, the positive effect of the language model is clear. We also observe that without the language model the search saturates much earlier, around beam 8, fluctuating within only a few absolute errors afterwards. On the contrary, decoding with the language model, we measure consistent but small gains with larger beams. The minimum number of word errors was measured with a relatively large beam of 240. The figure also shows that the effect of a cross-utterance language model grows with larger beams. Lastly, if the model is trained on 2000 hours of speech data (see next section), the extremely fast greedy decoding gives remarkably good performance. Although the importance of beam search decreases with an increased amount of training data, we still measure 10% relative degradation compared to a system with a cross-utterance LM and wide (240) beam search. 
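The decoding objective used throughout these experiments, shallow fusion of the encoder-decoder score with the external LM score plus length-normalization and coverage terms, can be written as a per-hypothesis score. The sketch below uses one common formulation of the coverage term and placeholder weights; the paper's actual fusion weights are not reproduced here.

```python
def hypothesis_score(s2s_logprob, lm_logprob, n_tokens, attention_so_far,
                     lm_weight=0.5, len_weight=1.0, cov_weight=0.5, tau=0.5):
    """Shallow-fusion score of one partial hypothesis in the beam.

    s2s_logprob      -- summed encoder-decoder log-probability of the hypothesis
    lm_logprob       -- summed external LM log-probability of the same tokens
    n_tokens         -- hypothesis length, used for the length reward
    attention_so_far -- list of attention weight vectors, one per decoded token
    """
    n_frames = len(attention_so_far[0]) if attention_so_far else 0
    cumulative = [sum(step[t] for step in attention_so_far) for t in range(n_frames)]
    coverage = sum(1 for c in cumulative if c > tau)   # encoder frames attended "enough"
    return (s2s_logprob
            + lm_weight * lm_logprob
            + len_weight * n_tokens        # length normalization / reward term
            + cov_weight * coverage)
```

During beam search, the candidates with the highest such scores are kept at every step, which is how the LM, length and coverage terms influence the reported WER.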
Experimental results ::: Experiments on Switchboard-2000 As a contrast to our best results on Switchboard-300, we also train a seq2seq model on the 2000-hour Switchboard+Fisher data. This model consists of 10 encoder layers, and is trained for only 50 epochs. Our overall results on the Hub5'00 and other evaluation sets are summarized in Table TABREF14. The results in Fig. FIGREF12 and Table TABREF14 show that adding more training data greatly improves the system, by around 30% relative in some cases. For comparison with others, the 2000-hour system reaches 8.7% and 7.4% WER on rt02 and rt04. We observe that the regularization techniques, which are extremely important on the 300h setup, are still beneficial but have a significantly smaller effect. Comparison with the literature For comparison with results in the literature we refer to the Switchboard-300 results in BIBREF3, BIBREF7, BIBREF51, BIBREF52 and the Switchboard-2000 results in BIBREF50, BIBREF51, BIBREF53, BIBREF54, BIBREF55, BIBREF56. Our 300-hour model not only outperforms the previous best attention based encoder-decoder model BIBREF3 by a large margin, it also surpasses the best hybrid systems with multiple LMs BIBREF7. Our result on Switchboard-2000 is also better than any single system results reported to date, and reaches the performance of the best system combinations. Conclusions We presented an attention based encoder-decoder setup which achieves state-of-the-art performance on Switchboard-300. A rather simple model built from LSTM layers and a decoder with a single-headed attention mechanism outperforms the standard hybrid approach. This is particularly remarkable given that in our model neither a pronunciation lexicon nor a speech model with explicit hidden state representations is needed. We also demonstrated that excellent results are possible with smaller models and with practically search-free, greedy decoding. The best results were achieved with a speaker independent model in a single decoding pass, using a minimalistic search algorithm, and without any attention mechanism in the language model. Thus, we believe that further improvements are still possible if we apply a more complicated sequence-level training criterion and speaker adaptation.
Switchboard-2000 contains 1700 more hours of speech data.
7efbe48e84894971d7cd307faf5f6dae9d38da31
7efbe48e84894971d7cd307faf5f6dae9d38da31_0
Q: How big is Switchboard-300 database? Text: Introduction Powerful neural networks have enabled the use of “end-to-end” speech recognition models that directly map a sequence of acoustic features to a sequence of words without conditional independence assumptions. Typical examples are attention based encoder-decoder BIBREF0 and recurrent neural network transducer models BIBREF1. Due to training on full sequences, an utterance corresponds to a single observation from the view point of these models; thus, data sparsity is a general challenge for such approaches, and it is believed that these models are effective only when sufficient training data is available. Indeed, many end-to-end speech recognition papers focus on LibriSpeech, which has 960 hours of training audio. Nevertheless, the best performing systems follow the traditional hybrid approach BIBREF2, outperforming attention based encoder-decoder models BIBREF3, BIBREF4, BIBREF5, BIBREF6, and when less training data is used, the gap between “end-to-end” and hybrid models is more prominent BIBREF3, BIBREF7. Several methods have been proposed to tackle data sparsity and overfitting problems; a detailed list can be found in Sec. SECREF2. Recently, increasingly complex attention mechanisms have been proposed to improve seq2seq model performance, including stacking self and regular attention layers and using multiple attention heads in the encoder and decoder BIBREF4, BIBREF8. We show that consistent application of various regularization techniques brings a simple, single-head LSTM attention based encoder-decoder model to state-of-the-art performance on Switchboard-300, a task where data sparsity is more severe than LibriSpeech. We also note that remarkable performance has been achieved with single-head LSTM models in a recent study on language modeling BIBREF9. Methods to improve seq2seq models In contrast to traditional hybrid models, where even recurrent networks are trained on randomized, aligned chunks of labels and features BIBREF10, BIBREF11, whole sequence models are more prone to memorizing the training samples. In order to improve generalization, many of the methods we investigate introduce additional noise, either directly or indirectly, to stochastic gradient descent (SGD) training to avoid narrow, local optima. The other techniques we study address the highly non-convex nature of training neural networks, ease the optimization process, and speed up convergence. Weight decay adds the $l_2$ norm of the trainable parameters to the loss function, which encourages the weights to stay small unless necessary, and is one of the oldest techniques to improve neural network generalization. As shown in BIBREF12, weight decay can improve generalization by suppressing some of the effects of static noise on the targets. Dropout randomly deactivates neurons with a predefined probability in every training step BIBREF13 to reduce co-adaptation of neurons. DropConnect, which is similar in spirit to dropout, randomly deactivates connections between neurons by temporarily zeroing out weights BIBREF14. Zoneout, which is also inspired by dropout and was especially developed for recurrent models BIBREF15, stochastically forces some hidden units to maintain their previous values. In LSTMs, the method is applied on the cell state or on the recurrent feedback of the output. Label smoothing interpolates the hard label targets with a uniform distribution over targets, and improves generalization in many classification tasks BIBREF16. 
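As a concrete illustration of the label smoothing just described, a short PyTorch-style sketch follows; the 0.35 smoothing value matches the setting reported later in the experimental setup, while the functions themselves are only illustrative.

```python
import torch

def smoothed_targets(labels, num_classes, smoothing=0.35):
    """Interpolate one-hot targets with a uniform distribution over classes."""
    one_hot = torch.nn.functional.one_hot(labels, num_classes).float()
    uniform = torch.full_like(one_hot, 1.0 / num_classes)
    return (1.0 - smoothing) * one_hot + smoothing * uniform

def smoothed_cross_entropy(logits, labels, smoothing=0.35):
    """Cross entropy against the smoothed target distribution."""
    log_probs = torch.log_softmax(logits, dim=-1)
    targets = smoothed_targets(labels, logits.size(-1), smoothing)
    return -(targets * log_probs).sum(dim=-1).mean()
```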
Batch normalization (BN) accelerates training by standardizing the distribution of each layer's input BIBREF17. In order to reduce the normalization mismatch between training and testing, we modify the original approach by freezing the batch normalization layers in the middle of the training when the magnitude of parameter updates is small. After freezing, the running statistics are not updated, batch statistics are ignored, and BN layers approximately operate as global normalization. Scheduled sampling stochastically uses the token produced by a sequence model instead of the true previous token during training to mitigate the effects of exposure bias BIBREF18. Residual networks address the problem of vanishing and exploding gradients by including skip connections BIBREF19 in the model that force the neural network to learn a residual mapping function using a stack of layers. Optimization of this residual mapping is easier, allowing the use of much deeper structures. Curriculum learning simplifies deep neural network training by presenting training examples in a meaningful order, usually by increasing order of difficulty BIBREF20. In seq2seq models, the input acoustic sequences are frequently sorted in order of increasing length BIBREF21. Speed and tempo perturbation changes the rate of speech, typically by $\pm $10%, with or without altering the pitch and timbre of the speech signal BIBREF22, BIBREF23. The goal of these methods is to increase the amount of training data for the model. Sequence noise injection adds structured sequence level noise generated from speech utterances to training examples to improve the generalization of seq2seq models BIBREF24. As previously shown, input noise during neural network training encourages convergence to a local optimum with lower curvature, which indicates better generalization BIBREF25. Weight noise adds noise directly to the network parameters to improve generalization BIBREF26. This form of noise can be interpreted as a simplified form of Bayesian inference that optimizes a minimum description length loss BIBREF27. SpecAugment masks blocks of frequency channels and blocks of time steps BIBREF3 and also warps the spectrogram along the time axis to perform data augmentation. It is closely related to BIBREF28. Experimental setup This study focuses on Switchboard-300, a standard 300-hour English conversational speech recognition task. Our acoustic and text data preparation follows the Kaldi BIBREF29 s5c recipe. Our attention based seq2seq model is similar to BIBREF30, BIBREF31 and follows the structure of BIBREF32. We extract 80-dimensional log-Mel filterbank features over 25ms frames every 10ms from the input speech signal. The input audio is speed and/or tempo perturbed with 56 probability. Following BIBREF24, sequence noise mixed from up to 4 utterances is injected with 40% probability and 0.3 weight. The filterbank output is mean-and-variance normalized at the speaker level, and first ($\Delta $) and second ($\Delta \Delta $) derivatives are also calculated. The final features presented to the network are also processed through a SpecAugment block that uses the SM policy BIBREF3 with $p=0.3$ and no time warping. The encoder network comprises 8 bidirectional LSTM layers with 1536 nodes per direction per layer BIBREF33, BIBREF34. As shown in Fig. 
FIGREF1, each LSTM block in the encoder includes a residual connection with a linear transformation that bypasses the LSTM, a 1024-dimensional linear reduction layer on the LSTM output, and batch-normalization (BN) of the block output. A pyramidal structure BIBREF31 in the first two LSTM layers reduces the frame rate by a factor of 4. The final dimension of the encoder output is 256, enforced by a linear bottleneck. We apply 30% dropout to the LSTM outputs and 30% drop-connect to the hidden-to-hidden matrices BIBREF14, BIBREF35. As suggested by BIBREF36, the weight dropout is fixed for a batch of sequences. The attention based decoder model is illustrated in Fig. FIGREF1. The decoder models the sequence of 600 BPE units estimated on characters BIBREF37, where the BPE units are embedded in 256 dimensions. We use additive, location aware attention, without key/value transformations, and the attention is smoothed by 256, 5-dimensional kernels BIBREF38. The decoder block consists of 2 unidirectional LSTM layers: one is a dedicated language-model-like component with 512 nodes that operates only on the embedded predicted symbol sequence, and the other is a 768 unit layer processing acoustic and symbol information. The output of both LSTMs is reduced to 256 dimensions by a linear bottleneck BIBREF39. Fixed sequence-level weight dropout of 15% is applied in the decoder LSTMs, a dropout of 5% is applied to the embeddings, and a dropout of 15% is applied to the decoder LSTM outputs. The second LSTM in the decoder also uses zoneout, where the cell state update is deactivated with 15% probability and the recurrent feedback from the output maintains its previous value with 5% probability. Overall, the model has 280M parameters, of which only 5.4M are in the decoder. Aiming at the best word error rate, this design choice is based on our observation that an external language model has significantly larger effect if the decoder is not over-parametrized BIBREF32. The model is trained for 250 epochs on 32 P100 GPUs in less than 4 days using a PyTorch BIBREF40 implementation of distributed synchronous SGD with up to 32 sequences per GPU per batch. Training uses a learning rate of 0.03 and Nesterov momentum BIBREF41 of 0.9. The weight decay parameter is 4e-6, the label smoothing parameter is 0.35, and teacher forcing is fixed to 0.8 throughout training. In the first 3 epochs the learning rate is warmed up and batch size is gradually increased from 8 to 32 BIBREF42. In the first 35 epochs, the neural network is trained on sequences sorted in ascending order of length of the input. Afterwards, batches are randomized within length buckets, ensuring that a batch always contains sequences with similar length. Weight noise from a normal distribution with mean 0.0 and variance 0.015 is switched on after 70 epochs. After 110 epochs, the updates of sufficient statistics in the batch-normalization layers are turned off, converting them into fixed affine transformations. The learning rate is annealed by 0.9 per epoch after 180 epochs of training, and simultaneously label smoothing is also switched off. The external language model (LM) is built on the BPE segmentation of 24M words from the Switchboard and Fisher corpora. It is trained for 40 epochs using label smoothing of 0.15 in the first 20 epochs. The baseline LM has 57M parameters and consists of 2 unidirectional LSTM layers with 2048 nodes BIBREF43 trained with drop-connect and dropout probabilities of 15%. 
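The epoch-dependent training schedule described above (warm-up, curriculum, weight noise, batch-norm freezing and learning-rate annealing) can be summarized in a small helper; the epoch boundaries are taken from the text, but the function structure is only an illustrative sketch.

```python
def training_phase(epoch):
    """Return the training switches the text describes for a given epoch."""
    return {
        "warmup": epoch <= 3,            # LR warm-up, batch size ramped from 8 to 32
        "sort_by_length": epoch <= 35,   # curriculum: ascending input length
        "weight_noise": epoch > 70,      # N(0, 0.015) noise added to the parameters
        "freeze_batchnorm": epoch > 110, # BN becomes a fixed affine transform
        "anneal_lr": epoch > 180,        # multiply the learning rate by 0.9 per epoch
        "label_smoothing": 0.35 if epoch <= 180 else 0.0,
    }

# e.g. training_phase(120) -> weight noise and frozen batch norm are active
```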
The embedding layer has 512 nodes, and the output of the last LSTM is projected to 128 dimensions. When the LM is trained and evaluated across utterances, consecutive segments of a single-channel recording are grouped together up to 40 seconds. Perplexities (PPL) are measured at the word level on the concatenation of ground truth transcripts, while the WER is obtained by retaining the LM state of the single-best hypothesis of the preceding utterance. Decoding uses simple beam search with a beam width of 60 hypotheses and no lexical prefix tree constraint BIBREF44. The search performs shallow fusion of the encoder-decoder score, the external language model score, a length normalization term, and a coverage term BIBREF45, BIBREF46, BIBREF47. For more details, please refer to BIBREF32. Hub5'00 is used as a development set to optimize decoding hyperparameters, while Hub5'01 and RT03 are used as final test sets. Experimental results Our current setup is the result of incremental development. Keeping in mind that several other equally powerful setups probably exist, the focus of the following experiments is to investigate ours around the current optimum. Experimental results ::: Effect of data preparation We first investigate the importance of different data processing steps. The s5c Kaldi recipe includes a duplicate filtering step, in which the maximum number of occurrences of utterances with the same content is limited. We measure the impact of duplicate filtering and also the effect of filtering out word fragments and noise tokens from the training transcripts. Since the LM is trained on identically filtered transcripts from Fisher+Switchboard data, word fragment and noise token filters were applied consistently. The results are summarized in Table TABREF5. Deactivating the duplicate filter is never harmful when an external LM is used, and the gains on CallHome can be substantial. Considering performance on the complete Hub5'00 data, the best systems either explicitly handle both word fragments and noise tokens or filter them all out. When an external LM is used, the best results are obtained when word fragment and noise token filters are activated and the duplicate filter is deactivated. This setting is also appealing in cases where the external LM may be trained on text data that will not contain word fragments or noise; thus, the remaining experiments are carried out with this system setting. Experimental results ::: Ablation study In a second set of experiments, we characterize the importance of each of the regularization methods described in Sec. SECREF2 for our model performance by switching off one training method at a time without re-optimizing the remaining settings. In these experiments, decoding is performed without an external language model. Curriculum learning is evaluated by either switching to randomized batches after 35 epochs or leaving the sorting on throughout training. We also test the importance of $\Delta $ and $\Delta \Delta $ features BIBREF48. Sorting the results by decreasing number of absolute errors on Hub5'00, Table TABREF7 indicates that each regularization method contributes to the improved WER. SpecAugment is by far the most important method, while using $\Delta $ and $\Delta \Delta $ features or switching off the curriculum learning in the later stage of training have marginal but positive effects. 
Other direct input level perturbation steps (speed/tempo perturbation and sequence noise injection) are also key techniques that can be found in the upper half of the table. If we compare the worst and baseline models, we find that the relative performance difference between them is nearly unchanged by including the external LM in decoding. Without the LM, the gap is 18% relative, while with the LM the gap is 17% relative. This clearly underlines the importance of the regularization techniques. Experimental results ::: Optimizing the language model The following experiments summarize our optimization of the LM. Compared to our previous LM BIBREF24, we measure better perplexity and WER if no bottleneck is used before the softmax layer (rows 1 and 3 in Table TABREF9). Increasing the model capacity to 122M parameters results in a significant gain in PPL only after the dropout rates are tuned (rows 3, 5 and 6). Similar to BIBREF49, BIBREF50, significant PPL gain is observed if the LM was trained across utterances. However, this PPL improvement does not translate into reduced WER with a bigger model when cross utterance modeling is used (rows 4 and 7). Thus, in all other experiments we use the smaller, 57M-parameter model. Experimental results ::: Effect of beam size and number of parameters A 280M-parameter model may be larger than is practical in many applications. Thus, we also conduct experiments to see if this model size is necessary for reasonable ASR performance. Models are trained without changing the training configuration, except that the size or number of LSTM layers is reduced. As Table TABREF11 shows, although our smallest attention based model achieves reasonable results on this task, a significant loss is indeed observed with decreasing model size, especially on CallHome. Nevertheless, an external language model reduces the performance gap. A small, 57M-parameter model together with a similar size language model is only 5% relative worse than our largest model. We note that this model already outperforms the best published attention based seq2seq model BIBREF3, with roughly 66% fewer parameters. Additional experiments are carried out to characterize the search and modeling errors in decoding. The results of tuning the beam size and keeping the other search hyperparameters unchanged are shown in Fig. FIGREF12. “Small” denotes the 57M model, while “large” denotes the 280M model. When greedy search (beam 1) is used, the external language model increases WER, an effect that might be mitigated with re-optimized hyperparameters. Nevertheless, if a beam of at least 2 hypotheses is used, the positive effect of the language model is clear. We also observe that without the language model the search saturates much earlier, around beam 8, fluctuating within only a few absolute errors afterwards. On the contrary, decoding with the language model, we measure consistent but small gains with larger beams. The minimum number of word errors was measured with a relatively large beam of 240. The figure also shows that the effect of a cross-utterance language model grows with larger beams. Lastly, if the model is trained on 2000 hours of speech data (see next section), the extremely fast greedy decoding gives remarkably good performance. Although the importance of beam search decreases with an increased amount of training data, we still measure 10% relative degradation compared to a system with a cross-utterance LM and wide (240) beam search. 
Experimental results ::: Experiments on Switchboard-2000 As a contrast to our best results on Switchboard-300, we also train a seq2seq model on the 2000-hour Switchboard+Fisher data. This model consists of 10 encoder layers, and is trained for only 50 epochs. Our overall results on the Hub5'00 and other evaluation sets are summarized in Table TABREF14. The results in Fig. FIGREF12 and Table TABREF14 show that adding more training data greatly improves the system, by around 30% relative in some cases. For comparison with others, the 2000-hour system reaches 8.7% and 7.4% WER on rt02 and rt04. We observe that the regularization techniques, which are extremely important on the 300h setup, are still beneficial but have a significantly smaller effect. Comparison with the literature For comparison with results in the literature we refer to the Switchboard-300 results in BIBREF3, BIBREF7, BIBREF51, BIBREF52 and the Switchboard-2000 results in BIBREF50, BIBREF51, BIBREF53, BIBREF54, BIBREF55, BIBREF56. Our 300-hour model not only outperforms the previous best attention based encoder-decoder model BIBREF3 by a large margin, it also surpasses the best hybrid systems with multiple LMs BIBREF7. Our result on Switchboard-2000 is also better than any single system results reported to date, and reaches the performance of the best system combinations. Conclusions We presented an attention based encoder-decoder setup which achieves state-of-the-art performance on Switchboard-300. A rather simple model built from LSTM layers and a decoder with a single-headed attention mechanism outperforms the standard hybrid approach. This is particularly remarkable given that in our model neither a pronunciation lexicon nor a speech model with explicit hidden state representations is needed. We also demonstrated that excellent results are possible with smaller models and with practically search-free, greedy decoding. The best results were achieved with a speaker independent model in a single decoding pass, using a minimalistic search algorithm, and without any attention mechanism in the language model. Thus, we believe that further improvements are still possible if we apply a more complicated sequence-level training criterion and speaker adaptation.
300-hour English conversational speech
7f452eb145d486c15ac4d1107fc914e48ebba60f
7f452eb145d486c15ac4d1107fc914e48ebba60f_0
Q: What crowdsourcing platform is used for data collection and data validation? Text: Introduction The Common Voice project is a response to the current state of affairs in speech technology, in which training data is either prohibitively expensive or unavailable for most languages BIBREF0. We believe that speech technology (like all technology) should be decentralized and open, and the Common Voice project achieves this goal via a mix of community building, open source tooling, and a permissive licensing scheme. The corpus is designed to organically scale to new languages as community members use the provided tools to translate the interface, submit text sentences, and finally record and validate voices in their new language . The project was started with an initial focus on English in July 2017 and then in June 2018 was made available for any language. The remainder of the paper is laid out as follows: In Section (SECREF2) we motivate Common Voice and review previous multilingual corpora. Next, in Section (SECREF3) we describe the recording and validation process used to create the corpus. Next, in Section (SECREF4) we describe the current contents of Common Voice, and lastly in Section (SECREF5) we show multilingual Automatic Speech Recognition experiments using the corpus. Prior work Some notable multilingual speech corpora include VoxForge BIBREF1, Babel BIBREF2, and M-AILABS BIBREF3. Even though the Babel corpus contains high-quality data from 22 minority languages, it is not released under an open license. VoxForge is most similar to Common Voice in that it is community-driven, multilingual (17 languages), and released under an open license (GNU General Public License). However, the VoxForge does not have a sustainable data collection pipeline compared to Common Voice, and there is no data validation step in place. M-AILABS data contains 9 language varieties with a modified BSD 3-Clause License, however there is no community-driven aspect. Common Voice is a sustainable, open alternative to these projects which allows for collection of minority and majority languages alike. Corpus Creation The data presented in this paper was collected and validated via Mozilla's Common Voice initiative. Using either the Common Voice website or iPhone app, contributors record their voice by reading sentences displayed on the screen (see Figure (FIGREF5)). The recordings are later verified by other contributors using a simple voting system. Shown in Figure (FIGREF6), this validation interface has contributors mark $<$audio,transcript$>$ pairs as being either correct (up-vote) or incorrect (down-vote). A maximum of three contributors will listen to any audio clip. If an $<$audio,transcript$>$ pair first receives two up-votes, then the clip is marked as valid. If instead the clip first receives two down-votes, then it is marked as invalid. A contributor may switch between recording and validation as they wish. Only clips marked as valid are included in the official training, development, and testing sets for each language. Clips which did not recieve enough votes to be validated or invalidated by the time of release are released as “other”. The train, test, and development sets are bucketed such that any given speaker may appear in only one. This ensures that contributors seen at train time are not seen at test time, which would skew results. Additionally, repetitions of text sentences are removed from the train, test, and development sets of the corpus. 
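The validation rule described above is simple enough to state directly as code; the following is a restatement of the voting logic, not Mozilla's actual implementation.

```python
def clip_status(votes):
    """votes: iterable of booleans (True = up-vote), at most three per clip."""
    up = down = 0
    for v in votes:
        up += v
        down += (not v)
        if up == 2:
            return "valid"      # first to reach two up-votes
        if down == 2:
            return "invalid"    # first to reach two down-votes
    return "other"              # not enough votes by release time

assert clip_status([True, True]) == "valid"
assert clip_status([True, False, False]) == "invalid"
assert clip_status([True]) == "other"
```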
The number of clips is divided among the three datasets according to statistical power analyses. Given the total number of validated clips in a language, the number of clips in the test set is equal to the number needed to acheive a confidence level of 99% with a margin of error of 1% relative to the number of clips in the training set. The same is true of the development set. The audio clips are released as mono-channel, 16bit MPEG-3 files with a 48kHz sampling rate. The choice to collect and release MPEG-3 as opposed to a lossless audio format (e.g. WAV) is largely due to the web-based nature of the Common Voice collection platform. MPEG-3 is the most universally supported audio format for the web, and as such is the most reliable recording/playback technique for various devices and browsers. Also practically speaking, the audio quality is appropriate for speech applications. Corpus Contents ::: Released Languages The data presented in Table (TABREF12) shows the currently available data. Each of the released languages is available for individual download as a compressed directory from the Mozilla Common Voice website. The directory contains six files with Tab-Separated Values (i.e. TSV files), and a single clips subdirectory which contains all of the audio data. Each of the six TSV files represents a different segment of the voice data, with all six having the following column headers: [client_id, path, sentence, up_votes, down_votes, age, gender, accent]. The first three columns refer to an anonymized ID for the speaker, the location of the audio file, and the text that was read. The next two columns contain information on how listeners judged the $<$audio,transcript$>$ pair. The last three columns represent demographic data which was optionally self-reported by the speaker of the audio. Corpus Contents ::: Adding a new Language In order to add a new language to the Common Voice project, two steps must be completed. First, the web-app user interface must be translated into the target language. For example, the text shown in Figure (FIGREF7) must be translated. Secondly, text prompts must be gathered in order to be read aloud. These texts are not translated, but gathered from scratch for each language – translation would be very slow and not scaleable. Translation of the interface is managed through the Pontoon platform. Pontoon allows community members to propose translations, and then the moderators for that language approve or decline the proposals. At the time of writing this paper there are 610 text strings used in the Common Voice interface, where each string can range in length from an isolated word to a paragraph of text. Collecting text for reading aloud is the second step of adding a new language to Common Voice. For languages with more than 500,000 Wikipedia articles, text sentences are extracted from Wikipedia using community provided rule-sets per language. These sentences make up the initial text prompts for the languages in question. Any language community can gather additional sentences through the Sentence Collector taking advantage of automatic validation mechanisms such as checks for sentence length, foreign alphabets, and numbers. Every sentence submitted through the Sentence Collector needs to be approved by two out of three reviewers, leading to a weekly export of new sentences into the Common Voice database. Once the website is translated and at least 5,000 sentences have been added, the language is enabled for voice recordings. 
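The automatic checks mentioned for the Sentence Collector can be pictured as a small filter; the word-count limit and the exact checks below are illustrative guesses rather than the real validation rules.

```python
import re

def acceptable_sentence(text, alphabet, max_words=14):
    """Rough sketch of Sentence Collector style checks:
    sentence length, digits, and characters outside the language's alphabet."""
    if not text or len(text.split()) > max_words:
        return False                      # too long to read aloud comfortably
    if re.search(r"\d", text):
        return False                      # numbers are rejected
    letters = {c.lower() for c in text if c.isalpha()}
    return letters <= set(alphabet)       # no foreign-alphabet characters

english = "abcdefghijklmnopqrstuvwxyz"
print(acceptable_sentence("The cat sat on the mat.", english))   # True
print(acceptable_sentence("Room 101 is closed.", english))       # False (contains digits)
```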
Automatic Speech Recognition Experiments The following experiments demonstrate the potential to use the Common Voice corpus for multilingual speech research. These results represent work on an internal version of Common Voice from February 2019. The current corpus contains more languages and more data per language. These experiments use an End-to-End Transfer Learning approach which bypasses the need for linguistic resources or domain expertise BIBREF4. Certain layers are copied from a pre-trained English source model, new layers are initialized for a target language, the old and new layers are stitched together, and all layers are fine-tuned via gradient descent. Automatic Speech Recognition Experiments ::: Data We made dataset splits (c.f. Table (TABREF19)) such that one speaker's recordings are only present in one data split. This allows us to make a fair evaluation of speaker generalization, but as a result some training sets have very few speakers, making this an even more challenging scenario. The splits per language were made as close as possible to 80% train, 10% development, and 10% test. Results from this dataset are interesting because the text and audio are challenging, the range of languages is wider than any openly available speech corpus, and the amount of data per language ranges from very small (less than 1,000 clips for Slovenian) to relatively large (over 65,000 clips for German). Automatic Speech Recognition Experiments ::: Model architecture All reported results were obtained with Mozilla's DeepSpeech v0.3.0 — an open-source implementation of a variation of Baidu's first DeepSpeech paper BIBREF5. This architecture is an end-to-end Automatic Speech Recognition (ASR) model trained via stochastic gradient descent with a Connectionist Temporal Classification (CTC) loss function BIBREF6. The model is six layers deep: three fully connected layers followed by a unidirectional LSTM layer followed by two more fully connected layers (c.f. Figure (FIGREF21)). All hidden layers have a dimensionality of 2,048 and a clipped ReLU activation. The output layer has as many dimensions as characters in the alphabet of the target language (including any desired punctuation as well as the blank symbol used for CTC). The input layer accepts a vector of 19 spliced frames (9 past frames + 1 present frame + 9 future frames) with 26 MFCC features each (i.e. a single, 494-dimensional vector). All models were trained with the following hyperparameters on a single GPU. We use a batch-size of 24 for train and 48 for development, a dropout rate of 20%, and a learning rate of $0.0001$ with the ADAM optimizer. The new, target-language layers were initialized via Xavier initialization BIBREF7. After every epoch of backpropagation over the training set, the loss over the entire development set is calculated. This development loss is used to trigger early stopping. Early stopping is triggered when the loss on the held-out development set either (1) increases over a window of five sequential epochs, or (2) the most recent loss over the development set has not improved in a window of five epochs more than a mean loss threshold of $0.5$ and the window of losses shows a standard deviation of less than $0.5$. Results The results from all experiments can be found in Table (TABREF23). Each cell in the table contains the Character Error Rate of the resulting model on the test set, defined as the Levenshtein distance BIBREF8 of the characters between the ground-truth transcript and the decoding result. 
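The Character Error Rate just defined can be computed with a textbook edit-distance routine; the sketch below is a generic implementation rather than the evaluation script used in the paper.

```python
def levenshtein(a, b):
    """Minimum number of character insertions, deletions and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def character_error_rate(reference, hypothesis):
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

print(character_error_rate("common voice", "comon voise"))  # 2 edits / 12 chars ~ 0.167
```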
The results in Table (TABREF23) show how the number of layers transferred (columns) influences the performance on individual target languages (rows). Shaded cells indicate relative performance per language, where a darker cell represents a more accurate model. From this table we observe a trend in which four layers copied from pre-trained English DeepSpeech result in the best final model. This trend becomes more obvious in Figure (FIGREF24), where we average the improvement over all languages relative to the number of layers transferred from a source model. Concluding remarks We have presented Common Voice: a crowd-sourced, multilingual speech corpus which can scale to any language via community effort. All of the speech data is released under a Creative Commons CC0 license, making Common Voice the largest public domain corpus designed for Automatic Speech Recognition. In Section (SECREF3) we described the recording and validation process used to create the corpus. In Section (SECREF4) we presented the current contents of Common Voice, and lastly in Section (SECREF5) we showed multilingual Automatic Speech Recognition experiments using the corpus. There are currently 38 language communities collecting data via Common Voice, and we welcome more languages and more volunteers. Acknowledgements Common Voice is a living project, and would not be possible without the thousands of hours given by volunteers. We thank all volunteers for their time, and especially the minority language activists who translate, find new texts, and organize Common Voice donation events. We thank George Roter, Gheorghe Railean, Rubén Martín, and Jane Scowcroft for their work on Common Voice, and all members of the Common Voice team, past and present. This material is based upon work performed while Josh Meyer was supported by the National Science Foundation under Grant No. (DGE-1746060). Opinions, findings, conclusions, and recommendations are those of the authors and do not necessarily reflect the views of the NSF.
the Common Voice website, iPhone app
bb71a638668a21c2d446b44cbf51676c839658f7
bb71a638668a21c2d446b44cbf51676c839658f7_0
Q: How is validation of the data performed? Text: Introduction The Common Voice project is a response to the current state of affairs in speech technology, in which training data is either prohibitively expensive or unavailable for most languages BIBREF0. We believe that speech technology (like all technology) should be decentralized and open, and the Common Voice project achieves this goal via a mix of community building, open source tooling, and a permissive licensing scheme. The corpus is designed to organically scale to new languages as community members use the provided tools to translate the interface, submit text sentences, and finally record and validate voices in their new language . The project was started with an initial focus on English in July 2017 and then in June 2018 was made available for any language. The remainder of the paper is laid out as follows: In Section (SECREF2) we motivate Common Voice and review previous multilingual corpora. Next, in Section (SECREF3) we describe the recording and validation process used to create the corpus. Next, in Section (SECREF4) we describe the current contents of Common Voice, and lastly in Section (SECREF5) we show multilingual Automatic Speech Recognition experiments using the corpus. Prior work Some notable multilingual speech corpora include VoxForge BIBREF1, Babel BIBREF2, and M-AILABS BIBREF3. Even though the Babel corpus contains high-quality data from 22 minority languages, it is not released under an open license. VoxForge is most similar to Common Voice in that it is community-driven, multilingual (17 languages), and released under an open license (GNU General Public License). However, the VoxForge does not have a sustainable data collection pipeline compared to Common Voice, and there is no data validation step in place. M-AILABS data contains 9 language varieties with a modified BSD 3-Clause License, however there is no community-driven aspect. Common Voice is a sustainable, open alternative to these projects which allows for collection of minority and majority languages alike. Corpus Creation The data presented in this paper was collected and validated via Mozilla's Common Voice initiative. Using either the Common Voice website or iPhone app, contributors record their voice by reading sentences displayed on the screen (see Figure (FIGREF5)). The recordings are later verified by other contributors using a simple voting system. Shown in Figure (FIGREF6), this validation interface has contributors mark $<$audio,transcript$>$ pairs as being either correct (up-vote) or incorrect (down-vote). A maximum of three contributors will listen to any audio clip. If an $<$audio,transcript$>$ pair first receives two up-votes, then the clip is marked as valid. If instead the clip first receives two down-votes, then it is marked as invalid. A contributor may switch between recording and validation as they wish. Only clips marked as valid are included in the official training, development, and testing sets for each language. Clips which did not recieve enough votes to be validated or invalidated by the time of release are released as “other”. The train, test, and development sets are bucketed such that any given speaker may appear in only one. This ensures that contributors seen at train time are not seen at test time, which would skew results. Additionally, repetitions of text sentences are removed from the train, test, and development sets of the corpus. 
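One way to realize the speaker-disjoint bucketing described above is to assign whole speakers, rather than individual clips, to splits. The following sketch splits by speaker count, which only approximates the roughly 80/10/10 clip ratios used later and is not the project's actual code.

```python
import random
from collections import defaultdict

def speaker_disjoint_split(clips, ratios=(0.8, 0.1, 0.1), seed=0):
    """clips: list of (client_id, path) pairs. Returns train/dev/test clip lists,
    guaranteeing that no speaker appears in more than one split."""
    by_speaker = defaultdict(list)
    for client_id, path in clips:
        by_speaker[client_id].append(path)

    speakers = sorted(by_speaker)
    random.Random(seed).shuffle(speakers)

    n = len(speakers)
    cut1, cut2 = int(ratios[0] * n), int((ratios[0] + ratios[1]) * n)
    groups = (speakers[:cut1], speakers[cut1:cut2], speakers[cut2:])
    return [sum((by_speaker[s] for s in g), []) for g in groups]
```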
The number of clips is divided among the three datasets according to statistical power analyses. Given the total number of validated clips in a language, the number of clips in the test set is equal to the number needed to acheive a confidence level of 99% with a margin of error of 1% relative to the number of clips in the training set. The same is true of the development set. The audio clips are released as mono-channel, 16bit MPEG-3 files with a 48kHz sampling rate. The choice to collect and release MPEG-3 as opposed to a lossless audio format (e.g. WAV) is largely due to the web-based nature of the Common Voice collection platform. MPEG-3 is the most universally supported audio format for the web, and as such is the most reliable recording/playback technique for various devices and browsers. Also practically speaking, the audio quality is appropriate for speech applications. Corpus Contents ::: Released Languages The data presented in Table (TABREF12) shows the currently available data. Each of the released languages is available for individual download as a compressed directory from the Mozilla Common Voice website. The directory contains six files with Tab-Separated Values (i.e. TSV files), and a single clips subdirectory which contains all of the audio data. Each of the six TSV files represents a different segment of the voice data, with all six having the following column headers: [client_id, path, sentence, up_votes, down_votes, age, gender, accent]. The first three columns refer to an anonymized ID for the speaker, the location of the audio file, and the text that was read. The next two columns contain information on how listeners judged the $<$audio,transcript$>$ pair. The last three columns represent demographic data which was optionally self-reported by the speaker of the audio. Corpus Contents ::: Adding a new Language In order to add a new language to the Common Voice project, two steps must be completed. First, the web-app user interface must be translated into the target language. For example, the text shown in Figure (FIGREF7) must be translated. Secondly, text prompts must be gathered in order to be read aloud. These texts are not translated, but gathered from scratch for each language – translation would be very slow and not scaleable. Translation of the interface is managed through the Pontoon platform. Pontoon allows community members to propose translations, and then the moderators for that language approve or decline the proposals. At the time of writing this paper there are 610 text strings used in the Common Voice interface, where each string can range in length from an isolated word to a paragraph of text. Collecting text for reading aloud is the second step of adding a new language to Common Voice. For languages with more than 500,000 Wikipedia articles, text sentences are extracted from Wikipedia using community provided rule-sets per language. These sentences make up the initial text prompts for the languages in question. Any language community can gather additional sentences through the Sentence Collector taking advantage of automatic validation mechanisms such as checks for sentence length, foreign alphabets, and numbers. Every sentence submitted through the Sentence Collector needs to be approved by two out of three reviewers, leading to a weekly export of new sentences into the Common Voice database. Once the website is translated and at least 5,000 sentences have been added, the language is enabled for voice recordings. 
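The statistical power analysis mentioned at the start of this passage is commonly implemented with Cochran's sample-size formula plus a finite-population correction; the paper does not spell out its exact computation, so the following is only one plausible reading.

```python
import math

def sample_size(population, confidence_z=2.576, margin=0.01, p=0.5):
    """Sample size for a given confidence level and margin of error,
    corrected for a finite population of validated clips.
    confidence_z=2.576 corresponds to a 99% confidence level."""
    n0 = (confidence_z ** 2) * p * (1 - p) / (margin ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# e.g. with 100,000 validated clips this reserves roughly 14 thousand for the test set;
# the same computation is applied again to size the development set
print(sample_size(100_000))
```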
Automatic Speech Recognition Experiments The following experiments demonstrate the potential to use the Common Voice corpus for multilingual speech research. These results represent work on an internal version of Common Voice from February 2019. The current corpus contains more languages and more data per language. These experiments use an End-to-End Transfer Learning approach which bypasses the need for linguistic resources or domain expertise BIBREF4. Certain layers are copied from a pre-trained English source model, new layers are initialized for a target language, the old and new layers are stitched together, and all layers are fine-tuned via gradient descent. Automatic Speech Recognition Experiments ::: Data We made dataset splits (c.f. Table (TABREF19)) such that one speaker's recordings are only present in one data split. This allows us to make a fair evaluation of speaker generalization, but as a result some training sets have very few speakers, making this an even more challenging scenario. The splits per language were made as close as possible to 80% train, 10% development, and 10% test. Results from this dataset are interesting because the text and audio are challenging, the range of languages is wider than any openly available speech corpus, and the amount of data per language ranges from very small (less than 1,000 clips for Slovenian) to relatively large (over 65,000 clips for German). Automatic Speech Recognition Experiments ::: Model architecture All reported results were obtained with Mozilla's DeepSpeech v0.3.0 — an open-source implementation of a variation of Baidu's first DeepSpeech paper BIBREF5. This architecture is an end-to-end Automatic Speech Recognition (ASR) model trained via stochastic gradient descent with a Connectionist Temporal Classification (CTC) loss function BIBREF6. The model is six layers deep: three fully connected layers followed by a unidirectional LSTM layer followed by two more fully connected layers (c.f. Figure (FIGREF21)). All hidden layers have a dimensionality of 2,048 and a clipped ReLU activation. The output layer has as many dimensions as characters in the alphabet of the target language (including any desired punctuation as well as the blank symbol used for CTC). The input layer accepts a vector of 19 spliced frames (9 past frames + 1 present frame + 9 future frames) with 26 MFCC features each (i.e. a single, 494-dimensional vector). All models were trained with the following hyperparameters on a single GPU. We use a batch-size of 24 for train and 48 for development, a dropout rate of 20%, and a learning rate of $0.0001$ with the ADAM optimizer. The new, target-language layers were initialized via Xavier initialization BIBREF7. After every epoch of backpropagation over the training set, the loss over the entire development set is calculated. This development loss is used to trigger early stopping. Early stopping is triggered when the loss on the held-out development set either (1) increases over a window of five sequential epochs, or (2) the most recent loss over the development set has not improved in a window of five epochs more than a mean loss threshold of $0.5$ and the window of losses shows a standard deviation of less than $0.5$. Results The results from all experiments can be found in Table (TABREF23). Each cell in the table contains the Character Error Rate of the resulting model on the test set, defined as the Levenshtein distance BIBREF8 of the characters between the ground-truth transcript and the decoding result. 
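The six-layer acoustic model described above can be sketched at the shape level as follows. DeepSpeech v0.3.0 itself is TensorFlow-based, so this PyTorch rendering is only an approximation; in particular, the clipping value of the ReLU is assumed rather than stated in the text.

```python
import torch.nn as nn

class DeepSpeechLike(nn.Module):
    """Shape-level sketch: three clipped-ReLU dense layers, one unidirectional
    LSTM, two more dense layers, per-frame outputs over the alphabet (incl. CTC blank)."""
    def __init__(self, n_outputs, hidden=2048, context=19, n_mfcc=26):
        super().__init__()
        in_dim = context * n_mfcc          # 19 spliced frames x 26 MFCC = 494 features
        clip = nn.Hardtanh(0, 20)          # clipped ReLU; the clip value 20 is assumed
        self.dense_in = nn.Sequential(
            nn.Linear(in_dim, hidden), clip,
            nn.Linear(hidden, hidden), clip,
            nn.Linear(hidden, hidden), clip,
        )
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.dense_out = nn.Sequential(
            nn.Linear(hidden, hidden), clip,
            nn.Linear(hidden, n_outputs),  # alphabet size, including the CTC blank
        )

    def forward(self, x):                  # x: (batch, time, 494)
        h = self.dense_in(x)
        h, _ = self.lstm(h)
        return self.dense_out(h).log_softmax(dim=-1)  # log-probs for the CTC loss
```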
The results in Table (TABREF23) show how the number of layers transferred (columns) influences the performance on individual target languages (rows). Shaded cells indicate relative performance per language, where a darker cell represents a more accurate model. From this table we observe a trend in which four layers copied from pre-trained English DeepSpeech result in the best final model. This trend becomes more obvious in Figure (FIGREF24), where we average the improvement over all languages relative to the number of layers transferred from a source model. Concluding remarks We have presented Common Voice: a crowd-sourced, multilingual speech corpus which can scale to any language via community effort. All of the speech data is released under a Creative Commons CC0 license, making Common Voice the largest public domain corpus designed for Automatic Speech Recognition. In Section (SECREF3) we described the recording and validation process used to create the corpus. In Section (SECREF4) we presented the current contents of Common Voice, and lastly in Section (SECREF5) we showed multilingual Automatic Speech Recognition experiments using the corpus. There are currently 38 language communities collecting data via Common Voice, and we welcome more languages and more volunteers. Acknowledgements Common Voice is a living project, and would not be possible without the thousands of hours given by volunteers. We thank all volunteers for their time, and especially the minority language activists who translate, find new texts, and organize Common Voice donation events. We thank George Roter, Gheorghe Railean, Rubén Martín, and Jane Scowcroft for their work on Common Voice, and all members of the Common Voice team, past and present. This material is based upon work performed while Josh Meyer was supported by the National Science Foundation under Grant No. (DGE-1746060). Opinions, findings, conclusions, and recommendations are those of the authors and do not necessarily reflect the views of the NSF.
A maximum of three contributors will listen to any audio clip. If an $<$audio,transcript$>$ pair first receives two up-votes, then the clip is marked as valid. If instead the clip first receives two down-votes, then it is marked as invalid.
5fa464a158dc8abf7cef8ca7d42a7080670c1edd
5fa464a158dc8abf7cef8ca7d42a7080670c1edd_0
Q: Is audio data per language balanced in dataset? Text: Introduction The Common Voice project is a response to the current state of affairs in speech technology, in which training data is either prohibitively expensive or unavailable for most languages BIBREF0. We believe that speech technology (like all technology) should be decentralized and open, and the Common Voice project achieves this goal via a mix of community building, open source tooling, and a permissive licensing scheme. The corpus is designed to organically scale to new languages as community members use the provided tools to translate the interface, submit text sentences, and finally record and validate voices in their new language . The project was started with an initial focus on English in July 2017 and then in June 2018 was made available for any language. The remainder of the paper is laid out as follows: In Section (SECREF2) we motivate Common Voice and review previous multilingual corpora. Next, in Section (SECREF3) we describe the recording and validation process used to create the corpus. Next, in Section (SECREF4) we describe the current contents of Common Voice, and lastly in Section (SECREF5) we show multilingual Automatic Speech Recognition experiments using the corpus. Prior work Some notable multilingual speech corpora include VoxForge BIBREF1, Babel BIBREF2, and M-AILABS BIBREF3. Even though the Babel corpus contains high-quality data from 22 minority languages, it is not released under an open license. VoxForge is most similar to Common Voice in that it is community-driven, multilingual (17 languages), and released under an open license (GNU General Public License). However, the VoxForge does not have a sustainable data collection pipeline compared to Common Voice, and there is no data validation step in place. M-AILABS data contains 9 language varieties with a modified BSD 3-Clause License, however there is no community-driven aspect. Common Voice is a sustainable, open alternative to these projects which allows for collection of minority and majority languages alike. Corpus Creation The data presented in this paper was collected and validated via Mozilla's Common Voice initiative. Using either the Common Voice website or iPhone app, contributors record their voice by reading sentences displayed on the screen (see Figure (FIGREF5)). The recordings are later verified by other contributors using a simple voting system. Shown in Figure (FIGREF6), this validation interface has contributors mark $<$audio,transcript$>$ pairs as being either correct (up-vote) or incorrect (down-vote). A maximum of three contributors will listen to any audio clip. If an $<$audio,transcript$>$ pair first receives two up-votes, then the clip is marked as valid. If instead the clip first receives two down-votes, then it is marked as invalid. A contributor may switch between recording and validation as they wish. Only clips marked as valid are included in the official training, development, and testing sets for each language. Clips which did not recieve enough votes to be validated or invalidated by the time of release are released as “other”. The train, test, and development sets are bucketed such that any given speaker may appear in only one. This ensures that contributors seen at train time are not seen at test time, which would skew results. Additionally, repetitions of text sentences are removed from the train, test, and development sets of the corpus. 
The number of clips is divided among the three datasets according to statistical power analyses. Given the total number of validated clips in a language, the number of clips in the test set is equal to the number needed to achieve a confidence level of 99% with a margin of error of 1% relative to the number of clips in the training set. The same is true of the development set. The audio clips are released as mono-channel, 16-bit MPEG-3 files with a 48kHz sampling rate. The choice to collect and release MPEG-3 as opposed to a lossless audio format (e.g. WAV) is largely due to the web-based nature of the Common Voice collection platform. MPEG-3 is the most universally supported audio format for the web, and as such is the most reliable recording/playback technique for various devices and browsers. Also practically speaking, the audio quality is appropriate for speech applications. Corpus Contents ::: Released Languages The data presented in Table (TABREF12) shows the currently available data. Each of the released languages is available for individual download as a compressed directory from the Mozilla Common Voice website. The directory contains six files with Tab-Separated Values (i.e. TSV files), and a single clips subdirectory which contains all of the audio data. Each of the six TSV files represents a different segment of the voice data, with all six having the following column headers: [client_id, path, sentence, up_votes, down_votes, age, gender, accent]. The first three columns refer to an anonymized ID for the speaker, the location of the audio file, and the text that was read. The next two columns contain information on how listeners judged the <audio, transcript> pair. The last three columns represent demographic data which was optionally self-reported by the speaker of the audio. Corpus Contents ::: Adding a new Language In order to add a new language to the Common Voice project, two steps must be completed. First, the web-app user interface must be translated into the target language. For example, the text shown in Figure (FIGREF7) must be translated. Secondly, text prompts must be gathered in order to be read aloud. These texts are not translated, but gathered from scratch for each language – translation would be very slow and not scalable. Translation of the interface is managed through the Pontoon platform. Pontoon allows community members to propose translations, and then the moderators for that language approve or decline the proposals. At the time of writing this paper there are 610 text strings used in the Common Voice interface, where each string can range in length from an isolated word to a paragraph of text. Collecting text for reading aloud is the second step of adding a new language to Common Voice. For languages with more than 500,000 Wikipedia articles, text sentences are extracted from Wikipedia using community-provided rule-sets per language. These sentences make up the initial text prompts for the languages in question. Any language community can gather additional sentences through the Sentence Collector, taking advantage of automatic validation mechanisms such as checks for sentence length, foreign alphabets, and numbers. Every sentence submitted through the Sentence Collector needs to be approved by two out of three reviewers, leading to a weekly export of new sentences into the Common Voice database. Once the website is translated and at least 5,000 sentences have been added, the language is enabled for voice recordings.
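Returning to how the split sizes are chosen at the start of this passage: the paper names a 99% confidence level and a 1% margin of error but not the exact formula, so the sketch below uses Cochran's sample-size formula with a finite population correction as one plausible reading. The z-score, p = 0.5, and the interpretation of the population are assumptions.

```python
import math

def split_size(total_validated_clips: int,
               z: float = 2.576,          # z-score for a ~99% confidence level
               margin_of_error: float = 0.01,
               p: float = 0.5) -> int:
    """Estimate how many clips the test (or development) set should contain.

    Cochran's formula with finite population correction; assumed here, since the
    paper only states the confidence level and margin of error."""
    n0 = (z ** 2) * p * (1.0 - p) / (margin_of_error ** 2)
    n = n0 / (1.0 + (n0 - 1.0) / total_validated_clips)  # finite population correction
    return math.ceil(n)

# Example: a language with roughly 65,000 validated clips (the German figure
# mentioned later in the paper) would get a test set on the order of 13,000 clips
# under these assumptions, with the development set sized the same way.
print(split_size(65_000))
```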
Automatic Speech Recognition Experiments The following experiments demonstrate the potential to use the Common Voice corpus for multilingual speech research. These results represent work on an internal version of Common Voice from February 2019. The current corpus contains more languages and more data per language. These experiments use an End-to-End Transfer Learning approach which bypasses the need for linguistic resources or domain expertise BIBREF4. Certain layers are copied from a pre-trained English source model, new layers are initialized for a target language, the old and new layers are stitched together, and all layers are fine-tuned via gradient descent. Automatic Speech Recognition Experiments ::: Data We made dataset splits (c.f. Table (TABREF19)) such that one speaker's recordings are only present in one data split. This allows us to make a fair evaluation of speaker generalization, but as a result some training sets have very few speakers, making this an even more challenging scenario. The splits per language were made as close as possible to 80% train, 10% development, and 10% test. Results from this dataset are interesting because the text and audio are challenging, the range of languages is wider than any openly available speech corpus, and the amount of data per language ranges from very small (less than 1,000 clips for Slovenian) to relatively large (over 65,000 clips for German). Automatic Speech Recognition Experiments ::: Model architecture All reported results were obtained with Mozilla's DeepSpeech v0.3.0 — an open-source implementation of a variation of Baidu's first DeepSpeech paper BIBREF5. This architecture is an end-to-end Automatic Speech Recognition (ASR) model trained via stochastic gradient descent with a Connectionist Temporal Classification (CTC) loss function BIBREF6. The model is six layers deep: three fully connected layers followed by a unidirectional LSTM layer followed by two more fully connected layers (c.f. Figure (FIGREF21)). All hidden layers have a dimensionality of 2,048 and a clipped ReLU activation. The output layer has as many dimensions as characters in the alphabet of the target language (including any desired punctuation as well as the blank symbol used for CTC). The input layer accepts a vector of 19 spliced frames (9 past frames + 1 present frame + 9 future frames) with 26 MFCC features each (i.e. a single, 494-dimensional vector). All models were trained with the following hyperparameters on a single GPU. We use a batch-size of 24 for train and 48 for development, a dropout rate of 20%, and a learning rate of $0.0001$ with the ADAM optimizer. The new, target-language layers were initialized via Xavier initialization BIBREF7. After every epoch of backpropagation over the training set, the loss over the entire development set is calculated. This development loss is used to trigger early stopping. Early stopping is triggered when the loss on the held-out development set either (1) increases over a window of five sequential epochs, or (2) the most recent loss over the development set has not improved in a window of five epochs more than a mean loss threshold of $0.5$ and the window of losses shows a standard deviation of less than $0.5$. Results The results from all experiments can be found in Table (TABREF23). Each cell in the table contains the Character Error Rate of the resulting model on the test set, defined as the Levenshtein distance BIBREF8 of the characters between the ground-truth transcript and the decoding result. 
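Before turning to the results, the sketch below approximates the six-layer model and the layer-transfer scheme described above in PyTorch. It is not the DeepSpeech v0.3.0 implementation; the clipped-ReLU ceiling of 20, the way dropout is attached, and the module layout are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class ClippedReLU(nn.Module):
    """ReLU clipped at an upper bound (the ceiling of 20 is an assumption)."""
    def __init__(self, ceiling: float = 20.0):
        super().__init__()
        self.ceiling = ceiling

    def forward(self, x):
        return torch.clamp(x, min=0.0, max=self.ceiling)

class DeepSpeechLike(nn.Module):
    """Six layers as described in the text: three fully connected layers, a
    unidirectional LSTM, and two more fully connected layers. Hidden layers are
    2,048-dimensional; the input is 19 spliced frames x 26 MFCC features = 494
    dimensions per step; the output has one unit per character plus the CTC blank."""
    def __init__(self, alphabet_size: int, hidden: int = 2048,
                 n_input: int = 494, dropout: float = 0.2):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Linear(n_input, hidden), ClippedReLU(), nn.Dropout(dropout)),
            nn.Sequential(nn.Linear(hidden, hidden), ClippedReLU(), nn.Dropout(dropout)),
            nn.Sequential(nn.Linear(hidden, hidden), ClippedReLU(), nn.Dropout(dropout)),
            nn.LSTM(hidden, hidden, batch_first=True),   # unidirectional LSTM layer
            nn.Sequential(nn.Linear(hidden, hidden), ClippedReLU(), nn.Dropout(dropout)),
            nn.Linear(hidden, alphabet_size + 1),        # +1 for the CTC blank symbol
        ])

    def forward(self, x):  # x: (batch, time, 494) spliced MFCC vectors
        for layer in self.layers:
            if isinstance(layer, nn.LSTM):
                x, _ = layer(x)
            else:
                x = layer(x)
        return x  # per-frame character logits, trained with nn.CTCLoss

def transfer_bottom_layers(source: DeepSpeechLike, target: DeepSpeechLike,
                           n_copied: int = 4) -> DeepSpeechLike:
    """Copy the first n_copied layers from a pre-trained source model (e.g. English)
    into a freshly initialized target-language model; the remaining layers keep
    their new (e.g. Xavier) initialization and the whole stack is then fine-tuned."""
    for i in range(n_copied):
        target.layers[i].load_state_dict(source.layers[i].state_dict())
    return target
```

With shared input and hidden sizes, the bottom layers have identical shapes across languages, so only the output layer, which depends on the target alphabet, must always be re-initialized.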
The results in Table (TABREF23) show how the number of layers transferred (columns) influences the performance on individual target languages (rows). Shaded cells indicate relative performance per language, where a darker cell represents a more accurate model. From this table we observe a trend in which four layers copied from pre-trained English DeepSpeech result in the best final model. This trend becomes more obvious in Figure (FIGREF24), where we average the improvement over all languages relative to the number of layers transferred from a source model. Concluding remarks We have presented Common Voice: a crowd-sourced, multilingual speech corpus which can scale to any language via community effort. All of the speech data is released under a Creative Commons CC0 license, making Common Voice the largest public domain corpus designed for Automatic Speech Recognition. In Section (SECREF3) we described the recording and validation process used to create the corpus. In Section (SECREF4) we presented the current contents of Common Voice, and lastly in Section (SECREF5) we showed multilingual Automatic Speech Recognition experiments using the corpus. There are currently 38 language communities collecting data via Common Voice, and we welcome more languages and more volunteers. Acknowledgements Common Voice is a living project, and would not be possible without the thousands of hours given by volunteers. We thank all volunteers for their time, and especially the minority language activists who translate, find new texts, and organize Common Voice donation events. We thank George Roter, Gheorghe Railean, Rubén Martín, and Jane Scowcroft for their work on Common Voice, and all members of the Common Voice team, past and present. This material is based upon work carried out while Josh Meyer was supported by the National Science Foundation under Grant No. (DGE-1746060). Opinions, findings, conclusions, and recommendations are those of the authors and do not necessarily reflect the views of the NSF.
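For reference, the Character Error Rate used throughout these results can be computed directly from the Levenshtein distance between the ground-truth transcript and the decoded hypothesis. The minimal sketch below normalizes by the reference length, a common convention that the paper does not spell out.

```python
def levenshtein(a: str, b: str) -> int:
    """Character-level edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def character_error_rate(reference: str, hypothesis: str) -> float:
    """CER as edit distance normalized by the reference length (assumed convention)."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

print(character_error_rate("common voice", "comon voce"))  # two edits over twelve characters
```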
No
e1f559da7fa501d3190073bca9ce4d4a12149e80
e1f559da7fa501d3190073bca9ce4d4a12149e80_0
Q: What is the performance of their model? Text: Introduction Text classification is a fundamental task in Natural Language processing which has been found useful in a wide spectrum of applications ranging from search engines enabling users to identify content on websites, sentiment and social media analysis, customer relationship management systems, and spam detection. Over the past several years, text classification has been predominantly modeled as a supervised learning problem (e.g., BIBREF0 , BIBREF1 , BIBREF2 ) for which appropriately labeled data must be collected. Such data is often domain-dependent (i.e., covering specific topics such as those relating to “Business” or “Medicine”) and a classifier trained using data from one domain is likely to perform poorly on another. For example, the phrase “the mouse died quickly” may indicate negative sentiment in a customer review describing the hand-held pointing device or positive sentiment when describing a laboratory experiment performed on a rodent. The ability to handle a wide variety of domains has become more pertinent with the rise of data-hungry machine learning techniques like neural networks and their application to a plethora of textual media ranging from news articles to twitter, blog posts, medical journals, Reddit comments, and parliamentary debates BIBREF0 , BIBREF3 , BIBREF4 , BIBREF5 . The question of how to best deal with multiple domains when training data is available for one or few of them has met with much interest in the literature. The field of domain adaptation BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 aims at improving the learning of a predictive function in a target domain where there is little or no labeled data, using knowledge transferred from a source domain where sufficient labeled data is available. Another line of work BIBREF11 , BIBREF12 , BIBREF13 assumes that labeled data may exist for multiple domains, but in insufficient amounts to train classifiers for one or more of them. The aim of multi-domain text classification is to leverage all the available resources in order to improve system performance across domains simultaneously. In this paper we investigate the question of how domain-specific data might be obtained in order to enable the development of text classification tools as well as more domain aware applications such as summarization, question answering, and information extraction. We refer to this task as domain detection and assume a fairly common setting where the domains of a corpus collection are known and the aim is to identify textual segments which are domain-heavy, i.e., documents, sentences, or phrases providing evidence for a given domain. Domain detection can be formulated as a multilabel classification problem, where a model is trained to recognize domain evidence at the sentence-, phrase-, or word-level. By definition then, domain detection would require training data with fine-grained domain labels, thereby increasing the annotation burden; we must provide labels for training domain detectors and for modeling the task we care about in the first place. In this paper we consider the problem of fine-grained domain detection from the perspective of Multiple Instance Learning (MIL; BIBREF14 ) and develop domain models with very little human involvement. Instead of learning from individually labeled segments, our model only requires document-level supervision and optionally prior domain knowledge and learns to introspectively judge the domain of constituent segments. 
Importantly, we do not require document-level domain annotations either since we obtain these via distant supervision by leveraging information drawn from Wikipedia. Our domain detection framework comprises two neural network modules; an encoder learns representations for words and sentences together with prior domain information if the latter is available (e.g., domain definitions), while a detector generates domain-specific scores for words, sentences, and documents. We obtain a segment-level domain predictor which is trained end-to-end on document-level labels using a hierarchical, attention-based neural architecture BIBREF15 . We conduct domain detection experiments on English and Chinese and measure system performance using both automatic and human-based evaluation. Experimental results show that our model outperforms several strong baselines and is robust across languages and text genres, despite learning from weak supervision. We also showcase our model's application potential for text summarization. Our contributions in this work are threefold; we propose domain detection, as a new fine-grained multilabel learning problem which we argue would benefit the development of domain aware NLP tools; we introduce a weakly supervised encoder-detector model within the context of multiple instance learning; and demonstrate that it can be applied across languages and text genres without modification. Related Work Our work lies at the intersection of multiple research areas, including domain adaptation, representation learning, multiple instance learning, and topic modeling. We review related work below. Problem Formulation We formulate domain detection as a multilabel learning problem. Our model is trained on samples of document-label pairs. Each document consists of INLINEFORM0 sentences INLINEFORM1 and is associated with discrete labels INLINEFORM2 . In this work, domain labels are not annotated manually but extrapolated from Wikipedia (see Section SECREF6 for details). In a non-MIL framework, a model typically learns to predict document labels by directly conditioning on its sentence representations INLINEFORM3 or their aggregate. In contrast, INLINEFORM4 under MIL is a learned function INLINEFORM5 of latent instance-level labels, i.e., INLINEFORM6 . A MIL classifier will therefore first produce domain scores for all instances (aka sentences), and then learn to integrate instance scores into a bag (aka document) prediction. In this paper we further assume that the instance-bag relation applies to sentences and documents but also to words and sentences. In addition, we incorporate prior domain information to facilitate learning in a weakly supervised setting: each domain is associated with a definition INLINEFORM0 , i.e., a few sentences providing a high-level description of the domain at hand. For example, the definition of the “Lifestyle” domain is “the interests, opinions, behaviors, and behavioral orientations of an individual, group, or culture”. Figure FIGREF5 provides an overview of our Domain Detection Network, which we call DetNet. The model comprises two modules; an encoder learns representations for words and sentences whilst incorporating prior domain information; a detector generates domain scores for words, sentences, and documents by selectively attending to previously encoded information. We describe the two modules in more detail below. The Encoder Module We learn representations for words and sentences using identical encoders with separate learning parameters. 
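Before the encoder details that follow, a minimal sketch of the instance-to-bag idea described in the Problem Formulation above: per-sentence domain scores are produced first and then combined into a document-level prediction. The attention-weighted pooling here is an illustrative choice, not necessarily the aggregation function DetNet learns.

```python
import torch
import torch.nn as nn

class MILDomainAggregator(nn.Module):
    """Toy multiple-instance-learning head: scores each sentence for every domain,
    then combines the instance (sentence) scores into a bag (document) prediction."""
    def __init__(self, sent_dim: int, n_domains: int):
        super().__init__()
        self.instance_scorer = nn.Linear(sent_dim, n_domains)  # sentence-level domain scores
        self.attention = nn.Linear(sent_dim, 1)                # how much each sentence matters

    def forward(self, sentences: torch.Tensor):
        # sentences: (n_sentences, sent_dim) for a single document
        instance_scores = torch.sigmoid(self.instance_scorer(sentences))  # (n_sent, n_domains)
        weights = torch.softmax(self.attention(sentences), dim=0)         # (n_sent, 1)
        document_scores = (weights * instance_scores).sum(dim=0)          # (n_domains,)
        return instance_scores, document_scores
```

Only the document-level scores require (weak) document labels, e.g. through a multilabel binary cross-entropy loss; the sentence-level scores are learned as a by-product, which is the property domain detection exploits.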
Given a document, the two encoders implement the following steps: INLINEFORM0 For each sentence INLINEFORM0 , the word-level encoder yields contextualized word representations INLINEFORM1 and their attention weights INLINEFORM2 . Sentence embeddings INLINEFORM3 are obtained via weighted averaging and then provided as input to the sentence-level encoder which outputs contextualized representations INLINEFORM4 and their attention weights INLINEFORM5 . In this work we aim to model fairly long documents (e.g., Wikipedia articles; see Section SECREF6 for details). For this reason, our encoder builds on the Transformer architecture BIBREF15 , a recently proposed highly efficient model which has achieved state-of-the-art performance in machine translation BIBREF15 and question answering BIBREF35 . The Transformer aims at reducing the fundamental constraint of sequential computation which underlies most architectures based on recurrent neural networks. It eliminates recurrence in favor of applying a self-attention mechanism which directly models relationships between all words in a sentence, regardless of their position. The Detector Module DetNet adopts three detectors corresponding to words, sentences, and documents: INLINEFORM0 WordDet first produces word domain scores using both lexical semantic information INLINEFORM0 and prior (domain) knowledge INLINEFORM1 ; SentDet yields domain scores for sentences while integrating downstream instance signals INLINEFORM2 and sentence semantics INLINEFORM3 ; finally, DocDet makes the final document-level predictions based on sentence scores. Experimental Setup [t] Document Generation Input: INLINEFORM0 : Label combinations INLINEFORM0 : Sentence subcorpora INLINEFORM0 : Maximum document length Output: A synthetic document [0] Generate INLINEFORM0 Generate a document domain set INLINEFORM1 INLINEFORM2 INLINEFORM0 Number of domain labels Generate a noisy domain INLINEFORM1 INLINEFORM2 INLINEFORM0 ; A set of candidate domain sets INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM0 Number of unused labels INLINEFORM1 Number of sentence blocks INLINEFORM2 For generated sentences INLINEFORM0 INLINEFORM1 Generate INLINEFORM2 Generate INLINEFORM3 sentences INLINEFORM4 INLINEFORM5 INLINEFORM6 INLINEFORM7 INLINEFORM8 Shuffle INLINEFORM9 INLINEFORM10 Automatic Evaluation In this section we present the results of our automatic evaluation for sentence and document predictions. Problematically, for sentence predictions we do not have gold-standard domain labels (we have only extrapolated these from Wikipedia for documents). We therefore developed an automatic approach for creating silver standard domain labels which we describe below. Human Evaluation Aside from automatic evaluation, we also assessed model performance against human elicited domain labels for sentences and words. The purpose of this experiment was threefold: (a) to validate the results obtained from automatic evaluation; (b) to evaluate finer-grained model performance at the word level; and (c) to examine whether our model generalizes to non-Wikipedia articles. For this, we created a third test set from the New York Times, in addition to our Wikipedia-based English and Chinese datasets. For all three corpora, we randomly sampled two documents for each domain, and then from each document, we sampled one long paragraph or a few consecutive short paragraphs containing 8–12 sentences. 
Amazon Mechanical Turkers were asked to read these sentences and assign a domain based on the seven labels used in this paper (multiple labels were allowed). Participants were provided with domain definitions. We obtained five annotations per sentence and adopted the majority label as the sentence's domain label. We obtained two annotated datasets for English (Wiki-en and NYT-en) and one for Chinese (Wiki-zh), consisting of 122/14, 111/11, and 117/12 sentences/documents each. Word-level domain evaluation is more challenging; taken out-of-context, individual words might be uninformative or carry meanings compatible with multiple domains. Expecting crowdworkers to annotate domain labels word-by-word with high confidence, might be therefore problematic. In order to reduce annotation complexity, we opted for a retrieval-style task for word evaluation. Specifically, AMT workers were given a sentence and its domain label (obtained from the sentence-level elicitation study described above), and asked to highlight which words they considered consistent with the domain of the sentence. We used the same corpora/sentences as in our first AMT study. Analogously, words in each sentence were annotated by five participants and their labels were determined by majority agreement. Fully hierarchical variants of our model (i.e., DetNet INLINEFORM0 , DetNet INLINEFORM1 ) and L-LDA are able to produce word-level predictions; we thus retrieved the words within a sentence whose domain score was above the threshold of 0 and compared them against the labels provided by crowdworkers. MilNet and DetNet INLINEFORM2 can only make sentence-level predictions. In this case, we assume that the sentence domain applies to all words therein. HierNet can only produce document-level predictions based on which we generate sentence labels and further assume that these apply to sentence words too. Again, we report INLINEFORM3 2prp+r INLINEFORM4 p INLINEFORM5 r INLINEFORM6 We show model performance against AMT domain labels in Table TABREF42 . Consistent with the automatic evaluation results, DetNet variants are the best performing models on the sentence-level task. On the Wikipedia datasets, DetNet INLINEFORM0 or DetNet INLINEFORM1 outperform all baselines and DetNet INLINEFORM2 by a large margin, showing that word-level signals can indeed help detect sentence domains. Although statistical models are typically less accurate when they are applied to data that has a different distribution from the training data, DetNet INLINEFORM3 works surprisingly well on NYT, substantially outperforming all other systems. We also notice that prior information is useful in making domain predictions for NYT sentences: since our models are trained on Wikipedia, prior domain definitions largely alleviate the genre shift to non-Wikipedia sentences. Table TABREF43 provides a breakdown of the performance of DetNet INLINEFORM4 across domains. Overall, the model performs worst on LIF and GEN domains (which are very broad) and best on BUS and MIL (which are very narrow). With regard to word-level evaluation, DetNet INLINEFORM0 and DetNet INLINEFORM1 are the best systems and are significantly better against all comparison models by a wide margin, except L-LDA. The latter is a strong domain detection system at the word-level since it is able to directly associate words with domain labels (see Equation ( EQREF34 )) without resorting to document- or sentence-level predictions. 
However, our two-level hierarchical model is superior considering all-around performance across sentences and documents. The results here accord with our intuition from previous experiments: hierarchical models outperform simpler variants (including MilNet) since they are able to capture and exploit fine-grained domain signals relatively accurately. Interestingly, prior information does not seem to have an effect on the Wikipedia datasets, but is useful when transferring to NYT. We also observe that models trained on the Chinese datasets perform consistently better than English. Analysis of the annotations provided by crowdworkers revealed that the ratio of domain words in Chinese is higher compared to English (27.47 INLINEFORM2 vs. 13.86 INLINEFORM3 in Wikipedia and 16.42 INLINEFORM4 in NYT), possibly rendering word retrieval in Chinese an easier task. Table TABREF44 shows the 15 most representative domain words identified by our model (DetNet INLINEFORM0 ) on Wiki-en for our seven domains. We obtained this list by weighting word domain scores INLINEFORM1 with their attention scores: DISPLAYFORM0 and ranking all words in the development set according to INLINEFORM0 , separately for each domain. Since words appearing in different contexts are usually associated with multiple domains, we determine a word's ranking for a given domain based on the highest score. As shown in Table TABREF44 , biosecurity and authoritarianism are prevalent in both GOV and LAW domains. Interestingly, with contextualized word representations, fairly general English words are recognized as domain heavy. For example, technique is a strong domain word in HEA and 420 in GOV (the latter is slang for the consumption of cannabis and highly associated with government regulations). For comparison, we also show the top domain words identified by L-LDA via matrix INLINEFORM0 (see Equation ( EQREF34 )). To produce meaningful output, we have removed stop words and punctuation tokens, which are given very high domain scores by L-LDA (this is not entirely surprising since INLINEFORM1 is based on simple co-occurrence). Notice that no such post-processing is necessary for our model. As shown in Table TABREF44 , the top domain words identified by L-LDA (on the right) are more general and less informative, compared to those from DetNet INLINEFORM2 (on the left). Domain-Specific Summarization In this section we illustrate how fine-grained domain scores can be used to produce domain summaries, following an extractive, unsupervised approach. We assume the user specifies the domains they are interested in a priori (e.g., LAW, HEA) and the system returns summaries targeting the semantics of these domains. Specifically, we introduce DetRank, an extension of the well-known TextRank algorithm BIBREF42 , which incorporates domain signals acquired by DetNet INLINEFORM0 . For each document, TextRank builds a directed graph INLINEFORM1 with nodes INLINEFORM2 corresponding to sentences, and undirected edges INLINEFORM3 whose weights are computed based on sentence similarity. Specifically, edge weights are represented with matrix INLINEFORM4 where each element INLINEFORM5 corresponds to the transition probability from vertex INLINEFORM6 to vertex INLINEFORM7 . Following barrios2016variations, INLINEFORM8 is computed with the Okapi BM25 algorithm BIBREF43 , a probabilistic version of TF-IDF, and small weights ( INLINEFORM9 ) are set to zeros. Unreachable nodes are further pruned to acquire the final vertex set INLINEFORM10 . 
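A minimal sketch of this base graph construction appears below, ahead of the domain enhancement described next. The BM25 parameters (k1, b), the whitespace tokenization, and the pruning threshold are illustrative defaults rather than values reported in the paper.

```python
import math
from collections import Counter

def bm25_similarity(sentences, k1=1.2, b=0.75, min_weight=1e-3):
    """Pairwise sentence similarity with an Okapi BM25-style score, used to weight
    TextRank edges; weights below the (illustrative) threshold are zeroed out."""
    tokenized = [s.lower().split() for s in sentences]
    n = len(tokenized)
    avg_len = sum(len(t) for t in tokenized) / n
    df = Counter(term for tokens in tokenized for term in set(tokens))

    def idf(term):
        return math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))

    weights = [[0.0] * n for _ in range(n)]
    for i, query in enumerate(tokenized):
        for j, doc in enumerate(tokenized):
            if i == j:
                continue
            tf = Counter(doc)
            score = sum(
                idf(term) * tf[term] * (k1 + 1)
                / (tf[term] + k1 * (1 - b + b * len(doc) / avg_len))
                for term in query if term in tf
            )
            weights[i][j] = score if score > min_weight else 0.0
    return weights

def to_transition_matrix(weights):
    """Row-normalize edge weights into transition probabilities for the random walk;
    rows that sum to zero correspond to the unreachable nodes that get pruned."""
    matrix = []
    for row in weights:
        total = sum(row)
        matrix.append([w / total if total > 0 else 0.0 for w in row])
    return matrix

# Usage on a few sentences in the spirit of the arms-industry example discussed below.
sentences = ["The arms industry produces guns and missiles.",
             "Market competition shapes the arms trade.",
             "Companies such as Boeing compete for contracts."]
transition = to_transition_matrix(bm25_similarity(sentences))
```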
To enhance TextRank with domain information, we first multiply sentence-level domain scores INLINEFORM0 with their corresponding attention scores: DISPLAYFORM0 and for a given domain INLINEFORM0 , we can extract a (domain) sentence score vector INLINEFORM1 . Then, from INLINEFORM2 , we produce vector INLINEFORM3 representing a distribution of domain signals over sentences: DISPLAYFORM0 In order to render domain signals in different sentences more discernible, we scale all elements in INLINEFORM0 to INLINEFORM1 before obtaining a legitimate distribution with the INLINEFORM2 function. Finally, we integrate the domain component into the original transition matrix as: DISPLAYFORM0 where INLINEFORM0 controls the extent to which domain-specific information influences sentence selection for the summarization task; higher INLINEFORM1 will lead to summaries which are more domain-relevant. Here, we empirically set INLINEFORM2 . The main difference between DetRank and TextRank is that TextRank treats INLINEFORM3 as a damping factor and a uniform probability distribution is applied to INLINEFORM4 . In order to decide which sentence to include in the summary, a node’s centrality is measured using a graph-based ranking algorithm BIBREF42 . Specifically, we run a Markov chain with INLINEFORM0 on INLINEFORM1 until it converges to the stationary distribution INLINEFORM2 where each element denotes the salience of a sentence. In the proposed DetRank algorithm, INLINEFORM3 jointly expresses the importance of a sentence in the document and its relevance to the given domain (controlled by INLINEFORM4 ). We rank sentences according to INLINEFORM5 and select the top INLINEFORM6 ones, subject to a budget (e.g., 100 words). We ran a judgment elicitation study on summaries produced by TextRank and DetRank. Participants were provided with domain definitions and asked to decide which summary was best according to the criteria of: Informativeness (does the summary contain more information about a specific domain, e.g., “Government and Politics”?), Succinctness (does the summary avoid unnecessary detail and redundant information?), and Coherence (does the summary make logical sense?). Amazon Mechanical Turk (AMT) workers were allowed to answer “Both” or “Neither” in cases where they could not discriminate between summaries. We sampled 50 summary pairs from the English Wikipedia development set. We collected three responses per summary pair and determined which system participants preferred based on majority agreement. Table TABREF51 shows the proportion of times AMT workers preferred each system according to the criteria of Informativeness, Succinctness, Coherence, and overall. As can be seen, participants find DetRank summaries more informative and coherent. While it is perhaps not surprising for DetRank to produce summaries which are domain informative since it explicitly takes domain signals into account, it is interesting to note that focusing on a specific domain also helps discard irrelevant information and produce more coherent summaries. This, on the other hand, possibly renders DetRank's summaries more verbose (see the Succinctness ratings in Table TABREF51 ). Figure FIGREF46 shows example summaries for the Wikipedia article Arms Industry for domains MIL and BUS. Both summaries begin with a sentence which introduces the arms industry to the reader. When MIL is the domain of interest, the summary focuses on military products such as guns and missiles. 
When the domain changes to BUS, the summary puts more emphasis on trade, e.g., market competition and companies doing military business, such as Boeing and Eurofighter. Conclusions In this work, we proposed an encoder-detector framework for domain detection. Leveraging only weak domain supervision, our model achieves results superior to competitive baselines across different languages, segment granularities, and text genres. Aside from identifying domain specific training data, we also show that our model holds promise for other natural language tasks, such as text summarization. Beyond domain detection, we hope that some of the work described here might be of relevance to other multilabel classification problems such as sentiment analysis BIBREF29 , relation extraction BIBREF44 , and named entity recognition BIBREF45 . More generally, our experiments show that the proposed framework can be applied to textual data using minimal supervision, significantly alleviating the annotation bottleneck for text classification problems. A key feature in achieving performance superior to competitive baselines is the hierarchical nature of our model, where representations are encoded step-by-step, first for words, then for sentences, and finally for documents. The framework flexibly integrates prior information which can be used to enhance the otherwise weak supervision signal or to render the model more robust across genres. In the future, we would like to investigate semi-supervised instantiations of MIL, where aside from bag labels, small amounts of instance labels are also available BIBREF23 . It would also be interesting to examine how the label space influences model performance, especially since in our scenario the labels are extrapolated from Wikipedia and might be naturally noisy and/or ambiguous. Acknowledgments The authors would like to thank the anonymous reviewers and the action editor, Yusuke Miyao, for their valuable feedback. We acknowledge the financial support of the European Research Council (Lapata; award number 681760). This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract FA8650-17-C-9118. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation therein.
Unanswerable
a96a1a354cb3a2a434b085e4d9c8844d0b672ec4
a96a1a354cb3a2a434b085e4d9c8844d0b672ec4_0
Q: Which text genres did they experiment with? Text: Introduction Text classification is a fundamental task in Natural Language processing which has been found useful in a wide spectrum of applications ranging from search engines enabling users to identify content on websites, sentiment and social media analysis, customer relationship management systems, and spam detection. Over the past several years, text classification has been predominantly modeled as a supervised learning problem (e.g., BIBREF0 , BIBREF1 , BIBREF2 ) for which appropriately labeled data must be collected. Such data is often domain-dependent (i.e., covering specific topics such as those relating to “Business” or “Medicine”) and a classifier trained using data from one domain is likely to perform poorly on another. For example, the phrase “the mouse died quickly” may indicate negative sentiment in a customer review describing the hand-held pointing device or positive sentiment when describing a laboratory experiment performed on a rodent. The ability to handle a wide variety of domains has become more pertinent with the rise of data-hungry machine learning techniques like neural networks and their application to a plethora of textual media ranging from news articles to twitter, blog posts, medical journals, Reddit comments, and parliamentary debates BIBREF0 , BIBREF3 , BIBREF4 , BIBREF5 . The question of how to best deal with multiple domains when training data is available for one or few of them has met with much interest in the literature. The field of domain adaptation BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 aims at improving the learning of a predictive function in a target domain where there is little or no labeled data, using knowledge transferred from a source domain where sufficient labeled data is available. Another line of work BIBREF11 , BIBREF12 , BIBREF13 assumes that labeled data may exist for multiple domains, but in insufficient amounts to train classifiers for one or more of them. The aim of multi-domain text classification is to leverage all the available resources in order to improve system performance across domains simultaneously. In this paper we investigate the question of how domain-specific data might be obtained in order to enable the development of text classification tools as well as more domain aware applications such as summarization, question answering, and information extraction. We refer to this task as domain detection and assume a fairly common setting where the domains of a corpus collection are known and the aim is to identify textual segments which are domain-heavy, i.e., documents, sentences, or phrases providing evidence for a given domain. Domain detection can be formulated as a multilabel classification problem, where a model is trained to recognize domain evidence at the sentence-, phrase-, or word-level. By definition then, domain detection would require training data with fine-grained domain labels, thereby increasing the annotation burden; we must provide labels for training domain detectors and for modeling the task we care about in the first place. In this paper we consider the problem of fine-grained domain detection from the perspective of Multiple Instance Learning (MIL; BIBREF14 ) and develop domain models with very little human involvement. Instead of learning from individually labeled segments, our model only requires document-level supervision and optionally prior domain knowledge and learns to introspectively judge the domain of constituent segments. 
Importantly, we do not require document-level domain annotations either since we obtain these via distant supervision by leveraging information drawn from Wikipedia. Our domain detection framework comprises two neural network modules; an encoder learns representations for words and sentences together with prior domain information if the latter is available (e.g., domain definitions), while a detector generates domain-specific scores for words, sentences, and documents. We obtain a segment-level domain predictor which is trained end-to-end on document-level labels using a hierarchical, attention-based neural architecture BIBREF15 . We conduct domain detection experiments on English and Chinese and measure system performance using both automatic and human-based evaluation. Experimental results show that our model outperforms several strong baselines and is robust across languages and text genres, despite learning from weak supervision. We also showcase our model's application potential for text summarization. Our contributions in this work are threefold; we propose domain detection, as a new fine-grained multilabel learning problem which we argue would benefit the development of domain aware NLP tools; we introduce a weakly supervised encoder-detector model within the context of multiple instance learning; and demonstrate that it can be applied across languages and text genres without modification. Related Work Our work lies at the intersection of multiple research areas, including domain adaptation, representation learning, multiple instance learning, and topic modeling. We review related work below. Problem Formulation We formulate domain detection as a multilabel learning problem. Our model is trained on samples of document-label pairs. Each document consists of INLINEFORM0 sentences INLINEFORM1 and is associated with discrete labels INLINEFORM2 . In this work, domain labels are not annotated manually but extrapolated from Wikipedia (see Section SECREF6 for details). In a non-MIL framework, a model typically learns to predict document labels by directly conditioning on its sentence representations INLINEFORM3 or their aggregate. In contrast, INLINEFORM4 under MIL is a learned function INLINEFORM5 of latent instance-level labels, i.e., INLINEFORM6 . A MIL classifier will therefore first produce domain scores for all instances (aka sentences), and then learn to integrate instance scores into a bag (aka document) prediction. In this paper we further assume that the instance-bag relation applies to sentences and documents but also to words and sentences. In addition, we incorporate prior domain information to facilitate learning in a weakly supervised setting: each domain is associated with a definition INLINEFORM0 , i.e., a few sentences providing a high-level description of the domain at hand. For example, the definition of the “Lifestyle” domain is “the interests, opinions, behaviors, and behavioral orientations of an individual, group, or culture”. Figure FIGREF5 provides an overview of our Domain Detection Network, which we call DetNet. The model comprises two modules; an encoder learns representations for words and sentences whilst incorporating prior domain information; a detector generates domain scores for words, sentences, and documents by selectively attending to previously encoded information. We describe the two modules in more detail below. The Encoder Module We learn representations for words and sentences using identical encoders with separate learning parameters. 
Given a document, the two encoders implement the following steps: INLINEFORM0 For each sentence INLINEFORM0 , the word-level encoder yields contextualized word representations INLINEFORM1 and their attention weights INLINEFORM2 . Sentence embeddings INLINEFORM3 are obtained via weighted averaging and then provided as input to the sentence-level encoder which outputs contextualized representations INLINEFORM4 and their attention weights INLINEFORM5 . In this work we aim to model fairly long documents (e.g., Wikipedia articles; see Section SECREF6 for details). For this reason, our encoder builds on the Transformer architecture BIBREF15 , a recently proposed highly efficient model which has achieved state-of-the-art performance in machine translation BIBREF15 and question answering BIBREF35 . The Transformer aims at reducing the fundamental constraint of sequential computation which underlies most architectures based on recurrent neural networks. It eliminates recurrence in favor of applying a self-attention mechanism which directly models relationships between all words in a sentence, regardless of their position. The Detector Module DetNet adopts three detectors corresponding to words, sentences, and documents: INLINEFORM0 WordDet first produces word domain scores using both lexical semantic information INLINEFORM0 and prior (domain) knowledge INLINEFORM1 ; SentDet yields domain scores for sentences while integrating downstream instance signals INLINEFORM2 and sentence semantics INLINEFORM3 ; finally, DocDet makes the final document-level predictions based on sentence scores. Experimental Setup [t] Document Generation Input: INLINEFORM0 : Label combinations INLINEFORM0 : Sentence subcorpora INLINEFORM0 : Maximum document length Output: A synthetic document [0] Generate INLINEFORM0 Generate a document domain set INLINEFORM1 INLINEFORM2 INLINEFORM0 Number of domain labels Generate a noisy domain INLINEFORM1 INLINEFORM2 INLINEFORM0 ; A set of candidate domain sets INLINEFORM0 INLINEFORM1 INLINEFORM2 INLINEFORM0 Number of unused labels INLINEFORM1 Number of sentence blocks INLINEFORM2 For generated sentences INLINEFORM0 INLINEFORM1 Generate INLINEFORM2 Generate INLINEFORM3 sentences INLINEFORM4 INLINEFORM5 INLINEFORM6 INLINEFORM7 INLINEFORM8 Shuffle INLINEFORM9 INLINEFORM10 Automatic Evaluation In this section we present the results of our automatic evaluation for sentence and document predictions. Problematically, for sentence predictions we do not have gold-standard domain labels (we have only extrapolated these from Wikipedia for documents). We therefore developed an automatic approach for creating silver standard domain labels which we describe below. Human Evaluation Aside from automatic evaluation, we also assessed model performance against human elicited domain labels for sentences and words. The purpose of this experiment was threefold: (a) to validate the results obtained from automatic evaluation; (b) to evaluate finer-grained model performance at the word level; and (c) to examine whether our model generalizes to non-Wikipedia articles. For this, we created a third test set from the New York Times, in addition to our Wikipedia-based English and Chinese datasets. For all three corpora, we randomly sampled two documents for each domain, and then from each document, we sampled one long paragraph or a few consecutive short paragraphs containing 8–12 sentences. 
Amazon Mechanical Turkers were asked to read these sentences and assign a domain based on the seven labels used in this paper (multiple labels were allowed). Participants were provided with domain definitions. We obtained five annotations per sentence and adopted the majority label as the sentence's domain label. We obtained two annotated datasets for English (Wiki-en and NYT-en) and one for Chinese (Wiki-zh), consisting of 122/14, 111/11, and 117/12 sentences/documents each. Word-level domain evaluation is more challenging; taken out-of-context, individual words might be uninformative or carry meanings compatible with multiple domains. Expecting crowdworkers to annotate domain labels word-by-word with high confidence, might be therefore problematic. In order to reduce annotation complexity, we opted for a retrieval-style task for word evaluation. Specifically, AMT workers were given a sentence and its domain label (obtained from the sentence-level elicitation study described above), and asked to highlight which words they considered consistent with the domain of the sentence. We used the same corpora/sentences as in our first AMT study. Analogously, words in each sentence were annotated by five participants and their labels were determined by majority agreement. Fully hierarchical variants of our model (i.e., DetNet INLINEFORM0 , DetNet INLINEFORM1 ) and L-LDA are able to produce word-level predictions; we thus retrieved the words within a sentence whose domain score was above the threshold of 0 and compared them against the labels provided by crowdworkers. MilNet and DetNet INLINEFORM2 can only make sentence-level predictions. In this case, we assume that the sentence domain applies to all words therein. HierNet can only produce document-level predictions based on which we generate sentence labels and further assume that these apply to sentence words too. Again, we report INLINEFORM3 2prp+r INLINEFORM4 p INLINEFORM5 r INLINEFORM6 We show model performance against AMT domain labels in Table TABREF42 . Consistent with the automatic evaluation results, DetNet variants are the best performing models on the sentence-level task. On the Wikipedia datasets, DetNet INLINEFORM0 or DetNet INLINEFORM1 outperform all baselines and DetNet INLINEFORM2 by a large margin, showing that word-level signals can indeed help detect sentence domains. Although statistical models are typically less accurate when they are applied to data that has a different distribution from the training data, DetNet INLINEFORM3 works surprisingly well on NYT, substantially outperforming all other systems. We also notice that prior information is useful in making domain predictions for NYT sentences: since our models are trained on Wikipedia, prior domain definitions largely alleviate the genre shift to non-Wikipedia sentences. Table TABREF43 provides a breakdown of the performance of DetNet INLINEFORM4 across domains. Overall, the model performs worst on LIF and GEN domains (which are very broad) and best on BUS and MIL (which are very narrow). With regard to word-level evaluation, DetNet INLINEFORM0 and DetNet INLINEFORM1 are the best systems and are significantly better against all comparison models by a wide margin, except L-LDA. The latter is a strong domain detection system at the word-level since it is able to directly associate words with domain labels (see Equation ( EQREF34 )) without resorting to document- or sentence-level predictions. 
However, our two-level hierarchical model is superior considering all-around performance across sentences and documents. The results here accord with our intuition from previous experiments: hierarchical models outperform simpler variants (including MilNet) since they are able to capture and exploit fine-grained domain signals relatively accurately. Interestingly, prior information does not seem to have an effect on the Wikipedia datasets, but is useful when transferring to NYT. We also observe that models trained on the Chinese datasets perform consistently better than English. Analysis of the annotations provided by crowdworkers revealed that the ratio of domain words in Chinese is higher compared to English (27.47 INLINEFORM2 vs. 13.86 INLINEFORM3 in Wikipedia and 16.42 INLINEFORM4 in NYT), possibly rendering word retrieval in Chinese an easier task. Table TABREF44 shows the 15 most representative domain words identified by our model (DetNet INLINEFORM0 ) on Wiki-en for our seven domains. We obtained this list by weighting word domain scores INLINEFORM1 with their attention scores: DISPLAYFORM0 and ranking all words in the development set according to INLINEFORM0 , separately for each domain. Since words appearing in different contexts are usually associated with multiple domains, we determine a word's ranking for a given domain based on the highest score. As shown in Table TABREF44 , biosecurity and authoritarianism are prevalent in both GOV and LAW domains. Interestingly, with contextualized word representations, fairly general English words are recognized as domain heavy. For example, technique is a strong domain word in HEA and 420 in GOV (the latter is slang for the consumption of cannabis and highly associated with government regulations). For comparison, we also show the top domain words identified by L-LDA via matrix INLINEFORM0 (see Equation ( EQREF34 )). To produce meaningful output, we have removed stop words and punctuation tokens, which are given very high domain scores by L-LDA (this is not entirely surprising since INLINEFORM1 is based on simple co-occurrence). Notice that no such post-processing is necessary for our model. As shown in Table TABREF44 , the top domain words identified by L-LDA (on the right) are more general and less informative, compared to those from DetNet INLINEFORM2 (on the left). Domain-Specific Summarization In this section we illustrate how fine-grained domain scores can be used to produce domain summaries, following an extractive, unsupervised approach. We assume the user specifies the domains they are interested in a priori (e.g., LAW, HEA) and the system returns summaries targeting the semantics of these domains. Specifically, we introduce DetRank, an extension of the well-known TextRank algorithm BIBREF42 , which incorporates domain signals acquired by DetNet INLINEFORM0 . For each document, TextRank builds a directed graph INLINEFORM1 with nodes INLINEFORM2 corresponding to sentences, and undirected edges INLINEFORM3 whose weights are computed based on sentence similarity. Specifically, edge weights are represented with matrix INLINEFORM4 where each element INLINEFORM5 corresponds to the transition probability from vertex INLINEFORM6 to vertex INLINEFORM7 . Following barrios2016variations, INLINEFORM8 is computed with the Okapi BM25 algorithm BIBREF43 , a probabilistic version of TF-IDF, and small weights ( INLINEFORM9 ) are set to zeros. Unreachable nodes are further pruned to acquire the final vertex set INLINEFORM10 . 
To enhance TextRank with domain information, we first multiply sentence-level domain scores INLINEFORM0 with their corresponding attention scores: DISPLAYFORM0 and for a given domain INLINEFORM0 , we can extract a (domain) sentence score vector INLINEFORM1 . Then, from INLINEFORM2 , we produce vector INLINEFORM3 representing a distribution of domain signals over sentences: DISPLAYFORM0 In order to render domain signals in different sentences more discernible, we scale all elements in INLINEFORM0 to INLINEFORM1 before obtaining a legitimate distribution with the INLINEFORM2 function. Finally, we integrate the domain component into the original transition matrix as: DISPLAYFORM0 where INLINEFORM0 controls the extent to which domain-specific information influences sentence selection for the summarization task; higher INLINEFORM1 will lead to summaries which are more domain-relevant. Here, we empirically set INLINEFORM2 . The main difference between DetRank and TextRank is that TextRank treats INLINEFORM3 as a damping factor and a uniform probability distribution is applied to INLINEFORM4 . In order to decide which sentence to include in the summary, a node’s centrality is measured using a graph-based ranking algorithm BIBREF42 . Specifically, we run a Markov chain with INLINEFORM0 on INLINEFORM1 until it converges to the stationary distribution INLINEFORM2 where each element denotes the salience of a sentence. In the proposed DetRank algorithm, INLINEFORM3 jointly expresses the importance of a sentence in the document and its relevance to the given domain (controlled by INLINEFORM4 ). We rank sentences according to INLINEFORM5 and select the top INLINEFORM6 ones, subject to a budget (e.g., 100 words). We ran a judgment elicitation study on summaries produced by TextRank and DetRank. Participants were provided with domain definitions and asked to decide which summary was best according to the criteria of: Informativeness (does the summary contain more information about a specific domain, e.g., “Government and Politics”?), Succinctness (does the summary avoid unnecessary detail and redundant information?), and Coherence (does the summary make logical sense?). Amazon Mechanical Turk (AMT) workers were allowed to answer “Both” or “Neither” in cases where they could not discriminate between summaries. We sampled 50 summary pairs from the English Wikipedia development set. We collected three responses per summary pair and determined which system participants preferred based on majority agreement. Table TABREF51 shows the proportion of times AMT workers preferred each system according to the criteria of Informativeness, Succinctness, Coherence, and overall. As can be seen, participants find DetRank summaries more informative and coherent. While it is perhaps not surprising for DetRank to produce summaries which are domain informative since it explicitly takes domain signals into account, it is interesting to note that focusing on a specific domain also helps discard irrelevant information and produce more coherent summaries. This, on the other hand, possibly renders DetRank's summaries more verbose (see the Succinctness ratings in Table TABREF51 ). Figure FIGREF46 shows example summaries for the Wikipedia article Arms Industry for domains MIL and BUS. Both summaries begin with a sentence which introduces the arms industry to the reader. When MIL is the domain of interest, the summary focuses on military products such as guns and missiles. 
When the domain changes to BUS, the summary puts more emphasis on trade, e.g., market competition and companies doing military business, such as Boeing and Eurofighter. Conclusions In this work, we proposed an encoder-detector framework for domain detection. Leveraging only weak domain supervision, our model achieves results superior to competitive baselines across different languages, segment granularities, and text genres. Aside from identifying domain specific training data, we also show that our model holds promise for other natural language tasks, such as text summarization. Beyond domain detection, we hope that some of the work described here might be of relevance to other multilabel classification problems such as sentiment analysis BIBREF29 , relation extraction BIBREF44 , and named entity recognition BIBREF45 . More generally, our experiments show that the proposed framework can be applied to textual data using minimal supervision, significantly alleviating the annotation bottleneck for text classification problems. A key feature in achieving performance superior to competitive baselines is the hierarchical nature of our model, where representations are encoded step-by-step, first for words, then for sentences, and finally for documents. The framework flexibly integrates prior information which can be used to enhance the otherwise weak supervision signal or to render the model more robust across genres. In the future, we would like to investigate semi-supervised instantiations of MIL, where aside from bag labels, small amounts of instance labels are also available BIBREF23 . It would also be interesting to examine how the label space influences model performance, especially since in our scenario the labels are extrapolated from Wikipedia and might be naturally noisy and/or ambiguous. Acknowledgments The authors would like to thank the anonymous reviewers and the action editor, Yusuke Miyao, for their valuable feedback. We acknowledge the financial support of the European Research Council (Lapata; award number 681760). This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract FA8650-17-C-9118. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation therein.
Unanswerable
427252648173c3ba78c211b86fa89fc9f4406653
427252648173c3ba78c211b86fa89fc9f4406653_0
Q: What domains are detected in this paper? Text: Introduction Text classification is a fundamental task in Natural Language processing which has been found useful in a wide spectrum of applications ranging from search engines enabling users to identify content on websites, sentiment and social media analysis, customer relationship management systems, and spam detection. Over the past several years, text classification has been predominantly modeled as a supervised learning problem (e.g., BIBREF0 , BIBREF1 , BIBREF2 ) for which appropriately labeled data must be collected. Such data is often domain-dependent (i.e., covering specific topics such as those relating to “Business” or “Medicine”) and a classifier trained using data from one domain is likely to perform poorly on another. For example, the phrase “the mouse died quickly” may indicate negative sentiment in a customer review describing the hand-held pointing device or positive sentiment when describing a laboratory experiment performed on a rodent. The ability to handle a wide variety of domains has become more pertinent with the rise of data-hungry machine learning techniques like neural networks and their application to a plethora of textual media ranging from news articles to twitter, blog posts, medical journals, Reddit comments, and parliamentary debates BIBREF0 , BIBREF3 , BIBREF4 , BIBREF5 . The question of how to best deal with multiple domains when training data is available for one or few of them has met with much interest in the literature. The field of domain adaptation BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 aims at improving the learning of a predictive function in a target domain where there is little or no labeled data, using knowledge transferred from a source domain where sufficient labeled data is available. Another line of work BIBREF11 , BIBREF12 , BIBREF13 assumes that labeled data may exist for multiple domains, but in insufficient amounts to train classifiers for one or more of them. The aim of multi-domain text classification is to leverage all the available resources in order to improve system performance across domains simultaneously. In this paper we investigate the question of how domain-specific data might be obtained in order to enable the development of text classification tools as well as more domain aware applications such as summarization, question answering, and information extraction. We refer to this task as domain detection and assume a fairly common setting where the domains of a corpus collection are known and the aim is to identify textual segments which are domain-heavy, i.e., documents, sentences, or phrases providing evidence for a given domain. Domain detection can be formulated as a multilabel classification problem, where a model is trained to recognize domain evidence at the sentence-, phrase-, or word-level. By definition then, domain detection would require training data with fine-grained domain labels, thereby increasing the annotation burden; we must provide labels for training domain detectors and for modeling the task we care about in the first place. In this paper we consider the problem of fine-grained domain detection from the perspective of Multiple Instance Learning (MIL; BIBREF14 ) and develop domain models with very little human involvement. Instead of learning from individually labeled segments, our model only requires document-level supervision and optionally prior domain knowledge and learns to introspectively judge the domain of constituent segments. 
Importantly, we do not require document-level domain annotations either since we obtain these via distant supervision by leveraging information drawn from Wikipedia. Our domain detection framework comprises two neural network modules; an encoder learns representations for words and sentences together with prior domain information if the latter is available (e.g., domain definitions), while a detector generates domain-specific scores for words, sentences, and documents. We obtain a segment-level domain predictor which is trained end-to-end on document-level labels using a hierarchical, attention-based neural architecture BIBREF15 . We conduct domain detection experiments on English and Chinese and measure system performance using both automatic and human-based evaluation. Experimental results show that our model outperforms several strong baselines and is robust across languages and text genres, despite learning from weak supervision. We also showcase our model's application potential for text summarization. Our contributions in this work are threefold; we propose domain detection, as a new fine-grained multilabel learning problem which we argue would benefit the development of domain aware NLP tools; we introduce a weakly supervised encoder-detector model within the context of multiple instance learning; and demonstrate that it can be applied across languages and text genres without modification. Related Work Our work lies at the intersection of multiple research areas, including domain adaptation, representation learning, multiple instance learning, and topic modeling. We review related work below. Problem Formulation We formulate domain detection as a multilabel learning problem. Our model is trained on samples of document-label pairs. Each document consists of INLINEFORM0 sentences INLINEFORM1 and is associated with discrete labels INLINEFORM2 . In this work, domain labels are not annotated manually but extrapolated from Wikipedia (see Section SECREF6 for details). In a non-MIL framework, a model typically learns to predict document labels by directly conditioning on its sentence representations INLINEFORM3 or their aggregate. In contrast, INLINEFORM4 under MIL is a learned function INLINEFORM5 of latent instance-level labels, i.e., INLINEFORM6 . A MIL classifier will therefore first produce domain scores for all instances (aka sentences), and then learn to integrate instance scores into a bag (aka document) prediction. In this paper we further assume that the instance-bag relation applies to sentences and documents but also to words and sentences. In addition, we incorporate prior domain information to facilitate learning in a weakly supervised setting: each domain is associated with a definition INLINEFORM0 , i.e., a few sentences providing a high-level description of the domain at hand. For example, the definition of the “Lifestyle” domain is “the interests, opinions, behaviors, and behavioral orientations of an individual, group, or culture”. Figure FIGREF5 provides an overview of our Domain Detection Network, which we call DetNet. The model comprises two modules; an encoder learns representations for words and sentences whilst incorporating prior domain information; a detector generates domain scores for words, sentences, and documents by selectively attending to previously encoded information. We describe the two modules in more detail below. The Encoder Module We learn representations for words and sentences using identical encoders with separate learning parameters. 
Given a document, the two encoders implement the following steps: INLINEFORM0 For each sentence INLINEFORM0 , the word-level encoder yields contextualized word representations INLINEFORM1 and their attention weights INLINEFORM2 . Sentence embeddings INLINEFORM3 are obtained via weighted averaging and then provided as input to the sentence-level encoder which outputs contextualized representations INLINEFORM4 and their attention weights INLINEFORM5 . In this work we aim to model fairly long documents (e.g., Wikipedia articles; see Section SECREF6 for details). For this reason, our encoder builds on the Transformer architecture BIBREF15 , a recently proposed highly efficient model which has achieved state-of-the-art performance in machine translation BIBREF15 and question answering BIBREF35 . The Transformer aims at reducing the fundamental constraint of sequential computation which underlies most architectures based on recurrent neural networks. It eliminates recurrence in favor of applying a self-attention mechanism which directly models relationships between all words in a sentence, regardless of their position. The Detector Module DetNet adopts three detectors corresponding to words, sentences, and documents: INLINEFORM0 WordDet first produces word domain scores using both lexical semantic information INLINEFORM0 and prior (domain) knowledge INLINEFORM1 ; SentDet yields domain scores for sentences while integrating downstream instance signals INLINEFORM2 and sentence semantics INLINEFORM3 ; finally, DocDet makes the final document-level predictions based on sentence scores. Experimental Setup [Algorithm: Document Generation; the pseudocode was not properly rendered in the source. Its recoverable elements are the inputs (label combinations, sentence subcorpora, and a maximum document length), the output (a synthetic document), and the overall procedure of generating a document domain set and a noisy domain, generating blocks of sentences from the subcorpora, and shuffling them.] Automatic Evaluation In this section we present the results of our automatic evaluation for sentence and document predictions. Problematically, for sentence predictions we do not have gold-standard domain labels (we have only extrapolated these from Wikipedia for documents). We therefore developed an automatic approach for creating silver standard domain labels which we describe below. Human Evaluation Aside from automatic evaluation, we also assessed model performance against human elicited domain labels for sentences and words. The purpose of this experiment was threefold: (a) to validate the results obtained from automatic evaluation; (b) to evaluate finer-grained model performance at the word level; and (c) to examine whether our model generalizes to non-Wikipedia articles. For this, we created a third test set from the New York Times, in addition to our Wikipedia-based English and Chinese datasets. For all three corpora, we randomly sampled two documents for each domain, and then from each document, we sampled one long paragraph or a few consecutive short paragraphs containing 8–12 sentences. 
Amazon Mechanical Turkers were asked to read these sentences and assign a domain based on the seven labels used in this paper (multiple labels were allowed). Participants were provided with domain definitions. We obtained five annotations per sentence and adopted the majority label as the sentence's domain label. We obtained two annotated datasets for English (Wiki-en and NYT-en) and one for Chinese (Wiki-zh), consisting of 122/14, 111/11, and 117/12 sentences/documents each. Word-level domain evaluation is more challenging; taken out-of-context, individual words might be uninformative or carry meanings compatible with multiple domains. Expecting crowdworkers to annotate domain labels word-by-word with high confidence might therefore be problematic. In order to reduce annotation complexity, we opted for a retrieval-style task for word evaluation. Specifically, AMT workers were given a sentence and its domain label (obtained from the sentence-level elicitation study described above), and asked to highlight which words they considered consistent with the domain of the sentence. We used the same corpora/sentences as in our first AMT study. Analogously, words in each sentence were annotated by five participants and their labels were determined by majority agreement. Fully hierarchical variants of our model (i.e., DetNet INLINEFORM0 , DetNet INLINEFORM1 ) and L-LDA are able to produce word-level predictions; we thus retrieved the words within a sentence whose domain score was above the threshold of 0 and compared them against the labels provided by crowdworkers. MilNet and DetNet INLINEFORM2 can only make sentence-level predictions. In this case, we assume that the sentence domain applies to all words therein. HierNet can only produce document-level predictions based on which we generate sentence labels and further assume that these apply to sentence words too. Again, we report $F_1 = \frac{2pr}{p+r}$, where $p$ is precision and $r$ is recall. We show model performance against AMT domain labels in Table TABREF42 . Consistent with the automatic evaluation results, DetNet variants are the best performing models on the sentence-level task. On the Wikipedia datasets, DetNet INLINEFORM0 or DetNet INLINEFORM1 outperform all baselines and DetNet INLINEFORM2 by a large margin, showing that word-level signals can indeed help detect sentence domains. Although statistical models are typically less accurate when they are applied to data that has a different distribution from the training data, DetNet INLINEFORM3 works surprisingly well on NYT, substantially outperforming all other systems. We also notice that prior information is useful in making domain predictions for NYT sentences: since our models are trained on Wikipedia, prior domain definitions largely alleviate the genre shift to non-Wikipedia sentences. Table TABREF43 provides a breakdown of the performance of DetNet INLINEFORM4 across domains. Overall, the model performs worst on LIF and GEN domains (which are very broad) and best on BUS and MIL (which are very narrow). With regard to word-level evaluation, DetNet INLINEFORM0 and DetNet INLINEFORM1 are the best systems and are significantly better than all comparison models by a wide margin, except L-LDA. The latter is a strong domain detection system at the word-level since it is able to directly associate words with domain labels (see Equation ( EQREF34 )) without resorting to document- or sentence-level predictions. 
However, our two-level hierarchical model is superior considering all-around performance across sentences and documents. The results here accord with our intuition from previous experiments: hierarchical models outperform simpler variants (including MilNet) since they are able to capture and exploit fine-grained domain signals relatively accurately. Interestingly, prior information does not seem to have an effect on the Wikipedia datasets, but is useful when transferring to NYT. We also observe that models trained on the Chinese datasets perform consistently better than English. Analysis of the annotations provided by crowdworkers revealed that the ratio of domain words in Chinese is higher compared to English (27.47 INLINEFORM2 vs. 13.86 INLINEFORM3 in Wikipedia and 16.42 INLINEFORM4 in NYT), possibly rendering word retrieval in Chinese an easier task. Table TABREF44 shows the 15 most representative domain words identified by our model (DetNet INLINEFORM0 ) on Wiki-en for our seven domains. We obtained this list by weighting word domain scores INLINEFORM1 with their attention scores: DISPLAYFORM0 and ranking all words in the development set according to INLINEFORM0 , separately for each domain. Since words appearing in different contexts are usually associated with multiple domains, we determine a word's ranking for a given domain based on the highest score. As shown in Table TABREF44 , biosecurity and authoritarianism are prevalent in both GOV and LAW domains. Interestingly, with contextualized word representations, fairly general English words are recognized as domain heavy. For example, technique is a strong domain word in HEA and 420 in GOV (the latter is slang for the consumption of cannabis and highly associated with government regulations). For comparison, we also show the top domain words identified by L-LDA via matrix INLINEFORM0 (see Equation ( EQREF34 )). To produce meaningful output, we have removed stop words and punctuation tokens, which are given very high domain scores by L-LDA (this is not entirely surprising since INLINEFORM1 is based on simple co-occurrence). Notice that no such post-processing is necessary for our model. As shown in Table TABREF44 , the top domain words identified by L-LDA (on the right) are more general and less informative, compared to those from DetNet INLINEFORM2 (on the left). Domain-Specific Summarization In this section we illustrate how fine-grained domain scores can be used to produce domain summaries, following an extractive, unsupervised approach. We assume the user specifies the domains they are interested in a priori (e.g., LAW, HEA) and the system returns summaries targeting the semantics of these domains. Specifically, we introduce DetRank, an extension of the well-known TextRank algorithm BIBREF42 , which incorporates domain signals acquired by DetNet INLINEFORM0 . For each document, TextRank builds a directed graph INLINEFORM1 with nodes INLINEFORM2 corresponding to sentences, and undirected edges INLINEFORM3 whose weights are computed based on sentence similarity. Specifically, edge weights are represented with matrix INLINEFORM4 where each element INLINEFORM5 corresponds to the transition probability from vertex INLINEFORM6 to vertex INLINEFORM7 . Following barrios2016variations, INLINEFORM8 is computed with the Okapi BM25 algorithm BIBREF43 , a probabilistic version of TF-IDF, and small weights ( INLINEFORM9 ) are set to zeros. Unreachable nodes are further pruned to acquire the final vertex set INLINEFORM10 . 
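The TextRank-style graph step just described can be made concrete with a short sketch. The code below is not the authors' implementation: it assumes pre-tokenized sentence strings, substitutes a TF-IDF cosine similarity for the Okapi BM25 weighting used in the paper, uses illustrative values for the pruning threshold and damping factor, and runs the standard PageRank-style power iteration for sentence centrality; the domain component that DetRank adds is described next and is omitted here.

# Minimal sketch of the TextRank-style graph construction (not the authors' code).
# Assumptions: TF-IDF cosine similarity stands in for Okapi BM25; the pruning
# threshold and damping factor are illustrative defaults.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_transition_matrix(sentences, prune_below=0.05):
    """Row-stochastic transition matrix over the sentences that survive pruning."""
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)                 # pairwise sentence similarity
    np.fill_diagonal(sim, 0.0)                     # no self-transitions
    sim[sim < prune_below] = 0.0                   # set small weights to zero
    keep = sim.sum(axis=1) > 0                     # drop unreachable nodes
    sim = sim[np.ix_(keep, keep)]
    row_sums = sim.sum(axis=1, keepdims=True)
    M = sim / np.maximum(row_sums, 1e-12)          # normalise rows into probabilities
    return M, np.where(keep)[0]                    # matrix and surviving sentence ids

def centrality(M, damping=0.85, iters=100):
    """Standard PageRank-style power iteration used by TextRank."""
    n = M.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):
        pi = (1.0 - damping) / n + damping * (pi @ M)
    return pi                                      # one salience score per sentence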
To enhance TextRank with domain information, we first multiply sentence-level domain scores INLINEFORM0 with their corresponding attention scores: DISPLAYFORM0 and for a given domain INLINEFORM0 , we can extract a (domain) sentence score vector INLINEFORM1 . Then, from INLINEFORM2 , we produce vector INLINEFORM3 representing a distribution of domain signals over sentences: DISPLAYFORM0 In order to render domain signals in different sentences more discernible, we scale all elements in INLINEFORM0 to INLINEFORM1 before obtaining a legitimate distribution with the INLINEFORM2 function. Finally, we integrate the domain component into the original transition matrix as: DISPLAYFORM0 where INLINEFORM0 controls the extent to which domain-specific information influences sentence selection for the summarization task; higher INLINEFORM1 will lead to summaries which are more domain-relevant. Here, we empirically set INLINEFORM2 . The main difference between DetRank and TextRank is that TextRank treats INLINEFORM3 as a damping factor and a uniform probability distribution is applied to INLINEFORM4 . In order to decide which sentence to include in the summary, a node’s centrality is measured using a graph-based ranking algorithm BIBREF42 . Specifically, we run a Markov chain with INLINEFORM0 on INLINEFORM1 until it converges to the stationary distribution INLINEFORM2 where each element denotes the salience of a sentence. In the proposed DetRank algorithm, INLINEFORM3 jointly expresses the importance of a sentence in the document and its relevance to the given domain (controlled by INLINEFORM4 ). We rank sentences according to INLINEFORM5 and select the top INLINEFORM6 ones, subject to a budget (e.g., 100 words). We ran a judgment elicitation study on summaries produced by TextRank and DetRank. Participants were provided with domain definitions and asked to decide which summary was best according to the criteria of: Informativeness (does the summary contain more information about a specific domain, e.g., “Government and Politics”?), Succinctness (does the summary avoid unnecessary detail and redundant information?), and Coherence (does the summary make logical sense?). Amazon Mechanical Turk (AMT) workers were allowed to answer “Both” or “Neither” in cases where they could not discriminate between summaries. We sampled 50 summary pairs from the English Wikipedia development set. We collected three responses per summary pair and determined which system participants preferred based on majority agreement. Table TABREF51 shows the proportion of times AMT workers preferred each system according to the criteria of Informativeness, Succinctness, Coherence, and overall. As can be seen, participants find DetRank summaries more informative and coherent. While it is perhaps not surprising for DetRank to produce summaries which are domain informative since it explicitly takes domain signals into account, it is interesting to note that focusing on a specific domain also helps discard irrelevant information and produce more coherent summaries. This, on the other hand, possibly renders DetRank's summaries more verbose (see the Succinctness ratings in Table TABREF51 ). Figure FIGREF46 shows example summaries for the Wikipedia article Arms Industry for domains MIL and BUS. Both summaries begin with a sentence which introduces the arms industry to the reader. When MIL is the domain of interest, the summary focuses on military products such as guns and missiles. 
When the domain changes to BUS, the summary puts more emphasis on trade, e.g., market competition and companies doing military business, such as Boeing and Eurofighter. Conclusions In this work, we proposed an encoder-detector framework for domain detection. Leveraging only weak domain supervision, our model achieves results superior to competitive baselines across different languages, segment granularities, and text genres. Aside from identifying domain specific training data, we also show that our model holds promise for other natural language tasks, such as text summarization. Beyond domain detection, we hope that some of the work described here might be of relevance to other multilabel classification problems such as sentiment analysis BIBREF29 , relation extraction BIBREF44 , and named entity recognition BIBREF45 . More generally, our experiments show that the proposed framework can be applied to textual data using minimal supervision, significantly alleviating the annotation bottleneck for text classification problems. A key feature in achieving performance superior to competitive baselines is the hierarchical nature of our model, where representations are encoded step-by-step, first for words, then for sentences, and finally for documents. The framework flexibly integrates prior information which can be used to enhance the otherwise weak supervision signal or to render the model more robust across genres. In the future, we would like to investigate semi-supervised instantiations of MIL, where aside from bag labels, small amounts of instance labels are also available BIBREF23 . It would also be interesting to examine how the label space influences model performance, especially since in our scenario the labels are extrapolated from Wikipedia and might be naturally noisy and/or ambiguous. Acknowledgments The authors would like to thank the anonymous reviewers and the action editor, Yusuke Miyao, for their valuable feedback. We acknowledge the financial support of the European Research Council (Lapata; award number 681760). This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract FA8650-17-C-9118. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation therein.
Answer with content missing: (Experimental setup not properly rendered) In our experiments we used seven target domains: “Business and Commerce” (BUS), “Government and Politics” (GOV), “Physical and Mental Health” (HEA), “Law and Order” (LAW), “Lifestyle” (LIF), “Military” (MIL), and “General Purpose” (GEN). Exceptionally, GEN does not have a natural root category.
b9025c39838ccc2a79c545bec4a676f7cc4600eb
b9025c39838ccc2a79c545bec4a676f7cc4600eb_0
Q: Why do they think this task is hard? What is the baseline performance? Text: Introduction Natural language processing (nlp) has achieved significant advances in reading comprehension tasks BIBREF0 , BIBREF1 . These are partially due to embedding methods BIBREF2 , BIBREF3 and neural networks BIBREF4 , BIBREF5 , BIBREF6 , but also to the availability of new resources and challenges. For instance, in cloze-form tasks BIBREF7 , BIBREF8 , the goal is to predict the missing word given a short context. weston2015towards presented baBI, a set of proxy tasks for reading comprenhension. In the SQuAD corpus BIBREF9 , the aim is to answer questions given a Wikipedia passage. 2017arXiv171207040K introduce NarrativeQA, where answering the questions requires to process entire stories. In a related line, 2017arXiv171011601F use fictional crime scene investigation data, from the CSI series, to define a task where the models try to answer the question: ‘who committed the crime?’. In an alternative line of work, script induction BIBREF10 has been also a useful approach to evaluate inference and semantic capabilities of nlp systems. Here, a model processes a document to infer new sequences that reflect events that are statistically probable (e.g. go to a restaurant, be seated, check the menu, ...). For example, chambers2008unsupervised introduce narrative event chains, a representation of structured knowledge of a set of events occurring around a protagonist. They then propose a method to learn statistical scripts, and also introduce two different evaluation strategies. With a related aim, Pichotta2014Statistical propose a multi-event representation of statistical scripts to be able to consider multiple entities. These same authors BIBREF11 have also studied the abilities of recurrent neural networks for learning scripts, generating upcoming events given a raw sequence of tokens, using bleu BIBREF12 for evaluation. This paper explores instead a new task: action prediction from natural language descriptions of scenes. The challenge is addressed as follows: given a natural language input sequence describing the scene, such as a piece of a story coming from a transcript, the goal is to infer which action is most likely to happen next. HPAC: The Harry Potter's Action prediction Corpus To build an action prediction corpus, we need to: (1) consider the set of actions, and (2) collect data where these occur. Data should come from different users, to approximate a real natural language task. Also, it needs to be annotated, determining that a piece of text ends up triggering an action. These tasks are however time consuming, as they require annotators to read vast amounts of large texts. In this context, machine comprehension resources usually establish a compromise between their complexity and the costs of building them BIBREF7 , BIBREF13 . Domain motivation We rely on an intuitive idea that uses transcripts from the Harry Potter world to build up a corpus for textual action prediction. The domain has a set of desirable properties to evaluate reading comprehension systems, which we now review. Harry Potter novels define a variety of spells. These are keywords cast by witches and wizards to achieve purposes, such as turning on a light (‘Lumos’), unlocking a door (‘Alohomora’) or killing (‘Avada Kedavra’). They abstract complex and non-ambiguous actions. Their use also makes it possible to build an automatic and self-annotated corpus for action prediction. 
The moment a spell occurs in a text represents a response to the environment, and hence, it can be used to label the preceding text fragment as a scene description that ends up triggering that action. Table 1 illustrates it with some examples from the original books. This makes it possible to consider texts from the magic world of Harry Potter as the domain for the action prediction corpus, and the spells as the set of eligible actions. Determining the length of the preceding context, namely snippet, that has to be considered as the scene description is however not trivial. This paper considers experiments (§ "Experiments" ) using snippets with the 32, 64, 96 and 128 previous tokens to an action. We provide the needed scripts to rebuild the corpus using arbitrary lengths. Data crawling The number of occurrences of spells in the original Harry Potter books is small (432 occurrences), which makes it difficult to train and test a machine learning model. However, the amount of available fan fiction for this saga allows to create a large corpus. For hpac, we used fan fiction (and only fan fiction texts) from https://www.fanfiction.net/book/Harry-Potter/ and a version of the crawler by milli2016beyond. We collected Harry Potter stories written in English and marked with the status ‘completed’. From these we extracted a total of 82 836 spell occurrences, that we used to obtain the scene descriptions. Table 2 details the statistics of the corpus (see also Appendix "Corpus distribution" ). Note that similar to Twitter corpora, fan fiction stories can be deleted over time by users or admins, causing losses in the dataset. We tokenized the samples with BIBREF14 and merged the occurrences of multi-word spells into a single token. Models This work addresses the task as a classification problem, and in particular as a sequence to label classification problem. For this reason, we rely on standard models used for this type of task: multinomial logistic regression, a multi-layered perceptron, convolutional neural networks and long short-term memory networks. We outline the essentials of each of these models, but will treat them as black boxes. In a related line, kaushik2018much discuss the need of providing rigorous baselines that help better understand the improvement coming from future and complex models, and also the need of not demanding architectural novelty when introducing new datasets. Although not done in this work, an alternative (but also natural) way to address the task is as a special case of language modelling, where the output vocabulary is restricted to the size of the `action' vocabulary. Also, note that the performance for this task is not expected to achieve a perfect accuracy, as there may be situations where more than one action is reasonable, and also because writers tell a story playing with elements such as surprise or uncertainty. The source code for the models can be found in the GitHub repository mentioned above. Machine learning models The input sentence $w_{1:n}$ is encoded as a one-hot vector, $\mathbf {v}$ (total occurrence weighting scheme). Let mlr $_\theta (\mathbf {v})$ be an abstraction of a multinomial logistic regression parametrized by $\theta $ , the output for an input $\mathbf {v}$ is computed as the $\operatornamewithlimits{arg\,max}_{a \in A}$ $P(y=a|\mathbf {v})$ , where $P(y=a|\mathbf {v})$ is a $softmax$ function, i.e, $P(y=a|\mathbf {v}) = \frac{e^{W_{a} \cdot \mathbf {v}}}{\sum _{a^{\prime }}^{A} e^{W_{a^{\prime }} \cdot \mathbf {v}}}$ . 
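As a concrete reference point, the multinomial logistic regression baseline just described can be sketched with off-the-shelf components. This is a hedged illustration rather than the authors' code: a count vectorizer plays the role of the total-occurrence weighting scheme and scikit-learn's logistic regression provides the softmax over the action vocabulary; the toy snippets in the usage comment are invented and not taken from the corpus.

# Hedged sketch of the multinomial logistic regression baseline described above:
# snippets are encoded with a total-occurrence (count) weighting and a softmax
# classifier predicts the most probable action (spell).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_mlr(train_snippets, train_spells):
    """train_snippets: list of scene-description strings; train_spells: list of spell labels."""
    model = make_pipeline(
        CountVectorizer(),                  # total occurrence weighting scheme
        LogisticRegression(max_iter=1000),  # softmax over the action vocabulary
    )
    model.fit(train_snippets, train_spells)
    return model

# Illustrative usage (toy data, not from HPAC):
# clf = train_mlr(["the door was locked tight", "it was too dark to see"],
#                 ["alohomora", "lumos"])
# clf.predict(["she pointed her wand at the locked door"])

The multi-layer perceptron variant described next simply inserts a hidden ReLU layer before this softmax.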
We use one hidden layer with a rectifier activation function ( $relu(x)$ = $max(0,x)$ ). The output is computed as mlp $_\theta (\mathbf {v})$ = $softmax(W_2 \cdot relu(W \cdot \mathbf {v} + \mathbf {b}) + \mathbf {b_2})$ . Sequential models The input sequence is represented as a sequence of word embeddings, $\mathbf {w}_{1:n}$ , where $\mathbf {w}_i$ is a concatenation of an internal embedding learned during the training process for the word $w_i$ , and a pre-trained embedding extracted from GloVe BIBREF15 , that is further fine-tuned. BIBREF5 : The output for an element $\mathbf {w}_i$ also depends on the output of $\mathbf {w}_{i-1}$ . The lstm $_\theta (\mathbf {w}_{1:n})$ takes as input a sequence of word embeddings and produces a sequence of hidden outputs, $\mathbf {h}_{1:n}$ ( $\mathbf {h}_{i}$ size set to 128). The last output of the lstm $_\theta $ , $\mathbf {h}_n$ , is fed to a mlp $_\theta $ . BIBREF16 , BIBREF17 . It captures local properties over continuous slices of text by applying a convolution layer made of different filters. We use a wide convolution, with a window slice size of length 3 and 250 different filters. The convolutional layer uses a $\mathit {relu}$ as the activation function. The output is fed to a max pooling layer, whose output vector is passed again as input to a mlp $_\theta $ . Conclusion We explored action prediction from written stories. We first introduced a corpus set in the world of Harry Potter's literature. Spells in these novels act as keywords that abstract actions. This idea was used to label a collection of fan fiction. We then evaluated standard nlp approaches, from logistic regression to sequential models such as lstms. The latter performed better in general, although vanilla models achieved a higher performance for actions that occurred a few times in the training set. An analysis over the output of the lstm approach also revealed difficulties to discriminate among semantically related actions. The challenge here proposed corresponded to a fictional domain. A future line of work we are interested in is to test whether the knowledge learned with this dataset could be transferred to real-word actions (i.e. real-domain setups), or if such transfer is not possible and a model needs to be trained from scratch. Acknowlegments This work has received support from the TELEPARES-UDC project (FFI2014-51978-C2-2-R) and the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and from Xunta de Galicia (ED431B 2017/01), and from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150). Corpus distribution Table 6 summarizes the label distribution across the training, development and test sets of the hpac corpus.
1. there may be situations where more than one action is reasonable, and also because writers tell a story playing with elements such as surprise or uncertainty. 2. Macro F1 = 14.6 (MLR, length 96 snippet) Weighted F1 = 31.1 (LSTM, length 128 snippet)
be6971827707afcd13af3085d0a775a0bd61c5dd
be6971827707afcd13af3085d0a775a0bd61c5dd_0
Q: Isn't simple word association enough to predict the next spell? Text: Introduction Natural language processing (nlp) has achieved significant advances in reading comprehension tasks BIBREF0 , BIBREF1 . These are partially due to embedding methods BIBREF2 , BIBREF3 and neural networks BIBREF4 , BIBREF5 , BIBREF6 , but also to the availability of new resources and challenges. For instance, in cloze-form tasks BIBREF7 , BIBREF8 , the goal is to predict the missing word given a short context. weston2015towards presented baBI, a set of proxy tasks for reading comprenhension. In the SQuAD corpus BIBREF9 , the aim is to answer questions given a Wikipedia passage. 2017arXiv171207040K introduce NarrativeQA, where answering the questions requires to process entire stories. In a related line, 2017arXiv171011601F use fictional crime scene investigation data, from the CSI series, to define a task where the models try to answer the question: ‘who committed the crime?’. In an alternative line of work, script induction BIBREF10 has been also a useful approach to evaluate inference and semantic capabilities of nlp systems. Here, a model processes a document to infer new sequences that reflect events that are statistically probable (e.g. go to a restaurant, be seated, check the menu, ...). For example, chambers2008unsupervised introduce narrative event chains, a representation of structured knowledge of a set of events occurring around a protagonist. They then propose a method to learn statistical scripts, and also introduce two different evaluation strategies. With a related aim, Pichotta2014Statistical propose a multi-event representation of statistical scripts to be able to consider multiple entities. These same authors BIBREF11 have also studied the abilities of recurrent neural networks for learning scripts, generating upcoming events given a raw sequence of tokens, using bleu BIBREF12 for evaluation. This paper explores instead a new task: action prediction from natural language descriptions of scenes. The challenge is addressed as follows: given a natural language input sequence describing the scene, such as a piece of a story coming from a transcript, the goal is to infer which action is most likely to happen next. HPAC: The Harry Potter's Action prediction Corpus To build an action prediction corpus, we need to: (1) consider the set of actions, and (2) collect data where these occur. Data should come from different users, to approximate a real natural language task. Also, it needs to be annotated, determining that a piece of text ends up triggering an action. These tasks are however time consuming, as they require annotators to read vast amounts of large texts. In this context, machine comprehension resources usually establish a compromise between their complexity and the costs of building them BIBREF7 , BIBREF13 . Domain motivation We rely on an intuitive idea that uses transcripts from the Harry Potter world to build up a corpus for textual action prediction. The domain has a set of desirable properties to evaluate reading comprehension systems, which we now review. Harry Potter novels define a variety of spells. These are keywords cast by witches and wizards to achieve purposes, such as turning on a light (‘Lumos’), unlocking a door (‘Alohomora’) or killing (‘Avada Kedavra’). They abstract complex and non-ambiguous actions. Their use also makes it possible to build an automatic and self-annotated corpus for action prediction. 
The moment a spell occurs in a text represents a response to the environment, and hence, it can be used to label the preceding text fragment as a scene description that ends up triggering that action. Table 1 illustrates it with some examples from the original books. This makes it possible to consider texts from the magic world of Harry Potter as the domain for the action prediction corpus, and the spells as the set of eligible actions. Determining the length of the preceding context, namely snippet, that has to be considered as the scene description is however not trivial. This paper considers experiments (§ "Experiments" ) using snippets with the 32, 64, 96 and 128 previous tokens to an action. We provide the needed scripts to rebuild the corpus using arbitrary lengths. Data crawling The number of occurrences of spells in the original Harry Potter books is small (432 occurrences), which makes it difficult to train and test a machine learning model. However, the amount of available fan fiction for this saga allows to create a large corpus. For hpac, we used fan fiction (and only fan fiction texts) from https://www.fanfiction.net/book/Harry-Potter/ and a version of the crawler by milli2016beyond. We collected Harry Potter stories written in English and marked with the status ‘completed’. From these we extracted a total of 82 836 spell occurrences, that we used to obtain the scene descriptions. Table 2 details the statistics of the corpus (see also Appendix "Corpus distribution" ). Note that similar to Twitter corpora, fan fiction stories can be deleted over time by users or admins, causing losses in the dataset. We tokenized the samples with BIBREF14 and merged the occurrences of multi-word spells into a single token. Models This work addresses the task as a classification problem, and in particular as a sequence to label classification problem. For this reason, we rely on standard models used for this type of task: multinomial logistic regression, a multi-layered perceptron, convolutional neural networks and long short-term memory networks. We outline the essentials of each of these models, but will treat them as black boxes. In a related line, kaushik2018much discuss the need of providing rigorous baselines that help better understand the improvement coming from future and complex models, and also the need of not demanding architectural novelty when introducing new datasets. Although not done in this work, an alternative (but also natural) way to address the task is as a special case of language modelling, where the output vocabulary is restricted to the size of the `action' vocabulary. Also, note that the performance for this task is not expected to achieve a perfect accuracy, as there may be situations where more than one action is reasonable, and also because writers tell a story playing with elements such as surprise or uncertainty. The source code for the models can be found in the GitHub repository mentioned above. Machine learning models The input sentence $w_{1:n}$ is encoded as a one-hot vector, $\mathbf {v}$ (total occurrence weighting scheme). Let mlr $_\theta (\mathbf {v})$ be an abstraction of a multinomial logistic regression parametrized by $\theta $ , the output for an input $\mathbf {v}$ is computed as the $\operatornamewithlimits{arg\,max}_{a \in A}$ $P(y=a|\mathbf {v})$ , where $P(y=a|\mathbf {v})$ is a $softmax$ function, i.e, $P(y=a|\mathbf {v}) = \frac{e^{W_{a} \cdot \mathbf {v}}}{\sum _{a^{\prime }}^{A} e^{W_{a^{\prime }} \cdot \mathbf {v}}}$ . 
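Returning to the corpus construction described at the start of this passage, the self-annotation step can be sketched as follows. The code is illustrative rather than the authors': the small spell lexicon and the example sentence are placeholders, and only the basic rule is shown, namely that each spell occurrence labels the preceding snippet of tokens as a scene description.

# Hedged sketch of the self-annotation scheme: each spell occurrence labels the
# preceding snippet_len tokens as the scene description that triggers the action.
# SPELLS is a small illustrative subset, not the full lexicon used for HPAC.
SPELLS = {"lumos", "alohomora", "expelliarmus", "avada_kedavra"}

def extract_samples(tokens, snippet_len=32):
    """tokens: lower-cased tokens with multi-word spells already merged into one token."""
    samples = []
    for i, tok in enumerate(tokens):
        if tok in SPELLS:
            snippet = tokens[max(0, i - snippet_len):i]
            if snippet:                               # skip occurrences with no context
                samples.append((" ".join(snippet), tok))
    return samples

# extract_samples("the door was locked so she raised her wand alohomora".split())
# -> [("the door was locked so she raised her wand", "alohomora")]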
We use one hidden layer with a rectifier activation function ( $relu(x)$ = $max(0,x)$ ). The output is computed as mlp $_\theta (\mathbf {v})$ = $softmax(W_2 \cdot relu(W \cdot \mathbf {v} + \mathbf {b}) + \mathbf {b_2})$ . Sequential models The input sequence is represented as a sequence of word embeddings, $\mathbf {w}_{1:n}$ , where $\mathbf {w}_i$ is a concatenation of an internal embedding learned during the training process for the word $w_i$ , and a pre-trained embedding extracted from GloVe BIBREF15 , that is further fine-tuned. BIBREF5 : The output for an element $\mathbf {w}_i$ also depends on the output of $\mathbf {w}_{i-1}$ . The lstm $_\theta (\mathbf {w}_{1:n})$ takes as input a sequence of word embeddings and produces a sequence of hidden outputs, $\mathbf {h}_{1:n}$ ( $\mathbf {h}_{i}$ size set to 128). The last output of the lstm $_\theta $ , $\mathbf {h}_n$ , is fed to a mlp $_\theta $ . BIBREF16 , BIBREF17 . It captures local properties over continuous slices of text by applying a convolution layer made of different filters. We use a wide convolution, with a window slice size of length 3 and 250 different filters. The convolutional layer uses a $\mathit {relu}$ as the activation function. The output is fed to a max pooling layer, whose output vector is passed again as input to a mlp $_\theta $ . Conclusion We explored action prediction from written stories. We first introduced a corpus set in the world of Harry Potter's literature. Spells in these novels act as keywords that abstract actions. This idea was used to label a collection of fan fiction. We then evaluated standard nlp approaches, from logistic regression to sequential models such as lstms. The latter performed better in general, although vanilla models achieved a higher performance for actions that occurred a few times in the training set. An analysis over the output of the lstm approach also revealed difficulties to discriminate among semantically related actions. The challenge here proposed corresponded to a fictional domain. A future line of work we are interested in is to test whether the knowledge learned with this dataset could be transferred to real-word actions (i.e. real-domain setups), or if such transfer is not possible and a model needs to be trained from scratch. Acknowlegments This work has received support from the TELEPARES-UDC project (FFI2014-51978-C2-2-R) and the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and from Xunta de Galicia (ED431B 2017/01), and from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150). Corpus distribution Table 6 summarizes the label distribution across the training, development and test sets of the hpac corpus.
Unanswerable
19608e727b527562b750949e41e763908566b58e
19608e727b527562b750949e41e763908566b58e_0
Q: Do they literally just treat this as "predict the next spell that appears in the text"? Text: Introduction Natural language processing (nlp) has achieved significant advances in reading comprehension tasks BIBREF0 , BIBREF1 . These are partially due to embedding methods BIBREF2 , BIBREF3 and neural networks BIBREF4 , BIBREF5 , BIBREF6 , but also to the availability of new resources and challenges. For instance, in cloze-form tasks BIBREF7 , BIBREF8 , the goal is to predict the missing word given a short context. weston2015towards presented baBI, a set of proxy tasks for reading comprenhension. In the SQuAD corpus BIBREF9 , the aim is to answer questions given a Wikipedia passage. 2017arXiv171207040K introduce NarrativeQA, where answering the questions requires to process entire stories. In a related line, 2017arXiv171011601F use fictional crime scene investigation data, from the CSI series, to define a task where the models try to answer the question: ‘who committed the crime?’. In an alternative line of work, script induction BIBREF10 has been also a useful approach to evaluate inference and semantic capabilities of nlp systems. Here, a model processes a document to infer new sequences that reflect events that are statistically probable (e.g. go to a restaurant, be seated, check the menu, ...). For example, chambers2008unsupervised introduce narrative event chains, a representation of structured knowledge of a set of events occurring around a protagonist. They then propose a method to learn statistical scripts, and also introduce two different evaluation strategies. With a related aim, Pichotta2014Statistical propose a multi-event representation of statistical scripts to be able to consider multiple entities. These same authors BIBREF11 have also studied the abilities of recurrent neural networks for learning scripts, generating upcoming events given a raw sequence of tokens, using bleu BIBREF12 for evaluation. This paper explores instead a new task: action prediction from natural language descriptions of scenes. The challenge is addressed as follows: given a natural language input sequence describing the scene, such as a piece of a story coming from a transcript, the goal is to infer which action is most likely to happen next. HPAC: The Harry Potter's Action prediction Corpus To build an action prediction corpus, we need to: (1) consider the set of actions, and (2) collect data where these occur. Data should come from different users, to approximate a real natural language task. Also, it needs to be annotated, determining that a piece of text ends up triggering an action. These tasks are however time consuming, as they require annotators to read vast amounts of large texts. In this context, machine comprehension resources usually establish a compromise between their complexity and the costs of building them BIBREF7 , BIBREF13 . Domain motivation We rely on an intuitive idea that uses transcripts from the Harry Potter world to build up a corpus for textual action prediction. The domain has a set of desirable properties to evaluate reading comprehension systems, which we now review. Harry Potter novels define a variety of spells. These are keywords cast by witches and wizards to achieve purposes, such as turning on a light (‘Lumos’), unlocking a door (‘Alohomora’) or killing (‘Avada Kedavra’). They abstract complex and non-ambiguous actions. Their use also makes it possible to build an automatic and self-annotated corpus for action prediction. 
The moment a spell occurs in a text represents a response to the environment, and hence, it can be used to label the preceding text fragment as a scene description that ends up triggering that action. Table 1 illustrates it with some examples from the original books. This makes it possible to consider texts from the magic world of Harry Potter as the domain for the action prediction corpus, and the spells as the set of eligible actions. Determining the length of the preceding context, namely snippet, that has to be considered as the scene description is however not trivial. This paper considers experiments (§ "Experiments" ) using snippets with the 32, 64, 96 and 128 previous tokens to an action. We provide the needed scripts to rebuild the corpus using arbitrary lengths. Data crawling The number of occurrences of spells in the original Harry Potter books is small (432 occurrences), which makes it difficult to train and test a machine learning model. However, the amount of available fan fiction for this saga allows to create a large corpus. For hpac, we used fan fiction (and only fan fiction texts) from https://www.fanfiction.net/book/Harry-Potter/ and a version of the crawler by milli2016beyond. We collected Harry Potter stories written in English and marked with the status ‘completed’. From these we extracted a total of 82 836 spell occurrences, that we used to obtain the scene descriptions. Table 2 details the statistics of the corpus (see also Appendix "Corpus distribution" ). Note that similar to Twitter corpora, fan fiction stories can be deleted over time by users or admins, causing losses in the dataset. We tokenized the samples with BIBREF14 and merged the occurrences of multi-word spells into a single token. Models This work addresses the task as a classification problem, and in particular as a sequence to label classification problem. For this reason, we rely on standard models used for this type of task: multinomial logistic regression, a multi-layered perceptron, convolutional neural networks and long short-term memory networks. We outline the essentials of each of these models, but will treat them as black boxes. In a related line, kaushik2018much discuss the need of providing rigorous baselines that help better understand the improvement coming from future and complex models, and also the need of not demanding architectural novelty when introducing new datasets. Although not done in this work, an alternative (but also natural) way to address the task is as a special case of language modelling, where the output vocabulary is restricted to the size of the `action' vocabulary. Also, note that the performance for this task is not expected to achieve a perfect accuracy, as there may be situations where more than one action is reasonable, and also because writers tell a story playing with elements such as surprise or uncertainty. The source code for the models can be found in the GitHub repository mentioned above. Machine learning models The input sentence $w_{1:n}$ is encoded as a one-hot vector, $\mathbf {v}$ (total occurrence weighting scheme). Let mlr $_\theta (\mathbf {v})$ be an abstraction of a multinomial logistic regression parametrized by $\theta $ , the output for an input $\mathbf {v}$ is computed as the $\operatornamewithlimits{arg\,max}_{a \in A}$ $P(y=a|\mathbf {v})$ , where $P(y=a|\mathbf {v})$ is a $softmax$ function, i.e, $P(y=a|\mathbf {v}) = \frac{e^{W_{a} \cdot \mathbf {v}}}{\sum _{a^{\prime }}^{A} e^{W_{a^{\prime }} \cdot \mathbf {v}}}$ . 
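One preprocessing detail mentioned above, merging multi-word spells into single tokens, can be sketched as below. The whitespace splitting shortcut and the tiny spell list are assumptions for illustration; the paper itself relies on an existing tokenizer.

# Hedged sketch of the preprocessing mentioned above: collapse multi-word spell
# occurrences into single tokens so that every action is a single label.
# The spell list is a small illustrative subset; whitespace splitting stands in
# for the tokenizer cited in the paper.
MULTI_WORD_SPELLS = [
    ("avada", "kedavra"),
    ("expecto", "patronum"),
    ("wingardium", "leviosa"),
]

def merge_spells(tokens):
    """Collapse known multi-word spells, e.g. ['avada', 'kedavra'] -> ['avada_kedavra']."""
    merged, i = [], 0
    while i < len(tokens):
        for spell in MULTI_WORD_SPELLS:
            n = len(spell)
            if tuple(t.lower() for t in tokens[i:i + n]) == spell:
                merged.append("_".join(spell))
                i += n
                break
        else:                                   # no spell starts at position i
            merged.append(tokens[i])
            i += 1
    return merged

# merge_spells("he shouted expecto patronum at the dementor".split())
# -> ['he', 'shouted', 'expecto_patronum', 'at', 'the', 'dementor']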
We use one hidden layer with a rectifier activation function ( $relu(x)$ = $max(0,x)$ ). The output is computed as mlp $_\theta (\mathbf {v})$ = $softmax(W_2 \cdot relu(W \cdot \mathbf {v} + \mathbf {b}) + \mathbf {b_2})$ . Sequential models The input sequence is represented as a sequence of word embeddings, $\mathbf {w}_{1:n}$ , where $\mathbf {w}_i$ is a concatenation of an internal embedding learned during the training process for the word $w_i$ , and a pre-trained embedding extracted from GloVe BIBREF15 , that is further fine-tuned. BIBREF5 : The output for an element $\mathbf {w}_i$ also depends on the output of $\mathbf {w}_{i-1}$ . The lstm $_\theta (\mathbf {w}_{1:n})$ takes as input a sequence of word embeddings and produces a sequence of hidden outputs, $\mathbf {h}_{1:n}$ ( $\mathbf {h}_{i}$ size set to 128). The last output of the lstm $_\theta $ , $\mathbf {h}_n$ , is fed to a mlp $_\theta $ . BIBREF16 , BIBREF17 . It captures local properties over continuous slices of text by applying a convolution layer made of different filters. We use a wide convolution, with a window slice size of length 3 and 250 different filters. The convolutional layer uses a $\mathit {relu}$ as the activation function. The output is fed to a max pooling layer, whose output vector is passed again as input to a mlp $_\theta $ . Conclusion We explored action prediction from written stories. We first introduced a corpus set in the world of Harry Potter's literature. Spells in these novels act as keywords that abstract actions. This idea was used to label a collection of fan fiction. We then evaluated standard nlp approaches, from logistic regression to sequential models such as lstms. The latter performed better in general, although vanilla models achieved a higher performance for actions that occurred a few times in the training set. An analysis over the output of the lstm approach also revealed difficulties to discriminate among semantically related actions. The challenge here proposed corresponded to a fictional domain. A future line of work we are interested in is to test whether the knowledge learned with this dataset could be transferred to real-word actions (i.e. real-domain setups), or if such transfer is not possible and a model needs to be trained from scratch. Acknowlegments This work has received support from the TELEPARES-UDC project (FFI2014-51978-C2-2-R) and the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and from Xunta de Galicia (ED431B 2017/01), and from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150). Corpus distribution Table 6 summarizes the label distribution across the training, development and test sets of the hpac corpus.
Yes
0428e06f0550e1063a64d181210795053a8e6436
0428e06f0550e1063a64d181210795053a8e6436_0
Q: How well does a simple bag-of-words baseline do? Text: Introduction Natural language processing (nlp) has achieved significant advances in reading comprehension tasks BIBREF0 , BIBREF1 . These are partially due to embedding methods BIBREF2 , BIBREF3 and neural networks BIBREF4 , BIBREF5 , BIBREF6 , but also to the availability of new resources and challenges. For instance, in cloze-form tasks BIBREF7 , BIBREF8 , the goal is to predict the missing word given a short context. weston2015towards presented baBI, a set of proxy tasks for reading comprenhension. In the SQuAD corpus BIBREF9 , the aim is to answer questions given a Wikipedia passage. 2017arXiv171207040K introduce NarrativeQA, where answering the questions requires to process entire stories. In a related line, 2017arXiv171011601F use fictional crime scene investigation data, from the CSI series, to define a task where the models try to answer the question: ‘who committed the crime?’. In an alternative line of work, script induction BIBREF10 has been also a useful approach to evaluate inference and semantic capabilities of nlp systems. Here, a model processes a document to infer new sequences that reflect events that are statistically probable (e.g. go to a restaurant, be seated, check the menu, ...). For example, chambers2008unsupervised introduce narrative event chains, a representation of structured knowledge of a set of events occurring around a protagonist. They then propose a method to learn statistical scripts, and also introduce two different evaluation strategies. With a related aim, Pichotta2014Statistical propose a multi-event representation of statistical scripts to be able to consider multiple entities. These same authors BIBREF11 have also studied the abilities of recurrent neural networks for learning scripts, generating upcoming events given a raw sequence of tokens, using bleu BIBREF12 for evaluation. This paper explores instead a new task: action prediction from natural language descriptions of scenes. The challenge is addressed as follows: given a natural language input sequence describing the scene, such as a piece of a story coming from a transcript, the goal is to infer which action is most likely to happen next. HPAC: The Harry Potter's Action prediction Corpus To build an action prediction corpus, we need to: (1) consider the set of actions, and (2) collect data where these occur. Data should come from different users, to approximate a real natural language task. Also, it needs to be annotated, determining that a piece of text ends up triggering an action. These tasks are however time consuming, as they require annotators to read vast amounts of large texts. In this context, machine comprehension resources usually establish a compromise between their complexity and the costs of building them BIBREF7 , BIBREF13 . Domain motivation We rely on an intuitive idea that uses transcripts from the Harry Potter world to build up a corpus for textual action prediction. The domain has a set of desirable properties to evaluate reading comprehension systems, which we now review. Harry Potter novels define a variety of spells. These are keywords cast by witches and wizards to achieve purposes, such as turning on a light (‘Lumos’), unlocking a door (‘Alohomora’) or killing (‘Avada Kedavra’). They abstract complex and non-ambiguous actions. Their use also makes it possible to build an automatic and self-annotated corpus for action prediction. 
The moment a spell occurs in a text represents a response to the environment, and hence, it can be used to label the preceding text fragment as a scene description that ends up triggering that action. Table 1 illustrates it with some examples from the original books. This makes it possible to consider texts from the magic world of Harry Potter as the domain for the action prediction corpus, and the spells as the set of eligible actions. Determining the length of the preceding context, namely snippet, that has to be considered as the scene description is however not trivial. This paper considers experiments (§ "Experiments" ) using snippets with the 32, 64, 96 and 128 previous tokens to an action. We provide the needed scripts to rebuild the corpus using arbitrary lengths. Data crawling The number of occurrences of spells in the original Harry Potter books is small (432 occurrences), which makes it difficult to train and test a machine learning model. However, the amount of available fan fiction for this saga allows to create a large corpus. For hpac, we used fan fiction (and only fan fiction texts) from https://www.fanfiction.net/book/Harry-Potter/ and a version of the crawler by milli2016beyond. We collected Harry Potter stories written in English and marked with the status ‘completed’. From these we extracted a total of 82 836 spell occurrences, that we used to obtain the scene descriptions. Table 2 details the statistics of the corpus (see also Appendix "Corpus distribution" ). Note that similar to Twitter corpora, fan fiction stories can be deleted over time by users or admins, causing losses in the dataset. We tokenized the samples with BIBREF14 and merged the occurrences of multi-word spells into a single token. Models This work addresses the task as a classification problem, and in particular as a sequence to label classification problem. For this reason, we rely on standard models used for this type of task: multinomial logistic regression, a multi-layered perceptron, convolutional neural networks and long short-term memory networks. We outline the essentials of each of these models, but will treat them as black boxes. In a related line, kaushik2018much discuss the need of providing rigorous baselines that help better understand the improvement coming from future and complex models, and also the need of not demanding architectural novelty when introducing new datasets. Although not done in this work, an alternative (but also natural) way to address the task is as a special case of language modelling, where the output vocabulary is restricted to the size of the `action' vocabulary. Also, note that the performance for this task is not expected to achieve a perfect accuracy, as there may be situations where more than one action is reasonable, and also because writers tell a story playing with elements such as surprise or uncertainty. The source code for the models can be found in the GitHub repository mentioned above. Machine learning models The input sentence $w_{1:n}$ is encoded as a one-hot vector, $\mathbf {v}$ (total occurrence weighting scheme). Let mlr $_\theta (\mathbf {v})$ be an abstraction of a multinomial logistic regression parametrized by $\theta $ , the output for an input $\mathbf {v}$ is computed as the $\operatornamewithlimits{arg\,max}_{a \in A}$ $P(y=a|\mathbf {v})$ , where $P(y=a|\mathbf {v})$ is a $softmax$ function, i.e, $P(y=a|\mathbf {v}) = \frac{e^{W_{a} \cdot \mathbf {v}}}{\sum _{a^{\prime }}^{A} e^{W_{a^{\prime }} \cdot \mathbf {v}}}$ . 
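The Models paragraph above lists long short-term memory networks among the baselines; their details (a 128-dimensional hidden state whose last output feeds a small perceptron) are given in the Sequential models paragraph of this paper. The sketch below is a minimal Keras rendering of such a sequence-to-label classifier and is not the authors' code: a single trainable embedding layer stands in for the concatenation of learned and GloVe embeddings, and the embedding size, optimizer, and loss are assumptions.

# Hedged sketch of an LSTM sequence-to-label baseline for action prediction
# (not the authors' code). A single trainable embedding stands in for the
# learned + GloVe concatenation; embed_dim, optimizer and loss are assumptions,
# while the 128-dimensional hidden state and the MLP head follow the paper.
import tensorflow as tf

def build_lstm_classifier(vocab_size, num_actions, embed_dim=100, hidden_dim=128):
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, embed_dim, mask_zero=True),
        tf.keras.layers.LSTM(hidden_dim),                      # last hidden output h_n
        tf.keras.layers.Dense(hidden_dim, activation="relu"),  # small MLP on top
        tf.keras.layers.Dense(num_actions, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Snippets are mapped to padded integer id sequences before training, e.g.
# model.fit(X_train, y_train, validation_data=(X_dev, y_dev), epochs=5).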
We use one hidden layer with a rectifier activation function ( $relu(x)$ = $max(0,x)$ ). The output is computed as mlp $_\theta (\mathbf {v})$ = $softmax(W_2 \cdot relu(W \cdot \mathbf {v} + \mathbf {b}) + \mathbf {b_2})$ . Sequential models The input sequence is represented as a sequence of word embeddings, $\mathbf {w}_{1:n}$ , where $\mathbf {w}_i$ is a concatenation of an internal embedding learned during the training process for the word $w_i$ , and a pre-trained embedding extracted from GloVe BIBREF15 , that is further fine-tuned. BIBREF5 : The output for an element $\mathbf {w}_i$ also depends on the output of $\mathbf {w}_{i-1}$ . The lstm $_\theta (\mathbf {w}_{1:n})$ takes as input a sequence of word embeddings and produces a sequence of hidden outputs, $\mathbf {h}_{1:n}$ ( $\mathbf {h}_{i}$ size set to 128). The last output of the lstm $_\theta $ , $\mathbf {h}_n$ , is fed to a mlp $_\theta $ . BIBREF16 , BIBREF17 . It captures local properties over continuous slices of text by applying a convolution layer made of different filters. We use a wide convolution, with a window slice size of length 3 and 250 different filters. The convolutional layer uses a $\mathit {relu}$ as the activation function. The output is fed to a max pooling layer, whose output vector is passed again as input to a mlp $_\theta $ . Conclusion We explored action prediction from written stories. We first introduced a corpus set in the world of Harry Potter's literature. Spells in these novels act as keywords that abstract actions. This idea was used to label a collection of fan fiction. We then evaluated standard nlp approaches, from logistic regression to sequential models such as lstms. The latter performed better in general, although vanilla models achieved a higher performance for actions that occurred a few times in the training set. An analysis over the output of the lstm approach also revealed difficulties to discriminate among semantically related actions. The challenge here proposed corresponded to a fictional domain. A future line of work we are interested in is to test whether the knowledge learned with this dataset could be transferred to real-word actions (i.e. real-domain setups), or if such transfer is not possible and a model needs to be trained from scratch. Acknowlegments This work has received support from the TELEPARES-UDC project (FFI2014-51978-C2-2-R) and the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and from Xunta de Galicia (ED431B 2017/01), and from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150). Corpus distribution Table 6 summarizes the label distribution across the training, development and test sets of the hpac corpus.
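A minimal PyTorch sketch of the lstm model described above, under stated assumptions: a single trainable embedding stands in for the concatenation of internal and GloVe vectors, padding and batching details are ignored, and the vocabulary size and number of actions are toy placeholders. The hidden size of 128 and the relu MLP on top of the last hidden state follow the description.

import torch
import torch.nn as nn

class LSTMActionPredictor(nn.Module):
    """Embed a snippet, run an LSTM, classify its last hidden state."""
    def __init__(self, vocab_size, num_actions, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_actions),
        )

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        embedded = self.embedding(token_ids)      # (batch, seq_len, emb_dim)
        _, (last_hidden, _) = self.lstm(embedded) # last_hidden: (1, batch, hidden_dim)
        return self.mlp(last_hidden[-1])          # logits over the action vocabulary

# Toy usage: a batch of two 32-token snippets and a placeholder action vocabulary of 50 spells.
model = LSTMActionPredictor(vocab_size=5000, num_actions=50)
logits = model(torch.randint(0, 5000, (2, 32)))
loss = nn.CrossEntropyLoss()(logits, torch.tensor([3, 17]))

The convolutional variant described above would swap the LSTM for a convolution layer with 250 filters of width 3 followed by max pooling, feeding the same MLP.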
Unanswerable
3f7a7e81908a763e5ca720f90570c5f224ac64f6
3f7a7e81908a763e5ca720f90570c5f224ac64f6_0
Q: Do they study frequent user responses to help automate modelling of those? Text: Introduction There are several existing works that focus on modelling conversation using prior human to human conversational data BIBREF0 , BIBREF1 , BIBREF2 . BIBREF3 models the conversation from pairs of consecutive tweets. Deep learning based approaches have also been used to model the dialog in an end to end manner BIBREF4 , BIBREF5 . Memory networks have been used by Bordes et al Bor16 to model goal based dialog conversations. More recently, deep reinforcement learning models have been used for generating interactive and coherent dialogs BIBREF6 and negotiation dialogs BIBREF7 . Industry on the other hand has focused on building frameworks that allow manual specification of dialog models such as api.ai, Watson Conversational Services, and Microsoft Bot framework. These frameworks provide ways to specify intents, and a dialog flow. The user utterances are mapped to intents that are passed to a dialog flow manager. The dialog manager generates a response and updates the dialog state. See Figure FIGREF4 for an example of some intents and a dialog flow in a technical support domain. The dialog flow shows that when a user expresses an intent of # laptop_heat, then the system should respond with an utterance “Could you let me know the serial number of your machine ”. The designer needs to specify intents (for example # laptop_heat, # email_not_opening) and also provide corresponding system responses in the dialog flow. This way of specifying a dialog model using intents and corresponding system responses manually is more popular in industry than a data driven approach as it makes dialog model easy to interpret and debug as well as provides a better control to a dialog designer. However, this is very time consuming and laborious and thus involves huge costs. One approach to reduce the task of a dialog designer is to provide her with frequent user intents and possible corresponding system responses in a given domain. This can be done by analysing prior human to human conversations in the domain. Figure FIGREF5 (a) provides some example conversations in the technical support domain between users and agents. In order to identify frequent user intents, one can use existing clustering algorithms to group together all the utterances from the users. Here each cluster would correspond to a new intent and each utterance in the cluster would correspond to an example for the intent. Similarly the agents utterances can be clustered to identify system responses. However, we argue that rather than treating user utterances and agents responses in an isolated manner, there is merit in jointly clustering them. There is adjacency information of these utterances that can be utilized to identify better user intents and system responses. As an example, consider agent utterances A.2 in box A and A.2 in box C in Figure FIGREF5 (a). The utterances “Which operating system do you use?" and “What OS is installed in your machine" have no syntactic similarity and therefore may not be grouped together. However the fact that these utterances are adjacent to the similar user utterances “I am unable to start notes email client" and “Unable to start my email client" provides some evidence that the agent utterances might be similar. 
Similarly the user utterances “My system keeps getting rebooted" and “Machine is booting time and again" ( box B and D in Figure FIGREF5 (a))- that are syntactically not similar - could be grouped together since the adjacent agent utterances, “Is your machine heating up?" and “Is the machine heating?" are similar. Joint clustering of user utterances and agent utterances allow us to align the user utterance clusters with agent utterance clusters. Figure FIGREF5 (b) shows some examples of user utterance clusters and agent utterance clusters along with their alignments. Note that the user utterance clusters can be used by a dialog designer to specify intents, the agent utterance clusters can be used to create system responses and their alignment can be used to create part of the dialog flow. We propose two ways to take adjacency information into account. Firstly we propose a method called SimCluster for jointly or simultaneously clustering user utterances and agent utterances. SimCluster extends the K-means clustering method by incorporating additional penalty terms in the objective function that try to align the clusters together (described in Section SECREF3 ). The algorithm creates initial user utterance clusters as well as agent utterance clusters and then use bi-partite matching to get the best alignment across these clusters. Minimizing the objective function pushes the cluster centroids to move towards the centroids of the aligned clusters. The process implicitly ensures that the similarity of adjacent agent utterances affect the grouping of user utterances and conversely similarity of adjacent user utterances affect the grouping of agent utterances. In our second approach we use the information about neighbouring utterances for creating the vector representation of an utterance. For this we train a sequence to sequence model BIBREF8 to create the vectors (described in Section SECREF5 ). Our experiments described in section SECREF5 show that we achieve upto 10% absolute improvement in F1 scores over standard K-means using SimCluster. Also we observe that clustering of customer utterances gains significantly by using the adjacency information of agent utterances whereas the gain in clustering quality of agent utterances is moderate. This is because the agent utterances typically follow similar syntactic constructs whereas customer utterances are more varied. Considering the agent utterances into account while clustering users utterances is thus helpful. The organization of the rest of the paper is as follows. In Section SECREF2 we describe the related work. In Section SECREF3 we describe our problem formulation for clustering and the associated algorithm. Finally in sections SECREF4 and SECREF5 we discuss our experiments on synthetic and real datasets respectively. Related Work The notion of adjacency pairs was introduced by Sacks et al SSE74 to formalize the structure of a dialog. Adjacency pairs have been used to analyze the semantics of the dialog in computational linguistics community BIBREF9 . Clustering has been used for different tasks related to conversation. BIBREF10 considers the task of discovering dialog acts by clustering the raw utterances. We aim to obtain the frequent adjacency pairs through clustering. 
There have been several works regarding extensions of clustering to different scenarios such as:- The Proposed Approach In this section we describe our approach SimCluster that performs clustering in the two domains simultaneously and ensures that the generated clusters can be aligned with each other. We will describe the model in section SECREF9 and the algorithm in Section SECREF11 . Model We consider a problem setting where we are given a collection of pairs of consecutive utterances, with vector representations INLINEFORM0 where INLINEFORM1 s are in speaker 1's domain and INLINEFORM2 s are in speaker 2's domain. We need to simultaneously cluster the utterances in their respective domains to minimize the variations within each domain and also ensure that the clusters for both domains are close together. We denote the clusters for speaker 1's domain by INLINEFORM0 with their respective means INLINEFORM1 . We denote the clusters assignments for INLINEFORM2 by INLINEFORM3 . We denote the clusters for second speaker by INLINEFORM0 with their respective means INLINEFORM1 . We denote the clusters assignments for INLINEFORM2 by INLINEFORM3 . The usual energy function has the terms for distance of points from their corresponding cluster centroids. To be able to ensure that the clusters in each domain are similar, we also consider an alignment between the centroids of the two domains. Since the semantic representations in the two domains are not comparable we consider a notion of induced centroids. We define the induced centroids INLINEFORM0 as the arithmetic means of the points INLINEFORM1 s such that INLINEFORM2 's have the same cluster assigned to them. Similarly, we define INLINEFORM3 as the arithmetic means of INLINEFORM4 s such that INLINEFORM5 s have the same cluster assigned to them. More formally, we define these induced centroids as:- INLINEFORM6 and INLINEFORM0 The alignment between these clusters given by the function INLINEFORM0 , which is a bijective mapping from the cluster indices in speaker 1's domain to those in speaker 2's domain. Though there can be several choices for this alignment function, we consider this alignment to be a matching which maximizes the sum of number of common indices in the aligned clusters. More formally we define INLINEFORM1 Then the matching INLINEFORM0 is defined to be the bijective function which maximizes INLINEFORM1 . We consider a term in the cost function corresponding to the sum of distances between the original centroids and the matched induced centroids. Our overall cost function is now given by:- INLINEFORM2 We explain the above definition via an example. Consider the clusters shown in Figure FIGREF10 . Here the INLINEFORM0 would match INLINEFORM1 to INLINEFORM2 , INLINEFORM3 to INLINEFORM4 and INLINEFORM5 to INLINEFORM6 , giving a match score of 6. Since INLINEFORM7 , INLINEFORM8 and INLINEFORM9 are present in the cluster INLINEFORM10 , INLINEFORM11 is given by INLINEFORM12 . Similarly INLINEFORM13 In a similar manner, INLINEFORM0 s can also be defined. Now the alignment terms are given by:- INLINEFORM1 SimCluster Algorithm [] SimCluster [1] SimClusterInput: INLINEFORM0 ,k (No. of cluster) Output: A cluster assignment INLINEFORM1 for INLINEFORM2 s and a cluster assignment INLINEFORM3 for INLINEFORM4 s Initialize a set of centroids INLINEFORM5 , and INLINEFORM6 Perform simple clustering for a few iterations For each i, compute INLINEFORM7 as the index j among 1 to k which minimizes INLINEFORM8 . 
Similarly , compute INLINEFORM9 as the index j' among 1 to k which minimizes INLINEFORM10 . Update the centroids, INLINEFORM11 and INLINEFORM12 as:- INLINEFORM13 and INLINEFORM0 Perform a Hungarian matching between the cluster indices in the two domains with weights N(j,j') on edges from index j to index j'. convergence To minimize the above energy term we adopt an approach similar to Lloyd's clustering algorithm Llo82 . We assume that we are given a set of initial seeds for the cluster centroids INLINEFORM0 and INLINEFORM1 . We repeat the following steps iteratively:- Minimize the energy with respect to cluster assignment keeping centroids unchanged. As in standard K-means algorithm, this is achieved by updating the cluster assignment, INLINEFORM0 for each index i to be the cluster index j which minimizes INLINEFORM1 . Correspondingly for INLINEFORM2 , we pick the cluster index j' which minimizes INLINEFORM3 . Minimize the energy with respect to the centroids keeping cluster assignment unchanged. To achieve this step we need to minimize the energy function with respect to the centroids INLINEFORM0 and INLINEFORM1 . This is achieved by setting INLINEFORM2 for each j and INLINEFORM3 for each j. Setting INLINEFORM0 , we obtain INLINEFORM1 or equivalently INLINEFORM0 Similarly, setting INLINEFORM0 , we obtain INLINEFORM1 Finally we update the matching between the clusters. To do so, we need to find a bipartite matching match on the cluster indices so as to maximize INLINEFORM0 . We use Hungarian algorithm BIBREF13 to perform the same i.e. we define a bipartite graph with vertices consisting of cluster indices in the two domains. There is an edge from vertex representing cluster indices j (in domain 1) and j' in domain 2, with weight N(j,j'). We find a maximum weight bipartite matching in this graph. Similar to Lloyd's algorithm, each step of the above algorithm decreases the cost function. This ensures that the algorithm achieves a local minima of the cost function if it converges. See Algorithm SECREF11 for a formal description of the approach. The centroid update step of the above algorithm also has an intuitive explanation i.e. we are slightly moving away the centroid towards the matched induced centroid. This is consistent with our goal of aligning the clusters together in the two domains. Alignment The algorithm above maintains a mapping between the clusters in each speaker's domain. This mapping serves to give us the alignment between the clusters required to provide a corresponding response for a given user intent. Experiments on Synthetic Dataset We performed experiments on synthetically generated dataset since it gives us a better control over the distribution of the data. Specifically we compared the gains obtained using our approach versus the variance of the distribution. We created dataset from the following generative process. [H] Generative Process [1] Generate data Pick k points INLINEFORM0 as domain -1 means and a corresponding set of k points INLINEFORM1 as domain-2 means, and covariance matrices INLINEFORM2 iter INLINEFORM0 upto num INLINEFORM1 samples Sample class INLINEFORM2 Sample INLINEFORM3 Sample INLINEFORM4 Add q and a so sampled to the list of q,a pairs We generated the dataset from the above sampling process with means selected on a 2 dimensional grid of size INLINEFORM5 with variance set as INLINEFORM6 in each dimension.10000 sample points were generated. 
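The following numpy sketch pulls the pieces of SimCluster together on toy synthetic data resembling the generative process above. It is an illustrative reading, not the authors' code: initialisation is random rather than k-means++, the assignment step uses plain nearest-centroid distance, and the centroid update blends the cluster mean with the matched induced centroid using a weight alpha, which is one way to realise the "move the centroid towards the matched induced centroid" step whose exact closed form is hidden behind the INLINEFORM placeholders.

import numpy as np
from scipy.optimize import linear_sum_assignment

def simcluster(X, Y, k, alpha=0.5, iters=30, seed=0):
    """Jointly cluster paired vectors X (n, d1) and Y (n, d2) into k aligned clusters."""
    rng = np.random.default_rng(seed)
    cx = X[rng.choice(len(X), k, replace=False)].copy()   # domain-1 centroids
    cy = Y[rng.choice(len(Y), k, replace=False)].copy()   # domain-2 centroids
    for _ in range(iters):
        # 1. Assignment: nearest centroid within each domain.
        ax = np.argmin(((X[:, None, :] - cx[None]) ** 2).sum(-1), axis=1)
        ay = np.argmin(((Y[:, None, :] - cy[None]) ** 2).sum(-1), axis=1)
        # 2. Alignment: Hungarian matching maximising N(j, j'), the co-assignment counts.
        N = np.zeros((k, k))
        for j, jp in zip(ax, ay):
            N[j, jp] += 1
        _, sigma = linear_sum_assignment(-N)               # sigma[j] = matched domain-2 cluster
        inv_sigma = np.argsort(sigma)                      # inv_sigma[j'] = matched domain-1 cluster
        # 3. Centroid update: pull each centroid towards the matched induced centroid.
        for j in range(k):
            if (ax == j).any() and (ay == sigma[j]).any():
                induced = X[ay == sigma[j]].mean(0)        # induced domain-1 centroid of cluster sigma[j]
                cx[j] = (X[ax == j].sum(0) + alpha * induced) / ((ax == j).sum() + alpha)
        for jp in range(k):
            if (ay == jp).any() and (ax == inv_sigma[jp]).any():
                induced = Y[ax == inv_sigma[jp]].mean(0)   # induced domain-2 centroid of cluster inv_sigma[j']
                cy[jp] = (Y[ay == jp].sum(0) + alpha * induced) / ((ay == jp).sum() + alpha)
    return ax, ay, sigma

# Toy data in the spirit of the generative process: paired points around a 3x3 grid of means.
means = np.array([[i, j] for i in range(3) for j in range(3)], dtype=float)
classes = np.random.default_rng(1).integers(0, 9, size=500)
X = means[classes] + 0.3 * np.random.default_rng(2).normal(size=(500, 2))
Y = means[classes] + 0.3 * np.random.default_rng(3).normal(size=(500, 2))
ax, ay, sigma = simcluster(X, Y, k=9)

Here sigma plays the role of the cluster alignment that a dialog designer can inspect to pair user-utterance clusters with agent-utterance clusters.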
The parameter INLINEFORM7 of the above algorithm was set to 0.5 and k was set to 9 (since the points could be generated from one of the 9 gaussians with centroids on a INLINEFORM8 grid). We compared the results with simple K-means clustering with k set to 9. For each of these, the initialization of means was done using INLINEFORM0 sampling approach BIBREF14 . Evaluation and Results To evaluate the clusters we computed the following metrics ARI (Adjusted Rand Index): Standard Rand Index is a metric used to check the clustering quality against a given standard set of clusters by comparing the pairwise clustering decisions. It is defined as INLINEFORM0 , where a is the number of true positive pairs, b is the number of true negative pairs, c is the number of false positive pairs and d is the number of false negative pairs. Adjusted rand index corrects the standard rand index for chance and is defined as INLINEFORM1 BIBREF15 . We compute ARI score for both the source clusters as well as the target clusters. F1 scores: We also report F1 scores for the pairwise clustering decisions. In the above notation we considered the pair-precision as INLINEFORM0 and recall as INLINEFORM1 . The F1 measure is the Harmonic mean given as INLINEFORM2 . We used the gaussian index from which an utterance pair was generated as the ground truth label, which served to provide ground truth clusters for computation of the above evaluation metrics. Table TABREF15 shows a comparison of the results on SimCluster versus K-means algorithm. Here our SimCluster algorithm improves the F1-scores from 0.412 and 0.417 in the two domains to 0.442 and 0.441. The ARI scores also improve from 0.176 and 0.180 to 0.203 and 0.204. We also performed experiments to see how the performance of SimCluster is affected by the variance in the cluster (controlled by the generative process in Algorithm SECREF11 ). Intuitively we expect SimCluster to obtain an advantage over simple K-means when variance is larger. This is because at larger variance, the data points are more likely to be generated away from the centroid due to which they might be clustered incorrectly with the points from neighbouring cluster. However if the corresponding point from the other domain is generated closer to the centroid, it might help in clustering the given data point correctly. We performed these experiments with points generated from Algorithm SECREF11 at differet values of variance. We generated the points with centroids located on a grid of size INLINEFORM0 in each domain. The value of k was set to 9. The experiment was repeated for each value of variance between 0.1 to 1.0 in the intervals of 0.1. Figures FIGREF22 and FIGREF23 show the percentage improvement on ARI score and F1 score respectively achieved by SimCluster (over K-means) versus variance. Description and preprocessing of dataset We have experimented on a dataset containing Twitter conversations between customers and Amazon help. The dataset consisted of 92130 conversations between customers and amazon help. We considered the conversations with exactly two speakers Amazon Help and a customer. Consecutive utterances by the same speaker were concatenated and considered as a single utterance. From these we extracted adjacency pairs of the form of a customer utterance followed by an agent (Amazon Help) utterance. 
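The ARI and pairwise F1 measures used in the evaluation above are straightforward to reproduce; the sketch below (a hypothetical helper, quadratic in the number of points) counts pairwise same-cluster decisions directly and uses scikit-learn for the adjusted Rand index.

from itertools import combinations
from sklearn.metrics import adjusted_rand_score

def pairwise_f1(predicted, truth):
    """F1 over pairwise 'same cluster' decisions against the ground-truth labels."""
    tp = fp = fn = 0
    for i, j in combinations(range(len(predicted)), 2):
        same_pred = predicted[i] == predicted[j]
        same_true = truth[i] == truth[j]
        if same_pred and same_true:
            tp += 1                      # true positive pair
        elif same_pred:
            fp += 1                      # false positive pair
        elif same_true:
            fn += 1                      # false negative pair
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy usage: ground-truth gaussian indices vs. predicted cluster assignments.
truth = [0, 0, 1, 1, 2, 2]
predicted = [1, 1, 0, 2, 2, 2]
print("ARI:", adjusted_rand_score(truth, predicted))
print("pairwise F1:", pairwise_f1(predicted, truth))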
We then selected the utterance pairs from 8 different categories, like late delivery of item, refund, unable to sign into the account, replacement of item, claim of warranty, tracking delivery information etc. A total of 1944 utterance pairs were selected. To create the vector representation we had used two distinct approaches:- Paragraph to vector approach (Doc2Vec) by Le and Mikolov LM14. Here we trained the vectors using distributed memory algorithm and trained for 40 iterations. A window size of 4 was used. We also trained the vectors using sequence to sequence approach BIBREF8 , on the Twitter dataset where we considered the task of predicting the reply of Amazon Help for customer's query and vice versa. The encoded vector from the input sequence forms the corresponding vector representation. For the task of generating the agent's response for customer utterance the encoding from the input sequence (in the trained model) forms the vector representation for the customer utterance. Similarly for the task of generating the previous customer utterance from the agent's response, the intermediate encoding forms the vector representation for the agent utterance. We used an LSTM based 3-layered sequence to sequence model with attention for this task. We ran the K-means clustering algorithm for 5 iterations followed by our SimCluster algorithm for 30 iterations to form clusters in both the (customer and agent) domains. The hyper parameter( INLINEFORM0 ) is chosen based on a validation set. We varied the value of INLINEFORM1 from 0.5 to 1.0 at intervals of 0.025. The initialization of centroids was performed using INLINEFORM2 sampling approach BIBREF14 . Results For the clusters so obtained we have computed F1 and ARI measures as before and compared with the K-means approach. We used the partitioning formed by the 8 categories (from which the utterance pairs were selected) as the ground truth clustering. Table TABREF20 summarizes the results. We observe that for K-means algorithm, the vectors generated from sequence to sequence model perform better than the vectors generated using paragraph to vector for both the domains. This is expected as the vectors generated from sequence to sequence model encode some adjacency information as well. We further observe that the SimCluster approach performs better than the K-means approach for both the vector representations. It improves the F1-scores for Doc2Vec representation from 0.787 and 0.783 to 0.88 and 0.887 in the two domains. Also the F1-scores on Seq2Seq based representation improve from 0.83 and 0.9 to 0.86 and 0.916 using SimCluster. However the gains are much more in case of Doc2Vec representations than Seq2Seq representations since Doc2Vec did not have any information from the other domain where as some amount of this information is already captured by Seq2Seq representation. Moreover it is the clustering of customer utterances which is likely to see an improvement. This is because agent utterances tends to follow a generic pattern while customer utterances tend to be more varied. Considering agent utterances while generating clusters in the user domain thus tends to be more helpful than the other way round. Table TABREF25 shows qualitative results on the same dataset. Column 1 and 2 consists of clusters of utterances in customer domain and agent domain respectively. The utterances with usual font are representative utterances from clusters obtained through K-means clustering. 
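A rough gensim sketch of the Doc2Vec representation described above, with the stated settings (distributed memory, window size 4, 40 training epochs); the utterance list, vector size and minimum count are placeholders, and the seq2seq alternative would instead take the encoder state of an LSTM trained to predict the paired utterance.

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

utterances = [
    "my package is late where is my order",
    "sorry to hear that could you share the order id",
    "unable to sign into my account",
]
documents = [TaggedDocument(words=u.split(), tags=[i]) for i, u in enumerate(utterances)]

# dm=1 selects the distributed-memory algorithm; window and epochs follow the description above.
model = Doc2Vec(documents, dm=1, window=4, epochs=40, vector_size=100, min_count=1)

# Vector representation of an utterance, used as input to K-means or SimCluster.
vector = model.infer_vector("my item has not been delivered yet".split())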
The utterances in bold face indicate similar utterances that K-means incorrectly placed in different clusters but that the SimCluster algorithm correctly grouped together. Conclusions One of the first steps to automate the construction of conversational systems could be to identify the frequent user utterances and their corresponding system responses. In this paper we proposed an approach to compute these groups of utterances by clustering the utterances in both domains using our novel SimCluster algorithm, which seeks to simultaneously cluster the utterances and align them across the two domains. Through our experiments on a synthetically generated dataset we have shown that SimCluster holds a larger advantage over K-means on datasets with larger variance. Our technique improves upon the ARI and F1 scores on a real dataset containing Twitter conversations. Acknowledgments We thank Dr. David Nahamoo (CTO, Speech Technology and Fellow, IBM Research) for his valuable guidance and feedback. We also acknowledge the anonymous reviewers of IJCNLP 2017 for their comments.
Yes
28e7711f94e093137eb8828f0b1eff1b05e4fa38
28e7711f94e093137eb8828f0b1eff1b05e4fa38_0
Q: How do they divide text into utterances? Text: Introduction There are several existing works that focus on modelling conversation using prior human to human conversational data BIBREF0 , BIBREF1 , BIBREF2 . BIBREF3 models the conversation from pairs of consecutive tweets. Deep learning based approaches have also been used to model the dialog in an end to end manner BIBREF4 , BIBREF5 . Memory networks have been used by Bordes et al Bor16 to model goal based dialog conversations. More recently, deep reinforcement learning models have been used for generating interactive and coherent dialogs BIBREF6 and negotiation dialogs BIBREF7 . Industry on the other hand has focused on building frameworks that allow manual specification of dialog models such as api.ai, Watson Conversational Services, and Microsoft Bot framework. These frameworks provide ways to specify intents, and a dialog flow. The user utterances are mapped to intents that are passed to a dialog flow manager. The dialog manager generates a response and updates the dialog state. See Figure FIGREF4 for an example of some intents and a dialog flow in a technical support domain. The dialog flow shows that when a user expresses an intent of # laptop_heat, then the system should respond with an utterance “Could you let me know the serial number of your machine ”. The designer needs to specify intents (for example # laptop_heat, # email_not_opening) and also provide corresponding system responses in the dialog flow. This way of specifying a dialog model using intents and corresponding system responses manually is more popular in industry than a data driven approach as it makes dialog model easy to interpret and debug as well as provides a better control to a dialog designer. However, this is very time consuming and laborious and thus involves huge costs. One approach to reduce the task of a dialog designer is to provide her with frequent user intents and possible corresponding system responses in a given domain. This can be done by analysing prior human to human conversations in the domain. Figure FIGREF5 (a) provides some example conversations in the technical support domain between users and agents. In order to identify frequent user intents, one can use existing clustering algorithms to group together all the utterances from the users. Here each cluster would correspond to a new intent and each utterance in the cluster would correspond to an example for the intent. Similarly the agents utterances can be clustered to identify system responses. However, we argue that rather than treating user utterances and agents responses in an isolated manner, there is merit in jointly clustering them. There is adjacency information of these utterances that can be utilized to identify better user intents and system responses. As an example, consider agent utterances A.2 in box A and A.2 in box C in Figure FIGREF5 (a). The utterances “Which operating system do you use?" and “What OS is installed in your machine" have no syntactic similarity and therefore may not be grouped together. However the fact that these utterances are adjacent to the similar user utterances “I am unable to start notes email client" and “Unable to start my email client" provides some evidence that the agent utterances might be similar. 
Similarly the user utterances “My system keeps getting rebooted" and “Machine is booting time and again" ( box B and D in Figure FIGREF5 (a))- that are syntactically not similar - could be grouped together since the adjacent agent utterances, “Is your machine heating up?" and “Is the machine heating?" are similar. Joint clustering of user utterances and agent utterances allow us to align the user utterance clusters with agent utterance clusters. Figure FIGREF5 (b) shows some examples of user utterance clusters and agent utterance clusters along with their alignments. Note that the user utterance clusters can be used by a dialog designer to specify intents, the agent utterance clusters can be used to create system responses and their alignment can be used to create part of the dialog flow. We propose two ways to take adjacency information into account. Firstly we propose a method called SimCluster for jointly or simultaneously clustering user utterances and agent utterances. SimCluster extends the K-means clustering method by incorporating additional penalty terms in the objective function that try to align the clusters together (described in Section SECREF3 ). The algorithm creates initial user utterance clusters as well as agent utterance clusters and then use bi-partite matching to get the best alignment across these clusters. Minimizing the objective function pushes the cluster centroids to move towards the centroids of the aligned clusters. The process implicitly ensures that the similarity of adjacent agent utterances affect the grouping of user utterances and conversely similarity of adjacent user utterances affect the grouping of agent utterances. In our second approach we use the information about neighbouring utterances for creating the vector representation of an utterance. For this we train a sequence to sequence model BIBREF8 to create the vectors (described in Section SECREF5 ). Our experiments described in section SECREF5 show that we achieve upto 10% absolute improvement in F1 scores over standard K-means using SimCluster. Also we observe that clustering of customer utterances gains significantly by using the adjacency information of agent utterances whereas the gain in clustering quality of agent utterances is moderate. This is because the agent utterances typically follow similar syntactic constructs whereas customer utterances are more varied. Considering the agent utterances into account while clustering users utterances is thus helpful. The organization of the rest of the paper is as follows. In Section SECREF2 we describe the related work. In Section SECREF3 we describe our problem formulation for clustering and the associated algorithm. Finally in sections SECREF4 and SECREF5 we discuss our experiments on synthetic and real datasets respectively. Related Work The notion of adjacency pairs was introduced by Sacks et al SSE74 to formalize the structure of a dialog. Adjacency pairs have been used to analyze the semantics of the dialog in computational linguistics community BIBREF9 . Clustering has been used for different tasks related to conversation. BIBREF10 considers the task of discovering dialog acts by clustering the raw utterances. We aim to obtain the frequent adjacency pairs through clustering. 
There have been several works regarding extensions of clustering to different scenarios such as:- The Proposed Approach In this section we describe our approach SimCluster that performs clustering in the two domains simultaneously and ensures that the generated clusters can be aligned with each other. We will describe the model in section SECREF9 and the algorithm in Section SECREF11 . Model We consider a problem setting where we are given a collection of pairs of consecutive utterances, with vector representations INLINEFORM0 where INLINEFORM1 s are in speaker 1's domain and INLINEFORM2 s are in speaker 2's domain. We need to simultaneously cluster the utterances in their respective domains to minimize the variations within each domain and also ensure that the clusters for both domains are close together. We denote the clusters for speaker 1's domain by INLINEFORM0 with their respective means INLINEFORM1 . We denote the clusters assignments for INLINEFORM2 by INLINEFORM3 . We denote the clusters for second speaker by INLINEFORM0 with their respective means INLINEFORM1 . We denote the clusters assignments for INLINEFORM2 by INLINEFORM3 . The usual energy function has the terms for distance of points from their corresponding cluster centroids. To be able to ensure that the clusters in each domain are similar, we also consider an alignment between the centroids of the two domains. Since the semantic representations in the two domains are not comparable we consider a notion of induced centroids. We define the induced centroids INLINEFORM0 as the arithmetic means of the points INLINEFORM1 s such that INLINEFORM2 's have the same cluster assigned to them. Similarly, we define INLINEFORM3 as the arithmetic means of INLINEFORM4 s such that INLINEFORM5 s have the same cluster assigned to them. More formally, we define these induced centroids as:- INLINEFORM6 and INLINEFORM0 The alignment between these clusters given by the function INLINEFORM0 , which is a bijective mapping from the cluster indices in speaker 1's domain to those in speaker 2's domain. Though there can be several choices for this alignment function, we consider this alignment to be a matching which maximizes the sum of number of common indices in the aligned clusters. More formally we define INLINEFORM1 Then the matching INLINEFORM0 is defined to be the bijective function which maximizes INLINEFORM1 . We consider a term in the cost function corresponding to the sum of distances between the original centroids and the matched induced centroids. Our overall cost function is now given by:- INLINEFORM2 We explain the above definition via an example. Consider the clusters shown in Figure FIGREF10 . Here the INLINEFORM0 would match INLINEFORM1 to INLINEFORM2 , INLINEFORM3 to INLINEFORM4 and INLINEFORM5 to INLINEFORM6 , giving a match score of 6. Since INLINEFORM7 , INLINEFORM8 and INLINEFORM9 are present in the cluster INLINEFORM10 , INLINEFORM11 is given by INLINEFORM12 . Similarly INLINEFORM13 In a similar manner, INLINEFORM0 s can also be defined. Now the alignment terms are given by:- INLINEFORM1 SimCluster Algorithm [] SimCluster [1] SimClusterInput: INLINEFORM0 ,k (No. of cluster) Output: A cluster assignment INLINEFORM1 for INLINEFORM2 s and a cluster assignment INLINEFORM3 for INLINEFORM4 s Initialize a set of centroids INLINEFORM5 , and INLINEFORM6 Perform simple clustering for a few iterations For each i, compute INLINEFORM7 as the index j among 1 to k which minimizes INLINEFORM8 . 
Similarly , compute INLINEFORM9 as the index j' among 1 to k which minimizes INLINEFORM10 . Update the centroids, INLINEFORM11 and INLINEFORM12 as:- INLINEFORM13 and INLINEFORM0 Perform a Hungarian matching between the cluster indices in the two domains with weights N(j,j') on edges from index j to index j'. convergence To minimize the above energy term we adopt an approach similar to Lloyd's clustering algorithm Llo82 . We assume that we are given a set of initial seeds for the cluster centroids INLINEFORM0 and INLINEFORM1 . We repeat the following steps iteratively:- Minimize the energy with respect to cluster assignment keeping centroids unchanged. As in standard K-means algorithm, this is achieved by updating the cluster assignment, INLINEFORM0 for each index i to be the cluster index j which minimizes INLINEFORM1 . Correspondingly for INLINEFORM2 , we pick the cluster index j' which minimizes INLINEFORM3 . Minimize the energy with respect to the centroids keeping cluster assignment unchanged. To achieve this step we need to minimize the energy function with respect to the centroids INLINEFORM0 and INLINEFORM1 . This is achieved by setting INLINEFORM2 for each j and INLINEFORM3 for each j. Setting INLINEFORM0 , we obtain INLINEFORM1 or equivalently INLINEFORM0 Similarly, setting INLINEFORM0 , we obtain INLINEFORM1 Finally we update the matching between the clusters. To do so, we need to find a bipartite matching match on the cluster indices so as to maximize INLINEFORM0 . We use Hungarian algorithm BIBREF13 to perform the same i.e. we define a bipartite graph with vertices consisting of cluster indices in the two domains. There is an edge from vertex representing cluster indices j (in domain 1) and j' in domain 2, with weight N(j,j'). We find a maximum weight bipartite matching in this graph. Similar to Lloyd's algorithm, each step of the above algorithm decreases the cost function. This ensures that the algorithm achieves a local minima of the cost function if it converges. See Algorithm SECREF11 for a formal description of the approach. The centroid update step of the above algorithm also has an intuitive explanation i.e. we are slightly moving away the centroid towards the matched induced centroid. This is consistent with our goal of aligning the clusters together in the two domains. Alignment The algorithm above maintains a mapping between the clusters in each speaker's domain. This mapping serves to give us the alignment between the clusters required to provide a corresponding response for a given user intent. Experiments on Synthetic Dataset We performed experiments on synthetically generated dataset since it gives us a better control over the distribution of the data. Specifically we compared the gains obtained using our approach versus the variance of the distribution. We created dataset from the following generative process. [H] Generative Process [1] Generate data Pick k points INLINEFORM0 as domain -1 means and a corresponding set of k points INLINEFORM1 as domain-2 means, and covariance matrices INLINEFORM2 iter INLINEFORM0 upto num INLINEFORM1 samples Sample class INLINEFORM2 Sample INLINEFORM3 Sample INLINEFORM4 Add q and a so sampled to the list of q,a pairs We generated the dataset from the above sampling process with means selected on a 2 dimensional grid of size INLINEFORM5 with variance set as INLINEFORM6 in each dimension.10000 sample points were generated. 
The parameter INLINEFORM7 of the above algorithm was set to 0.5 and k was set to 9 (since the points could be generated from one of the 9 gaussians with centroids on a INLINEFORM8 grid). We compared the results with simple K-means clustering with k set to 9. For each of these, the initialization of means was done using INLINEFORM0 sampling approach BIBREF14 . Evaluation and Results To evaluate the clusters we computed the following metrics ARI (Adjusted Rand Index): Standard Rand Index is a metric used to check the clustering quality against a given standard set of clusters by comparing the pairwise clustering decisions. It is defined as INLINEFORM0 , where a is the number of true positive pairs, b is the number of true negative pairs, c is the number of false positive pairs and d is the number of false negative pairs. Adjusted rand index corrects the standard rand index for chance and is defined as INLINEFORM1 BIBREF15 . We compute ARI score for both the source clusters as well as the target clusters. F1 scores: We also report F1 scores for the pairwise clustering decisions. In the above notation we considered the pair-precision as INLINEFORM0 and recall as INLINEFORM1 . The F1 measure is the Harmonic mean given as INLINEFORM2 . We used the gaussian index from which an utterance pair was generated as the ground truth label, which served to provide ground truth clusters for computation of the above evaluation metrics. Table TABREF15 shows a comparison of the results on SimCluster versus K-means algorithm. Here our SimCluster algorithm improves the F1-scores from 0.412 and 0.417 in the two domains to 0.442 and 0.441. The ARI scores also improve from 0.176 and 0.180 to 0.203 and 0.204. We also performed experiments to see how the performance of SimCluster is affected by the variance in the cluster (controlled by the generative process in Algorithm SECREF11 ). Intuitively we expect SimCluster to obtain an advantage over simple K-means when variance is larger. This is because at larger variance, the data points are more likely to be generated away from the centroid due to which they might be clustered incorrectly with the points from neighbouring cluster. However if the corresponding point from the other domain is generated closer to the centroid, it might help in clustering the given data point correctly. We performed these experiments with points generated from Algorithm SECREF11 at differet values of variance. We generated the points with centroids located on a grid of size INLINEFORM0 in each domain. The value of k was set to 9. The experiment was repeated for each value of variance between 0.1 to 1.0 in the intervals of 0.1. Figures FIGREF22 and FIGREF23 show the percentage improvement on ARI score and F1 score respectively achieved by SimCluster (over K-means) versus variance. Description and preprocessing of dataset We have experimented on a dataset containing Twitter conversations between customers and Amazon help. The dataset consisted of 92130 conversations between customers and amazon help. We considered the conversations with exactly two speakers Amazon Help and a customer. Consecutive utterances by the same speaker were concatenated and considered as a single utterance. From these we extracted adjacency pairs of the form of a customer utterance followed by an agent (Amazon Help) utterance. 
We then selected the utterance pairs from 8 different categories, like late delivery of item, refund, unable to sign into the account, replacement of item, claim of warranty, tracking delivery information etc. A total of 1944 utterance pairs were selected. To create the vector representation we had used two distinct approaches:- Paragraph to vector approach (Doc2Vec) by Le and Mikolov LM14. Here we trained the vectors using distributed memory algorithm and trained for 40 iterations. A window size of 4 was used. We also trained the vectors using sequence to sequence approach BIBREF8 , on the Twitter dataset where we considered the task of predicting the reply of Amazon Help for customer's query and vice versa. The encoded vector from the input sequence forms the corresponding vector representation. For the task of generating the agent's response for customer utterance the encoding from the input sequence (in the trained model) forms the vector representation for the customer utterance. Similarly for the task of generating the previous customer utterance from the agent's response, the intermediate encoding forms the vector representation for the agent utterance. We used an LSTM based 3-layered sequence to sequence model with attention for this task. We ran the K-means clustering algorithm for 5 iterations followed by our SimCluster algorithm for 30 iterations to form clusters in both the (customer and agent) domains. The hyper parameter( INLINEFORM0 ) is chosen based on a validation set. We varied the value of INLINEFORM1 from 0.5 to 1.0 at intervals of 0.025. The initialization of centroids was performed using INLINEFORM2 sampling approach BIBREF14 . Results For the clusters so obtained we have computed F1 and ARI measures as before and compared with the K-means approach. We used the partitioning formed by the 8 categories (from which the utterance pairs were selected) as the ground truth clustering. Table TABREF20 summarizes the results. We observe that for K-means algorithm, the vectors generated from sequence to sequence model perform better than the vectors generated using paragraph to vector for both the domains. This is expected as the vectors generated from sequence to sequence model encode some adjacency information as well. We further observe that the SimCluster approach performs better than the K-means approach for both the vector representations. It improves the F1-scores for Doc2Vec representation from 0.787 and 0.783 to 0.88 and 0.887 in the two domains. Also the F1-scores on Seq2Seq based representation improve from 0.83 and 0.9 to 0.86 and 0.916 using SimCluster. However the gains are much more in case of Doc2Vec representations than Seq2Seq representations since Doc2Vec did not have any information from the other domain where as some amount of this information is already captured by Seq2Seq representation. Moreover it is the clustering of customer utterances which is likely to see an improvement. This is because agent utterances tends to follow a generic pattern while customer utterances tend to be more varied. Considering agent utterances while generating clusters in the user domain thus tends to be more helpful than the other way round. Table TABREF25 shows qualitative results on the same dataset. Column 1 and 2 consists of clusters of utterances in customer domain and agent domain respectively. The utterances with usual font are representative utterances from clusters obtained through K-means clustering. 
The utterances in bold face indicate similar utterances that K-means incorrectly placed in different clusters but that the SimCluster algorithm correctly grouped together. Conclusions One of the first steps to automate the construction of conversational systems could be to identify the frequent user utterances and their corresponding system responses. In this paper we proposed an approach to compute these groups of utterances by clustering the utterances in both domains using our novel SimCluster algorithm, which seeks to simultaneously cluster the utterances and align them across the two domains. Through our experiments on a synthetically generated dataset we have shown that SimCluster holds a larger advantage over K-means on datasets with larger variance. Our technique improves upon the ARI and F1 scores on a real dataset containing Twitter conversations. Acknowledgments We thank Dr. David Nahamoo (CTO, Speech Technology and Fellow, IBM Research) for his valuable guidance and feedback. We also acknowledge the anonymous reviewers of IJCNLP 2017 for their comments.
Unanswerable
49b38189b8336ce41d0f0b4c5c9459722736e15b
49b38189b8336ce41d0f0b4c5c9459722736e15b_0
Q: Do they use the same distance metric for both the SimCluster and K-means algorithm? Text: Introduction There are several existing works that focus on modelling conversation using prior human to human conversational data BIBREF0 , BIBREF1 , BIBREF2 . BIBREF3 models the conversation from pairs of consecutive tweets. Deep learning based approaches have also been used to model the dialog in an end to end manner BIBREF4 , BIBREF5 . Memory networks have been used by Bordes et al Bor16 to model goal based dialog conversations. More recently, deep reinforcement learning models have been used for generating interactive and coherent dialogs BIBREF6 and negotiation dialogs BIBREF7 . Industry on the other hand has focused on building frameworks that allow manual specification of dialog models such as api.ai, Watson Conversational Services, and Microsoft Bot framework. These frameworks provide ways to specify intents, and a dialog flow. The user utterances are mapped to intents that are passed to a dialog flow manager. The dialog manager generates a response and updates the dialog state. See Figure FIGREF4 for an example of some intents and a dialog flow in a technical support domain. The dialog flow shows that when a user expresses an intent of # laptop_heat, then the system should respond with an utterance “Could you let me know the serial number of your machine ”. The designer needs to specify intents (for example # laptop_heat, # email_not_opening) and also provide corresponding system responses in the dialog flow. This way of specifying a dialog model using intents and corresponding system responses manually is more popular in industry than a data driven approach as it makes dialog model easy to interpret and debug as well as provides a better control to a dialog designer. However, this is very time consuming and laborious and thus involves huge costs. One approach to reduce the task of a dialog designer is to provide her with frequent user intents and possible corresponding system responses in a given domain. This can be done by analysing prior human to human conversations in the domain. Figure FIGREF5 (a) provides some example conversations in the technical support domain between users and agents. In order to identify frequent user intents, one can use existing clustering algorithms to group together all the utterances from the users. Here each cluster would correspond to a new intent and each utterance in the cluster would correspond to an example for the intent. Similarly the agents utterances can be clustered to identify system responses. However, we argue that rather than treating user utterances and agents responses in an isolated manner, there is merit in jointly clustering them. There is adjacency information of these utterances that can be utilized to identify better user intents and system responses. As an example, consider agent utterances A.2 in box A and A.2 in box C in Figure FIGREF5 (a). The utterances “Which operating system do you use?" and “What OS is installed in your machine" have no syntactic similarity and therefore may not be grouped together. However the fact that these utterances are adjacent to the similar user utterances “I am unable to start notes email client" and “Unable to start my email client" provides some evidence that the agent utterances might be similar. 
Similarly the user utterances “My system keeps getting rebooted" and “Machine is booting time and again" ( box B and D in Figure FIGREF5 (a))- that are syntactically not similar - could be grouped together since the adjacent agent utterances, “Is your machine heating up?" and “Is the machine heating?" are similar. Joint clustering of user utterances and agent utterances allow us to align the user utterance clusters with agent utterance clusters. Figure FIGREF5 (b) shows some examples of user utterance clusters and agent utterance clusters along with their alignments. Note that the user utterance clusters can be used by a dialog designer to specify intents, the agent utterance clusters can be used to create system responses and their alignment can be used to create part of the dialog flow. We propose two ways to take adjacency information into account. Firstly we propose a method called SimCluster for jointly or simultaneously clustering user utterances and agent utterances. SimCluster extends the K-means clustering method by incorporating additional penalty terms in the objective function that try to align the clusters together (described in Section SECREF3 ). The algorithm creates initial user utterance clusters as well as agent utterance clusters and then use bi-partite matching to get the best alignment across these clusters. Minimizing the objective function pushes the cluster centroids to move towards the centroids of the aligned clusters. The process implicitly ensures that the similarity of adjacent agent utterances affect the grouping of user utterances and conversely similarity of adjacent user utterances affect the grouping of agent utterances. In our second approach we use the information about neighbouring utterances for creating the vector representation of an utterance. For this we train a sequence to sequence model BIBREF8 to create the vectors (described in Section SECREF5 ). Our experiments described in section SECREF5 show that we achieve upto 10% absolute improvement in F1 scores over standard K-means using SimCluster. Also we observe that clustering of customer utterances gains significantly by using the adjacency information of agent utterances whereas the gain in clustering quality of agent utterances is moderate. This is because the agent utterances typically follow similar syntactic constructs whereas customer utterances are more varied. Considering the agent utterances into account while clustering users utterances is thus helpful. The organization of the rest of the paper is as follows. In Section SECREF2 we describe the related work. In Section SECREF3 we describe our problem formulation for clustering and the associated algorithm. Finally in sections SECREF4 and SECREF5 we discuss our experiments on synthetic and real datasets respectively. Related Work The notion of adjacency pairs was introduced by Sacks et al SSE74 to formalize the structure of a dialog. Adjacency pairs have been used to analyze the semantics of the dialog in computational linguistics community BIBREF9 . Clustering has been used for different tasks related to conversation. BIBREF10 considers the task of discovering dialog acts by clustering the raw utterances. We aim to obtain the frequent adjacency pairs through clustering. 
There have been several works regarding extensions of clustering to different scenarios such as:- The Proposed Approach In this section we describe our approach SimCluster that performs clustering in the two domains simultaneously and ensures that the generated clusters can be aligned with each other. We will describe the model in section SECREF9 and the algorithm in Section SECREF11 . Model We consider a problem setting where we are given a collection of pairs of consecutive utterances, with vector representations INLINEFORM0 where INLINEFORM1 s are in speaker 1's domain and INLINEFORM2 s are in speaker 2's domain. We need to simultaneously cluster the utterances in their respective domains to minimize the variations within each domain and also ensure that the clusters for both domains are close together. We denote the clusters for speaker 1's domain by INLINEFORM0 with their respective means INLINEFORM1 . We denote the clusters assignments for INLINEFORM2 by INLINEFORM3 . We denote the clusters for second speaker by INLINEFORM0 with their respective means INLINEFORM1 . We denote the clusters assignments for INLINEFORM2 by INLINEFORM3 . The usual energy function has the terms for distance of points from their corresponding cluster centroids. To be able to ensure that the clusters in each domain are similar, we also consider an alignment between the centroids of the two domains. Since the semantic representations in the two domains are not comparable we consider a notion of induced centroids. We define the induced centroids INLINEFORM0 as the arithmetic means of the points INLINEFORM1 s such that INLINEFORM2 's have the same cluster assigned to them. Similarly, we define INLINEFORM3 as the arithmetic means of INLINEFORM4 s such that INLINEFORM5 s have the same cluster assigned to them. More formally, we define these induced centroids as:- INLINEFORM6 and INLINEFORM0 The alignment between these clusters given by the function INLINEFORM0 , which is a bijective mapping from the cluster indices in speaker 1's domain to those in speaker 2's domain. Though there can be several choices for this alignment function, we consider this alignment to be a matching which maximizes the sum of number of common indices in the aligned clusters. More formally we define INLINEFORM1 Then the matching INLINEFORM0 is defined to be the bijective function which maximizes INLINEFORM1 . We consider a term in the cost function corresponding to the sum of distances between the original centroids and the matched induced centroids. Our overall cost function is now given by:- INLINEFORM2 We explain the above definition via an example. Consider the clusters shown in Figure FIGREF10 . Here the INLINEFORM0 would match INLINEFORM1 to INLINEFORM2 , INLINEFORM3 to INLINEFORM4 and INLINEFORM5 to INLINEFORM6 , giving a match score of 6. Since INLINEFORM7 , INLINEFORM8 and INLINEFORM9 are present in the cluster INLINEFORM10 , INLINEFORM11 is given by INLINEFORM12 . Similarly INLINEFORM13 In a similar manner, INLINEFORM0 s can also be defined. Now the alignment terms are given by:- INLINEFORM1 SimCluster Algorithm [] SimCluster [1] SimClusterInput: INLINEFORM0 ,k (No. of cluster) Output: A cluster assignment INLINEFORM1 for INLINEFORM2 s and a cluster assignment INLINEFORM3 for INLINEFORM4 s Initialize a set of centroids INLINEFORM5 , and INLINEFORM6 Perform simple clustering for a few iterations For each i, compute INLINEFORM7 as the index j among 1 to k which minimizes INLINEFORM8 . 
Similarly , compute INLINEFORM9 as the index j' among 1 to k which minimizes INLINEFORM10 . Update the centroids, INLINEFORM11 and INLINEFORM12 as:- INLINEFORM13 and INLINEFORM0 Perform a Hungarian matching between the cluster indices in the two domains with weights N(j,j') on edges from index j to index j'. convergence To minimize the above energy term we adopt an approach similar to Lloyd's clustering algorithm Llo82 . We assume that we are given a set of initial seeds for the cluster centroids INLINEFORM0 and INLINEFORM1 . We repeat the following steps iteratively:- Minimize the energy with respect to cluster assignment keeping centroids unchanged. As in standard K-means algorithm, this is achieved by updating the cluster assignment, INLINEFORM0 for each index i to be the cluster index j which minimizes INLINEFORM1 . Correspondingly for INLINEFORM2 , we pick the cluster index j' which minimizes INLINEFORM3 . Minimize the energy with respect to the centroids keeping cluster assignment unchanged. To achieve this step we need to minimize the energy function with respect to the centroids INLINEFORM0 and INLINEFORM1 . This is achieved by setting INLINEFORM2 for each j and INLINEFORM3 for each j. Setting INLINEFORM0 , we obtain INLINEFORM1 or equivalently INLINEFORM0 Similarly, setting INLINEFORM0 , we obtain INLINEFORM1 Finally we update the matching between the clusters. To do so, we need to find a bipartite matching match on the cluster indices so as to maximize INLINEFORM0 . We use Hungarian algorithm BIBREF13 to perform the same i.e. we define a bipartite graph with vertices consisting of cluster indices in the two domains. There is an edge from vertex representing cluster indices j (in domain 1) and j' in domain 2, with weight N(j,j'). We find a maximum weight bipartite matching in this graph. Similar to Lloyd's algorithm, each step of the above algorithm decreases the cost function. This ensures that the algorithm achieves a local minima of the cost function if it converges. See Algorithm SECREF11 for a formal description of the approach. The centroid update step of the above algorithm also has an intuitive explanation i.e. we are slightly moving away the centroid towards the matched induced centroid. This is consistent with our goal of aligning the clusters together in the two domains. Alignment The algorithm above maintains a mapping between the clusters in each speaker's domain. This mapping serves to give us the alignment between the clusters required to provide a corresponding response for a given user intent. Experiments on Synthetic Dataset We performed experiments on synthetically generated dataset since it gives us a better control over the distribution of the data. Specifically we compared the gains obtained using our approach versus the variance of the distribution. We created dataset from the following generative process. [H] Generative Process [1] Generate data Pick k points INLINEFORM0 as domain -1 means and a corresponding set of k points INLINEFORM1 as domain-2 means, and covariance matrices INLINEFORM2 iter INLINEFORM0 upto num INLINEFORM1 samples Sample class INLINEFORM2 Sample INLINEFORM3 Sample INLINEFORM4 Add q and a so sampled to the list of q,a pairs We generated the dataset from the above sampling process with means selected on a 2 dimensional grid of size INLINEFORM5 with variance set as INLINEFORM6 in each dimension.10000 sample points were generated. 
The parameter INLINEFORM7 of the above algorithm was set to 0.5 and k was set to 9 (since the points could be generated from one of the 9 Gaussians with centroids on a INLINEFORM8 grid). We compared the results with simple K-means clustering with k set to 9. For each of these, the initialization of means was done using the INLINEFORM0 sampling approach BIBREF14 . Evaluation and Results To evaluate the clusters we computed the following metrics. ARI (Adjusted Rand Index): the standard Rand Index is a metric used to check clustering quality against a given reference set of clusters by comparing the pairwise clustering decisions. It is defined as INLINEFORM0 , where a is the number of true positive pairs, b is the number of true negative pairs, c is the number of false positive pairs and d is the number of false negative pairs. The Adjusted Rand Index corrects the standard Rand Index for chance and is defined as INLINEFORM1 BIBREF15 . We compute the ARI score for both the source clusters and the target clusters. F1 scores: we also report F1 scores for the pairwise clustering decisions. In the above notation we consider the pair-precision as INLINEFORM0 and the recall as INLINEFORM1 ; the F1 measure is their harmonic mean, given as INLINEFORM2 . We used the index of the Gaussian from which an utterance pair was generated as the ground-truth label, which provides ground-truth clusters for the computation of the above evaluation metrics. Table TABREF15 shows a comparison of the results of SimCluster versus the K-means algorithm. Here our SimCluster algorithm improves the F1-scores from 0.412 and 0.417 in the two domains to 0.442 and 0.441. The ARI scores also improve from 0.176 and 0.180 to 0.203 and 0.204. We also performed experiments to see how the performance of SimCluster is affected by the variance of the clusters (controlled by the generative process in Algorithm SECREF11 ). Intuitively, we expect SimCluster to gain an advantage over simple K-means when the variance is larger. This is because at larger variance the data points are more likely to be generated away from the centroid, and hence might be clustered incorrectly with points from a neighbouring cluster. However, if the corresponding point from the other domain is generated closer to the centroid, it can help in clustering the given data point correctly. We performed these experiments with points generated from Algorithm SECREF11 at different values of variance. We generated the points with centroids located on a grid of size INLINEFORM0 in each domain. The value of k was set to 9. The experiment was repeated for each value of variance from 0.1 to 1.0 at intervals of 0.1. Figures FIGREF22 and FIGREF23 show the percentage improvement in ARI score and F1 score, respectively, achieved by SimCluster (over K-means) versus variance. Description and preprocessing of dataset We experimented on a dataset containing Twitter conversations between customers and Amazon Help. The dataset consisted of 92130 conversations between customers and Amazon Help. We considered only the conversations with exactly two speakers, Amazon Help and a customer. Consecutive utterances by the same speaker were concatenated and treated as a single utterance. From these we extracted adjacency pairs of the form of a customer utterance followed by an agent (Amazon Help) utterance.
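The pairwise evaluation metrics described above admit a short sketch. The formulas below are the standard pairwise definitions in terms of the counts a, b, c and d and are assumed to correspond to the expressions elided as INLINEFORM in the text, while the chance-corrected ARI can be taken off the shelf from scikit-learn.

```python
from itertools import combinations
from sklearn.metrics import adjusted_rand_score

def pairwise_f1(true_labels, pred_labels):
    # Simple O(n^2) pass over all pairs of points; adequate for the dataset sizes used here.
    a = b = c = d = 0   # true-positive, true-negative, false-positive, false-negative pairs
    for i, j in combinations(range(len(true_labels)), 2):
        same_true = true_labels[i] == true_labels[j]
        same_pred = pred_labels[i] == pred_labels[j]
        if same_pred and same_true:
            a += 1
        elif not same_pred and not same_true:
            b += 1
        elif same_pred and not same_true:
            c += 1
        else:
            d += 1
    precision = a / (a + c) if a + c else 0.0
    recall = a / (a + d) if a + d else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    rand_index = (a + b) / (a + b + c + d)
    return f1, rand_index

# Usage: f1, ri = pairwise_f1(true_labels, pred_labels)
#        ari = adjusted_rand_score(true_labels, pred_labels)   # chance-corrected Rand Index
```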
We then selected the utterance pairs from 8 different categories, such as late delivery of an item, refund, inability to sign into the account, replacement of an item, claim of warranty, and tracking of delivery information. A total of 1944 utterance pairs were selected. To create the vector representations we used two distinct approaches: Paragraph to vector approach (Doc2Vec) by Le and Mikolov LM14. Here we trained the vectors using the distributed memory algorithm for 40 iterations with a window size of 4. We also trained vectors using the sequence to sequence approach BIBREF8 on the Twitter dataset, where we considered the task of predicting the reply of Amazon Help to a customer's query and vice versa. The encoded vector from the input sequence forms the corresponding vector representation. For the task of generating the agent's response to a customer utterance, the encoding of the input sequence (in the trained model) forms the vector representation of the customer utterance. Similarly, for the task of generating the preceding customer utterance from the agent's response, the intermediate encoding forms the vector representation of the agent utterance. We used an LSTM-based 3-layered sequence to sequence model with attention for this task. We ran the K-means clustering algorithm for 5 iterations followed by our SimCluster algorithm for 30 iterations to form clusters in both the (customer and agent) domains. The hyperparameter INLINEFORM0 was chosen based on a validation set; we varied the value of INLINEFORM1 from 0.5 to 1.0 at intervals of 0.025. The initialization of centroids was performed using the INLINEFORM2 sampling approach BIBREF14 . Results For the clusters so obtained we computed the F1 and ARI measures as before and compared them with the K-means approach. We used the partitioning formed by the 8 categories (from which the utterance pairs were selected) as the ground-truth clustering. Table TABREF20 summarizes the results. We observe that for the K-means algorithm, the vectors generated by the sequence to sequence model perform better than the vectors generated using paragraph to vector in both domains. This is expected, as the vectors generated by the sequence to sequence model encode some adjacency information as well. We further observe that the SimCluster approach performs better than the K-means approach for both vector representations. It improves the F1-scores for the Doc2Vec representation from 0.787 and 0.783 to 0.88 and 0.887 in the two domains. The F1-scores on the Seq2Seq-based representation also improve from 0.83 and 0.9 to 0.86 and 0.916 using SimCluster. However, the gains are much larger for the Doc2Vec representations than for the Seq2Seq representations, since Doc2Vec does not carry any information from the other domain, whereas some of this information is already captured by the Seq2Seq representation. Moreover, it is the clustering of customer utterances that is most likely to see an improvement. This is because agent utterances tend to follow a generic pattern while customer utterances are more varied, so taking agent utterances into account while generating clusters in the user domain tends to be more helpful than the other way round. Table TABREF25 shows qualitative results on the same dataset. Columns 1 and 2 consist of clusters of utterances in the customer domain and agent domain respectively. The utterances in regular font are representative utterances from clusters obtained through K-means clustering.
The utterances in boldface indicate similar utterances which were incorrectly placed in different clusters by K-means but were correctly grouped together with these utterances by the SimCluster algorithm. Conclusions One of the first steps towards automating the construction of conversational systems could be to identify the frequent user utterances and their corresponding system responses. In this paper we proposed an approach to compute these groups of utterances by clustering the utterances in both domains using our novel SimCluster algorithm, which seeks to simultaneously cluster the utterances and align them across the two domains. Through our experiments on a synthetically generated dataset we have shown that SimCluster's advantage over K-means grows on datasets with larger variance. Our technique also improves upon the ARI and F1 scores on a real dataset containing Twitter conversations. Acknowledgments We thank Dr. David Nahamoo (CTO, Speech Technology and Fellow, IBM Research) for his valuable guidance and feedback. We also acknowledge the anonymous reviewers of IJCNLP 2017 for their comments.
Yes
40c2bab4a6bf3c0628079fcf19e8b52f27f51d98
40c2bab4a6bf3c0628079fcf19e8b52f27f51d98_0
Q: How do they generate the synthetic dataset? Text: Introduction There are several existing works that focus on modelling conversation using prior human to human conversational data BIBREF0 , BIBREF1 , BIBREF2 . BIBREF3 models the conversation from pairs of consecutive tweets. Deep learning based approaches have also been used to model the dialog in an end to end manner BIBREF4 , BIBREF5 . Memory networks have been used by Bordes et al Bor16 to model goal based dialog conversations. More recently, deep reinforcement learning models have been used for generating interactive and coherent dialogs BIBREF6 and negotiation dialogs BIBREF7 . Industry on the other hand has focused on building frameworks that allow manual specification of dialog models such as api.ai, Watson Conversational Services, and Microsoft Bot framework. These frameworks provide ways to specify intents, and a dialog flow. The user utterances are mapped to intents that are passed to a dialog flow manager. The dialog manager generates a response and updates the dialog state. See Figure FIGREF4 for an example of some intents and a dialog flow in a technical support domain. The dialog flow shows that when a user expresses an intent of # laptop_heat, then the system should respond with an utterance “Could you let me know the serial number of your machine ”. The designer needs to specify intents (for example # laptop_heat, # email_not_opening) and also provide corresponding system responses in the dialog flow. This way of specifying a dialog model using intents and corresponding system responses manually is more popular in industry than a data driven approach as it makes dialog model easy to interpret and debug as well as provides a better control to a dialog designer. However, this is very time consuming and laborious and thus involves huge costs. One approach to reduce the task of a dialog designer is to provide her with frequent user intents and possible corresponding system responses in a given domain. This can be done by analysing prior human to human conversations in the domain. Figure FIGREF5 (a) provides some example conversations in the technical support domain between users and agents. In order to identify frequent user intents, one can use existing clustering algorithms to group together all the utterances from the users. Here each cluster would correspond to a new intent and each utterance in the cluster would correspond to an example for the intent. Similarly the agents utterances can be clustered to identify system responses. However, we argue that rather than treating user utterances and agents responses in an isolated manner, there is merit in jointly clustering them. There is adjacency information of these utterances that can be utilized to identify better user intents and system responses. As an example, consider agent utterances A.2 in box A and A.2 in box C in Figure FIGREF5 (a). The utterances “Which operating system do you use?" and “What OS is installed in your machine" have no syntactic similarity and therefore may not be grouped together. However the fact that these utterances are adjacent to the similar user utterances “I am unable to start notes email client" and “Unable to start my email client" provides some evidence that the agent utterances might be similar. 
Similarly the user utterances “My system keeps getting rebooted" and “Machine is booting time and again" ( box B and D in Figure FIGREF5 (a))- that are syntactically not similar - could be grouped together since the adjacent agent utterances, “Is your machine heating up?" and “Is the machine heating?" are similar. Joint clustering of user utterances and agent utterances allow us to align the user utterance clusters with agent utterance clusters. Figure FIGREF5 (b) shows some examples of user utterance clusters and agent utterance clusters along with their alignments. Note that the user utterance clusters can be used by a dialog designer to specify intents, the agent utterance clusters can be used to create system responses and their alignment can be used to create part of the dialog flow. We propose two ways to take adjacency information into account. Firstly we propose a method called SimCluster for jointly or simultaneously clustering user utterances and agent utterances. SimCluster extends the K-means clustering method by incorporating additional penalty terms in the objective function that try to align the clusters together (described in Section SECREF3 ). The algorithm creates initial user utterance clusters as well as agent utterance clusters and then use bi-partite matching to get the best alignment across these clusters. Minimizing the objective function pushes the cluster centroids to move towards the centroids of the aligned clusters. The process implicitly ensures that the similarity of adjacent agent utterances affect the grouping of user utterances and conversely similarity of adjacent user utterances affect the grouping of agent utterances. In our second approach we use the information about neighbouring utterances for creating the vector representation of an utterance. For this we train a sequence to sequence model BIBREF8 to create the vectors (described in Section SECREF5 ). Our experiments described in section SECREF5 show that we achieve upto 10% absolute improvement in F1 scores over standard K-means using SimCluster. Also we observe that clustering of customer utterances gains significantly by using the adjacency information of agent utterances whereas the gain in clustering quality of agent utterances is moderate. This is because the agent utterances typically follow similar syntactic constructs whereas customer utterances are more varied. Considering the agent utterances into account while clustering users utterances is thus helpful. The organization of the rest of the paper is as follows. In Section SECREF2 we describe the related work. In Section SECREF3 we describe our problem formulation for clustering and the associated algorithm. Finally in sections SECREF4 and SECREF5 we discuss our experiments on synthetic and real datasets respectively. Related Work The notion of adjacency pairs was introduced by Sacks et al SSE74 to formalize the structure of a dialog. Adjacency pairs have been used to analyze the semantics of the dialog in computational linguistics community BIBREF9 . Clustering has been used for different tasks related to conversation. BIBREF10 considers the task of discovering dialog acts by clustering the raw utterances. We aim to obtain the frequent adjacency pairs through clustering. 
There have been several works regarding extensions of clustering to different scenarios such as:- The Proposed Approach In this section we describe our approach SimCluster that performs clustering in the two domains simultaneously and ensures that the generated clusters can be aligned with each other. We will describe the model in section SECREF9 and the algorithm in Section SECREF11 . Model We consider a problem setting where we are given a collection of pairs of consecutive utterances, with vector representations INLINEFORM0 where INLINEFORM1 s are in speaker 1's domain and INLINEFORM2 s are in speaker 2's domain. We need to simultaneously cluster the utterances in their respective domains to minimize the variations within each domain and also ensure that the clusters for both domains are close together. We denote the clusters for speaker 1's domain by INLINEFORM0 with their respective means INLINEFORM1 . We denote the clusters assignments for INLINEFORM2 by INLINEFORM3 . We denote the clusters for second speaker by INLINEFORM0 with their respective means INLINEFORM1 . We denote the clusters assignments for INLINEFORM2 by INLINEFORM3 . The usual energy function has the terms for distance of points from their corresponding cluster centroids. To be able to ensure that the clusters in each domain are similar, we also consider an alignment between the centroids of the two domains. Since the semantic representations in the two domains are not comparable we consider a notion of induced centroids. We define the induced centroids INLINEFORM0 as the arithmetic means of the points INLINEFORM1 s such that INLINEFORM2 's have the same cluster assigned to them. Similarly, we define INLINEFORM3 as the arithmetic means of INLINEFORM4 s such that INLINEFORM5 s have the same cluster assigned to them. More formally, we define these induced centroids as:- INLINEFORM6 and INLINEFORM0 The alignment between these clusters given by the function INLINEFORM0 , which is a bijective mapping from the cluster indices in speaker 1's domain to those in speaker 2's domain. Though there can be several choices for this alignment function, we consider this alignment to be a matching which maximizes the sum of number of common indices in the aligned clusters. More formally we define INLINEFORM1 Then the matching INLINEFORM0 is defined to be the bijective function which maximizes INLINEFORM1 . We consider a term in the cost function corresponding to the sum of distances between the original centroids and the matched induced centroids. Our overall cost function is now given by:- INLINEFORM2 We explain the above definition via an example. Consider the clusters shown in Figure FIGREF10 . Here the INLINEFORM0 would match INLINEFORM1 to INLINEFORM2 , INLINEFORM3 to INLINEFORM4 and INLINEFORM5 to INLINEFORM6 , giving a match score of 6. Since INLINEFORM7 , INLINEFORM8 and INLINEFORM9 are present in the cluster INLINEFORM10 , INLINEFORM11 is given by INLINEFORM12 . Similarly INLINEFORM13 In a similar manner, INLINEFORM0 s can also be defined. Now the alignment terms are given by:- INLINEFORM1 SimCluster Algorithm [] SimCluster [1] SimClusterInput: INLINEFORM0 ,k (No. of cluster) Output: A cluster assignment INLINEFORM1 for INLINEFORM2 s and a cluster assignment INLINEFORM3 for INLINEFORM4 s Initialize a set of centroids INLINEFORM5 , and INLINEFORM6 Perform simple clustering for a few iterations For each i, compute INLINEFORM7 as the index j among 1 to k which minimizes INLINEFORM8 . 
Similarly , compute INLINEFORM9 as the index j' among 1 to k which minimizes INLINEFORM10 . Update the centroids, INLINEFORM11 and INLINEFORM12 as:- INLINEFORM13 and INLINEFORM0 Perform a Hungarian matching between the cluster indices in the two domains with weights N(j,j') on edges from index j to index j'. convergence To minimize the above energy term we adopt an approach similar to Lloyd's clustering algorithm Llo82 . We assume that we are given a set of initial seeds for the cluster centroids INLINEFORM0 and INLINEFORM1 . We repeat the following steps iteratively:- Minimize the energy with respect to cluster assignment keeping centroids unchanged. As in standard K-means algorithm, this is achieved by updating the cluster assignment, INLINEFORM0 for each index i to be the cluster index j which minimizes INLINEFORM1 . Correspondingly for INLINEFORM2 , we pick the cluster index j' which minimizes INLINEFORM3 . Minimize the energy with respect to the centroids keeping cluster assignment unchanged. To achieve this step we need to minimize the energy function with respect to the centroids INLINEFORM0 and INLINEFORM1 . This is achieved by setting INLINEFORM2 for each j and INLINEFORM3 for each j. Setting INLINEFORM0 , we obtain INLINEFORM1 or equivalently INLINEFORM0 Similarly, setting INLINEFORM0 , we obtain INLINEFORM1 Finally we update the matching between the clusters. To do so, we need to find a bipartite matching match on the cluster indices so as to maximize INLINEFORM0 . We use Hungarian algorithm BIBREF13 to perform the same i.e. we define a bipartite graph with vertices consisting of cluster indices in the two domains. There is an edge from vertex representing cluster indices j (in domain 1) and j' in domain 2, with weight N(j,j'). We find a maximum weight bipartite matching in this graph. Similar to Lloyd's algorithm, each step of the above algorithm decreases the cost function. This ensures that the algorithm achieves a local minima of the cost function if it converges. See Algorithm SECREF11 for a formal description of the approach. The centroid update step of the above algorithm also has an intuitive explanation i.e. we are slightly moving away the centroid towards the matched induced centroid. This is consistent with our goal of aligning the clusters together in the two domains. Alignment The algorithm above maintains a mapping between the clusters in each speaker's domain. This mapping serves to give us the alignment between the clusters required to provide a corresponding response for a given user intent. Experiments on Synthetic Dataset We performed experiments on synthetically generated dataset since it gives us a better control over the distribution of the data. Specifically we compared the gains obtained using our approach versus the variance of the distribution. We created dataset from the following generative process. [H] Generative Process [1] Generate data Pick k points INLINEFORM0 as domain -1 means and a corresponding set of k points INLINEFORM1 as domain-2 means, and covariance matrices INLINEFORM2 iter INLINEFORM0 upto num INLINEFORM1 samples Sample class INLINEFORM2 Sample INLINEFORM3 Sample INLINEFORM4 Add q and a so sampled to the list of q,a pairs We generated the dataset from the above sampling process with means selected on a 2 dimensional grid of size INLINEFORM5 with variance set as INLINEFORM6 in each dimension.10000 sample points were generated. 
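A minimal sketch of the generative process just described is given below: paired points are drawn from matched Gaussians in the two domains, with the shared class index serving as the ground-truth label. The 3 x 3 grid layout (implied by k = 9), the grid spacing, and the way the corresponding domain-2 means are chosen are assumptions made here for illustration.

```python
import numpy as np

def generate_pairs(num_samples=10000, variance=0.5, grid=3, spacing=3.0, seed=0):
    rng = np.random.default_rng(seed)
    # k = grid * grid class means per domain, laid out on a 2-D grid (3 x 3 here, since k = 9).
    mu_q = np.array([(i * spacing, j * spacing) for i in range(grid) for j in range(grid)], dtype=float)
    mu_a = mu_q + rng.normal(0.0, 0.5, mu_q.shape)   # a corresponding set of domain-2 means (arbitrary choice)
    classes = rng.integers(0, len(mu_q), size=num_samples)           # sample a class c for each pair
    std = np.sqrt(variance)
    q = mu_q[classes] + rng.normal(0.0, std, (num_samples, 2))       # q ~ N(mu1_c, variance * I)
    a = mu_a[classes] + rng.normal(0.0, std, (num_samples, 2))       # a ~ N(mu2_c, variance * I)
    return q, a, classes                                             # `classes` are the ground-truth labels
```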
The parameter INLINEFORM7 of the above algorithm was set to 0.5 and k was set to 9 (since the points could be generated from one of the 9 Gaussians with centroids on a INLINEFORM8 grid). We compared the results with simple K-means clustering with k set to 9. For each of these, the initialization of means was done using the INLINEFORM0 sampling approach BIBREF14 . Evaluation and Results To evaluate the clusters we computed the following metrics. ARI (Adjusted Rand Index): the standard Rand Index is a metric used to check clustering quality against a given reference set of clusters by comparing the pairwise clustering decisions. It is defined as INLINEFORM0 , where a is the number of true positive pairs, b is the number of true negative pairs, c is the number of false positive pairs and d is the number of false negative pairs. The Adjusted Rand Index corrects the standard Rand Index for chance and is defined as INLINEFORM1 BIBREF15 . We compute the ARI score for both the source clusters and the target clusters. F1 scores: we also report F1 scores for the pairwise clustering decisions. In the above notation we consider the pair-precision as INLINEFORM0 and the recall as INLINEFORM1 ; the F1 measure is their harmonic mean, given as INLINEFORM2 . We used the index of the Gaussian from which an utterance pair was generated as the ground-truth label, which provides ground-truth clusters for the computation of the above evaluation metrics. Table TABREF15 shows a comparison of the results of SimCluster versus the K-means algorithm. Here our SimCluster algorithm improves the F1-scores from 0.412 and 0.417 in the two domains to 0.442 and 0.441. The ARI scores also improve from 0.176 and 0.180 to 0.203 and 0.204. We also performed experiments to see how the performance of SimCluster is affected by the variance of the clusters (controlled by the generative process in Algorithm SECREF11 ). Intuitively, we expect SimCluster to gain an advantage over simple K-means when the variance is larger. This is because at larger variance the data points are more likely to be generated away from the centroid, and hence might be clustered incorrectly with points from a neighbouring cluster. However, if the corresponding point from the other domain is generated closer to the centroid, it can help in clustering the given data point correctly. We performed these experiments with points generated from Algorithm SECREF11 at different values of variance. We generated the points with centroids located on a grid of size INLINEFORM0 in each domain. The value of k was set to 9. The experiment was repeated for each value of variance from 0.1 to 1.0 at intervals of 0.1. Figures FIGREF22 and FIGREF23 show the percentage improvement in ARI score and F1 score, respectively, achieved by SimCluster (over K-means) versus variance. Description and preprocessing of dataset We experimented on a dataset containing Twitter conversations between customers and Amazon Help. The dataset consisted of 92130 conversations between customers and Amazon Help. We considered only the conversations with exactly two speakers, Amazon Help and a customer. Consecutive utterances by the same speaker were concatenated and treated as a single utterance. From these we extracted adjacency pairs of the form of a customer utterance followed by an agent (Amazon Help) utterance.
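The preprocessing just described (merging consecutive turns by the same speaker and keeping customer-followed-by-agent adjacency pairs) can be sketched as follows; the conversation format, a list of (speaker, text) tuples, is an assumption about the raw data.

```python
def extract_adjacency_pairs(conversation, agent_name="AmazonHelp"):
    # conversation: ordered list of (speaker, text) tuples for a two-speaker thread.
    merged = []
    for speaker, text in conversation:                   # 1) concatenate consecutive same-speaker turns
        if merged and merged[-1][0] == speaker:
            merged[-1] = (speaker, merged[-1][1] + " " + text)
        else:
            merged.append((speaker, text))
    pairs = []
    for (s1, t1), (s2, t2) in zip(merged, merged[1:]):   # 2) keep customer -> agent adjacency pairs
        if s1 != agent_name and s2 == agent_name:
            pairs.append((t1, t2))
    return pairs
```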
We then selected the utterance pairs from 8 different categories, such as late delivery of an item, refund, inability to sign into the account, replacement of an item, claim of warranty, and tracking of delivery information. A total of 1944 utterance pairs were selected. To create the vector representations we used two distinct approaches: Paragraph to vector approach (Doc2Vec) by Le and Mikolov LM14. Here we trained the vectors using the distributed memory algorithm for 40 iterations with a window size of 4. We also trained vectors using the sequence to sequence approach BIBREF8 on the Twitter dataset, where we considered the task of predicting the reply of Amazon Help to a customer's query and vice versa. The encoded vector from the input sequence forms the corresponding vector representation. For the task of generating the agent's response to a customer utterance, the encoding of the input sequence (in the trained model) forms the vector representation of the customer utterance. Similarly, for the task of generating the preceding customer utterance from the agent's response, the intermediate encoding forms the vector representation of the agent utterance. We used an LSTM-based 3-layered sequence to sequence model with attention for this task. We ran the K-means clustering algorithm for 5 iterations followed by our SimCluster algorithm for 30 iterations to form clusters in both the (customer and agent) domains. The hyperparameter INLINEFORM0 was chosen based on a validation set; we varied the value of INLINEFORM1 from 0.5 to 1.0 at intervals of 0.025. The initialization of centroids was performed using the INLINEFORM2 sampling approach BIBREF14 . Results For the clusters so obtained we computed the F1 and ARI measures as before and compared them with the K-means approach. We used the partitioning formed by the 8 categories (from which the utterance pairs were selected) as the ground-truth clustering. Table TABREF20 summarizes the results. We observe that for the K-means algorithm, the vectors generated by the sequence to sequence model perform better than the vectors generated using paragraph to vector in both domains. This is expected, as the vectors generated by the sequence to sequence model encode some adjacency information as well. We further observe that the SimCluster approach performs better than the K-means approach for both vector representations. It improves the F1-scores for the Doc2Vec representation from 0.787 and 0.783 to 0.88 and 0.887 in the two domains. The F1-scores on the Seq2Seq-based representation also improve from 0.83 and 0.9 to 0.86 and 0.916 using SimCluster. However, the gains are much larger for the Doc2Vec representations than for the Seq2Seq representations, since Doc2Vec does not carry any information from the other domain, whereas some of this information is already captured by the Seq2Seq representation. Moreover, it is the clustering of customer utterances that is most likely to see an improvement. This is because agent utterances tend to follow a generic pattern while customer utterances are more varied, so taking agent utterances into account while generating clusters in the user domain tends to be more helpful than the other way round. Table TABREF25 shows qualitative results on the same dataset. Columns 1 and 2 consist of clusters of utterances in the customer domain and agent domain respectively. The utterances in regular font are representative utterances from clusters obtained through K-means clustering.
The utterances in boldface indicate similar utterances which were incorrectly placed in different clusters by K-means but were correctly grouped together with these utterances by the SimCluster algorithm. Conclusions One of the first steps towards automating the construction of conversational systems could be to identify the frequent user utterances and their corresponding system responses. In this paper we proposed an approach to compute these groups of utterances by clustering the utterances in both domains using our novel SimCluster algorithm, which seeks to simultaneously cluster the utterances and align them across the two domains. Through our experiments on a synthetically generated dataset we have shown that SimCluster's advantage over K-means grows on datasets with larger variance. Our technique also improves upon the ARI and F1 scores on a real dataset containing Twitter conversations. Acknowledgments We thank Dr. David Nahamoo (CTO, Speech Technology and Fellow, IBM Research) for his valuable guidance and feedback. We also acknowledge the anonymous reviewers of IJCNLP 2017 for their comments.
using generative process
33d2919f3400cd3c6fbb6960d74187ec80b41cd6
33d2919f3400cd3c6fbb6960d74187ec80b41cd6_0
Q: how are multiple answers from multiple reformulated questions aggregated? Text: Introduction Web and social media have become primary sources of information. Users' expectations and information seeking activities co-evolve with the increasing sophistication of these resources. Beyond navigation, document retrieval, and simple factual question answering, users seek direct answers to complex and compositional questions. Such search sessions may require multiple iterations, critical assessment, and synthesis BIBREF0 . The productivity of natural language yields a myriad of ways to formulate a question BIBREF1 . In the face of complex information needs, humans overcome uncertainty by reformulating questions, issuing multiple searches, and aggregating responses. Inspired by humans' ability to ask the right questions, we present an agent that learns to carry out this process for the user. The agent sits between the user and a backend QA system that we refer to as `the environment'. We call the agent AQA, as it implements an active question answering strategy. AQA aims to maximize the chance of getting the correct answer by sending a reformulated question to the environment. The agent seeks to find the best answer by asking many questions and aggregating the returned evidence. The internals of the environment are not available to the agent, so it must learn to probe a black-box optimally using only question strings. The key component of the AQA agent is a sequence-to-sequence model trained with reinforcement learning (RL) using a reward based on the answer returned by the environment. The second component to AQA combines the evidence from interacting with the environment using a convolutional neural network to select an answer. We evaluate on a dataset of Jeopardy! questions, SearchQA BIBREF2 . These questions are hard to answer by design because they use convoluted language, e.g., Travel doesn't seem to be an issue for this sorcerer & onetime surgeon; astral projection & teleportation are no prob (answer: Doctor Strange). Thus SearchQA tests the ability of AQA to reformulate questions such that the QA system has the best chance of returning the correct answer. AQA improves over the performance of a deep network built for QA, BiDAF BIBREF3 , which has produced state-of-the-art results on multiple tasks, by 11.4% absolute F1, a 32% relative F1 improvement. Additionally, AQA outperforms other competitive heuristic query reformulation benchmarks. AQA defines an instance of machine-machine communication. One side of the conversation, the AQA agent, is trying to adapt its language to improve the response from the other side, the QA environment. To shed some light on this process we perform a qualitative analysis of the language generated by the AQA agent. By evaluating on MSCOCO BIBREF4 , we find that the agent's question reformulations diverge significantly from natural language paraphrases. Remarkably, though, the agent is able to learn non-trivial and transparent policies. In particular, the agent is able to discover classic IR query operations such as term re-weighting, resembling tf-idf, and morphological simplification/stemming. A possible reason being that current machine comprehension tasks involve the ranking of short textual snippets, thus incentivizing relevance, more than deep language understanding. Related work BIBREF5 learned patterns of question variants by comparing dependency parsing trees. 
BIBREF6 showed that MT-based paraphrases can be useful in principle by providing significant headroom in oracle-based estimations of QA performance. Recently, BIBREF7 used paraphrasing to augment the training of a semantic parser by expanding through the paraphrases as a latent representation. Bilingual corpora and MT have been used to generate paraphrases by pivoting through a second language. Recent work uses neural translation models and multiple pivots BIBREF8 . In contrast, our approach does not use pivoting and is, to our knowledge, the first direct neural paraphrasing system. BIBREF9 propose phrase-based paraphrasing for query expansion. In contrast with this line of work, our goal is to generate full question reformulations while optimizing directly the end-to-end target performance metrics. Reinforcement learning is gaining traction in natural language understanding across many problems. For example, BIBREF10 use RL to learn control policies for multi-user dungeon games where the state of the game is summarized by a textual description, and BIBREF11 use RL for dialogue generation. Policy gradient methods have been investigated recently for MT and other sequence-to-sequence problems. They alleviate limitations inherent to the word-level optimization of the cross-entropy loss, allowing the use of sequence-level reward functions, like BLEU. Reward functions based on language models and reconstruction errors are used to bootstrap MT with fewer resources BIBREF12 . RL training can also prevent exposure bias; an inconsistency between training and inference time stemming from the fact that the model never sees its own mistakes during training BIBREF13 . We also use policy gradient to optimize our agent, however, we use end-to-end question answering quality as the reward. Uses of policy gradient for QA include BIBREF14 , who train a semantic parser to query a knowledge base, and BIBREF15 who propose query reduction networks that transform a query to answer questions that involve multi-hop common sense reasoning. The work of BIBREF16 is most related to ours. They identify a document containing an answer to a question by following links on a graph. Evaluating on a set of questions from the game Jeopardy!, they learn to walk the Wikipedia graph until they reach the predicted answer. In a follow-up, BIBREF17 improve document retrieval with an approach inspired by relevance feedback in combination with RL. They reformulate a query by adding terms from documents retrieved from a search engine for the original query. Our work differs in that we generate complete sequence reformulations rather than adding single terms, and we target question-answering rather than document retrieval. Active QA is also related to recent research on fact-checking: BIBREF18 propose to perturb database queries in order to estimate the support of quantitative claims. In Active QA questions are perturbed semantically with a similar purpose, although directly at the surface natural language form. Active Question Answering Model Figure 1 shows the Active Question Answering (AQA) agent-environment setup. The AQA model interacts with a black-box environment. AQA queries it with many versions of a question, and finally returns the best of the answers found. An episode starts with an original question $q_0$ . The agent then generates a set of reformulations $\lbrace q_i\rbrace _{i=1}^N$ . These are sent to the environment which returns answers $\lbrace a_i\rbrace _{i=1}^N$ . 
The selection model then picks the best from these candidates. Question-Answering Environment For the QA environment, we use a competitive neural question answering model, BiDirectional Attention Flow (BiDAF) BIBREF3 . BiDAF is an extractive QA system, it selects answers from contiguous spans of a given document. Given a question, the environment returns an answer and, during training, a reward. The reward may be any quality metric for the returned answer, we use token-level F1 score. Note that the reward for each answer $a_i$ is computed against the original question $q_0$ . We assume that the environment is opaque; the agent has no access to its parameters, activations or gradients. This setting enables one, in principle, to also interact with other information sources, possibly providing feedback in different modes such as images and structured data from knowledge bases. However, without propagating gradients through the environment we lose information, feedback on the quality of the question reformulations is noisy, presenting a challenge for training. Reformulation Model The reformulator is a sequence-to-sequence model, as is popular for neural machine translation. We build upon the implementation of BIBREF19 . The major departure from the standard MT setting is that our model reformulates utterances in the same language. Unlike in MT, there is little high quality training data available for monolingual paraphrasing. Effective training of highly parametrized neural networks relies on an abundance of data. We address this challenge by first pre-training on a related task, multilingual translation, and then using signals produced during the interaction with the environment for adaptation. Answer Selection Model During training, we have access to the reward for the answer returned for each reformulation $q_i$ . However, at test time we must predict the best answer $a^*$ . The selection model selects the best answer from the set $\lbrace a_i\rbrace _{i=1}^N$ observed during the interaction by predicting the difference of the F1 score to the average F1 of all variants. We use pre-trained embeddings for the tokens of query, rewrite, and answer. For each, we add a 1-dimensional CNN followed by max-pooling. The three resulting vectors are then concatenated and passed through a feed-forward network which produces the output. Question Answering Environment We train a model on the training set for the QA task at hand, see Section "Baselines and Benchmarks" for details. Afterwards, BiDAF becomes the black-box environment and its parameters are not updated further. In principle, we could train both the agent and the environment jointly to further improve performance. However, this is not our desired task: our aim is for the agent to learn to communicate using natural language with an environment over which is has no control. Policy Gradient Training of the Reformulation Model For a given question $q_0$ , we want to return the best possible answer $a^*$ , maximizing a reward $a^*=\operatorname{argmax}_a R(a|q_0)$ . Typically, ${R}$ is the token level F1 score on the answer. The answer $a = f(q)$ is an unknown function of a question $q$ , computed by the environment. The reward is computed with respect to the original question $q_0$ while the answer is provided for $q$ . The question is generated according to a policy $\pi _\theta $ where $\theta $ are the policy's parameters $a^*$0 . 
The policy, in this case, a sequence-to-sequence model, assigns a probability $$\pi _\theta (q|q_0) = \prod _{t=1}^Tp(w_t|w_1,\ldots ,w_{t-1},q_0)$$ (Eq. 7) to any possible question $q = w_1,\ldots ,w_{T}$ , where $T$ is the length of $q$ with tokens $w_t \in V$ from a fixed vocabulary $V$ . The goal is to maximize the expected reward of the answer returned under the policy, $\mathbb {E}_{q\sim \pi _\theta ({}\cdot {}|q_0)}[{R}(f(q))]$ . We optimize the reward directly with respect to parameters of the policy using Policy Gradient methods BIBREF20 . The expected reward cannot be computed in closed form, so we compute an unbiased estimate with Monte Carlo sampling, $$\mathbb {E}_{q\sim \pi _\theta ({}\cdot {}|q_0)}[{R}(f(q))] \approx \dfrac{1}{N} \sum _{i=1}^N {R}(f(q_i)),\quad q_i\sim \pi _\theta ({}\cdot {}|q_0)$$ (Eq. 8) To compute gradients for training we use REINFORCE BIBREF21 , $$\nabla \mathbb {E}_{q\sim \pi _\theta ({}\cdot {}|q_0)}[{R}(f(q))] &= \mathbb {E}_{q\sim \pi _\theta ({}\cdot {}|q_0)}\nabla _\theta \log (\pi _\theta (q|q_0))R(f(q))\\ &\approx \dfrac{1}{N} \sum _{i=1}^N \nabla _\theta \log (\pi (q_i|q_0))R(f(q_i)),\quad q_i\sim \pi _\theta ({}\cdot {}|q_0)$$ (Eq. 9) This estimator is often found to have high variance, leading to unstable training BIBREF22 . We reduce the variance by subtracting the following baseline reward: $B(q_0)=\mathbb {E}_{q\sim \pi _\theta ({}\cdot {}|q_0)}[R(f(q))]$ . This expectation is also computed by sampling from the policy given $q_0$ . We often observed collapse onto a sub-optimal deterministic policy. To address this we use entropy regularization $$H[\pi _{\theta }(q|q_0)] = - \sum _{t=1}^T \sum _{w_t\in V} p_{\theta }(w_t|w_{<t},q_0) \log p_{\theta }(w_t|w_{<t},q_0)$$ (Eq. 10) This final objective is: $$\mathbb {E}_{q\sim \pi _\theta ({}\cdot {}|q_0)}[{R}(f(q)) - B(q_0)] + \lambda H[\pi (q|q_0)],$$ (Eq. 11) where $\lambda $ is the regularization weight. Answer Selection Unlike the reformulation policy, we train the answer with either beam search or sampling. We can produce many rewrites of a single question from our reformulation system. We issue each rewrite to the QA environment, yielding a set of (query, rewrite, answer) tuples from which we need to pick the best instance. We train another neural network to pick the best answer from the candidates. We frame the task as binary classification, distinguishing between above and below average performance. In training, we compute the F1 score of the answer for every instance. If the rewrite produces an answer with an F1 score greater than the average score of the other rewrites the instance is assigned a positive label. We ignore questions where all rewrites yield equally good/bad answers. We evaluated FFNNs, LSTMs, and CNNs and found that the performance of all systems was comparable. We choose a CNN which offers good computational efficiency and accuracy (cf. "Training" ). Pretraining of the Reformulation Model We pre-train the policy by building a paraphrasing Neural MT model that can translate from English to English. While parallel corpora are available for many language pairs, English-English corpora are scarce. We first produce a multilingual translation system that translates between several languages BIBREF23 . This allows us to use available bilingual corpora. Multilingual training requires nothing more than adding two special tokens to every line which indicate the source and target languages. The encoder-decoder architecture of the translation model remains unchanged. 
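A sketch of a single policy-gradient update corresponding to Eqs. (9)-(11) above (REINFORCE with a sampled baseline and entropy regularization) is given below, written in PyTorch for concreteness. The `policy.sample` and `environment.f1` interfaces are hypothetical placeholders for the sequence-to-sequence reformulator and the black-box QA environment; only the loss construction reflects the objective described above.

```python
import torch

def reinforce_step(policy, environment, optimizer, q0, n_samples=20, lam=1e-3):
    # Sample N reformulations of q0. The assumed interface returns the rewrite strings,
    # the per-sequence log-probabilities log pi(q_i | q0) (with gradients), and the
    # summed per-token entropies of the sampling distribution.
    rewrites, log_probs, entropies = policy.sample(q0, n_samples)
    # Black-box reward: token-level F1 of the returned answer, computed against the
    # original question q0, estimated by Monte Carlo sampling as in Eq. (8).
    rewards = torch.tensor([environment.f1(q0, q) for q in rewrites])
    baseline = rewards.mean()                      # sampled estimate of B(q0)
    advantage = (rewards - baseline).detach()      # rewards are constants w.r.t. the policy parameters
    # REINFORCE with baseline (Eq. 9) plus the entropy bonus (Eqs. 10-11);
    # the loss is minimized, so both terms are negated.
    loss = -(advantage * log_probs).mean() - lam * entropies.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```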
As BIBREF23 show, this model can be used for zero-shot translation, i.e. to translate between language pairs for which it has seen no training examples. For example, after training English-Spanish, English-French, French-English, and Spanish-English the model has learned a single encoder that encodes English, Spanish, and French and a decoder for the same three languages. Thus, we can use the same model for French-Spanish, Spanish-French and also English-English translation by adding the respective tokens to the source. BIBREF23 note that zero-shot translation usually performs worse than bridging, an approach that uses the model twice: first, to translate into a pivot language, then into the target language. However, the performance gap can be closed by running a few training steps for the desired language pair. Thus, we first train on multilingual data, then on a small corpus of monolingual data. Question Answering Data and BiDAF training SearchQA BIBREF2 is a dataset built starting from a set of Jeopardy! clues. Clues are obfuscated queries such as This `Father of Our Country' didn't really chop down a cherry tree. Each clue is associated with the correct answer, e.g. George Washington, and a list of snippets from Google's top search results. SearchQA contains over 140k question/answer pairs and 6.9M snippets. We train our model on the pre-defined training split, perform model selection and tuning on the validation split and report results on the validation and test splits. The training, validation and test sets contain 99,820, 13,393 and 27,248 examples, respectively. We train BiDAF directly on the SearchQA training data. We join snippets to form the context from which BiDAF selects answer spans. For performance reasons, we limit the context to the top 10 snippets. This corresponds to finding the answer on the first page of Google results. The results are only mildly affected by this limitation, for 10% of the questions, there is no answer in this shorter context. These data points are all counted as losses. We trained with the Adam optimizer for 4500 steps, using learning rate 0.001, batch size 60. Question Reformulator Training For the pre-training of the reformulator, we use the multilingual United Nations Parallel Corpus v1.0 BIBREF24 . This dataset contains 11.4M sentences which are fully aligned across six UN languages: Arabic, English, Spanish, French, Russian, and Chinese. From all bilingual pairs, we produce a multilingual training corpus of 30 language pairs. This yields 340M training examples which we use to train the zero-shot neural MT system BIBREF23 . We tokenize our data using 16k sentence pieces. Following BIBREF19 we use a bidirectional LSTM as the encoder and a 4-layer stacked LSTM with attention as the decoder. The model converged after training on 400M instances using the Adam optimizer with a learning rate of 0.001 and batch size of 128. The model trained as described above has poor quality. For example, for the question What month, day and year did Super Bowl 50 take place?, the top rewrite is What month and year goes back to the morning and year?. To improve quality, we resume training on a smaller monolingual dataset, extracted from the Paralex database of question paraphrases BIBREF25 . Unfortunately, this data contains many noisy pairs. We filter many of these pairs out by keeping only those where the Jaccard coefficient between the sets of source and target terms is above 0.5. 
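The Jaccard-based filtering of Paralex pairs just mentioned can be sketched as follows; whitespace tokenization and exact term-set overlap are simplifications assumed here.

```python
def jaccard(a, b):
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

def filter_paraphrase_pairs(pairs, threshold=0.5):
    # Keep a (question, paraphrase) pair only if its term-set Jaccard coefficient exceeds the threshold.
    return [(src, tgt) for src, tgt in pairs if jaccard(src, tgt) > threshold]
```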
Further, since the number of paraphrases for each question can vary significantly, we keep at most 4 paraphrases for each question. After processing, we are left with about 1.5M pairs out of the original 35M. The refined model has visibly better quality than the zero-shot one; for the example question above it generates What year did superbowl take place?. We also tried training on the monolingual pairs alone. As in BIBREF23 , the quality was in between the multilingual and refined models. After pre-training the reformulator, we switch the optimizer from Adam to SGD and train for $100\text{k}$ RL steps of batch size 64 with a low learning rate of $0.001$ . We use an entropy regularization weight of $\lambda =0.001$ . For a stopping criterion, we monitor the reward from the best single rewrite, generated via greedy decoding, on the validation set. In contrast to our initial training which we ran on GPUs, this training phase is dominated by the latency of the QA system and we run inference and updates on CPU and the BiDAF environment on GPU. Training the Answer Selector For the selection model we use supervised learning: first, we train the reformulator, then we generate $N=20$ rewrites for each question in the SearchQA training and validation sets. After sending these to the environment we have about 2M (question, rewrite, answer) triples. We remove queries where all rewrites yield identical rewards, which removes about half of the training data. We use pre-trained 100-dimensional embeddings BIBREF26 for the tokens. Our CNN-based selection model encodes the three strings into 100-dimensional vectors using a 1D CNN with kernel width 3 and output dimension 100 over the embedded tokens, followed by max-pooling. The vectors are then concatenated and passed through a feed-forward network which produces the binary output, indicating whether the triple performs below or above average, relative to the other reformulations and respective answers. We use the training portion of the SearchQA data thrice, first for the initial training of the BiDAF model, then for the reinforcement-learning based tuning of the reformulator, and finally for the training of the selector. We carefully monitored that this didn’t cause severe overfitting. BiDAF alone has a generalization gap between the training and validation set errors of 3.4 F1. This gap remains virtually identical after training the rewriter. After training the CNN, AQA-Full has a slightly larger gap of 3.9 F1. We conclude that training AQA on BiDAF’s training set causes very little additional overfitting. We use the test set only for evaluation of the final model. Baselines and Benchmarks As a baseline, we report the results of the modified pointer network, called Attention Sum Reader (ASR), developed for SearchQA BIBREF2 . We also report the performance of the BiDAF environment used without the reformulator to answer the original question. We evaluate against several benchmarks. First, following BIBREF27 , we implement a system (MI-SubQuery) that generates reformulation candidates by enumerating all subqueries of the original SearchQA query and then keeps the top $N$ ranked by mutual information. From this set, we pick the highest scoring one as the top hypothesis to be used as a single rewrite. We also use the whole set to train a CNN answer selector for this specific source of rewrites. In this way, we can compare systems fairly both in single prediction or ensemble prediction modes. 
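The CNN answer selector described above (a width-3 1-D convolution with 100 output channels and max-pooling over time for each of the query, rewrite and answer, followed by concatenation and a feed-forward classifier) could be realized as in the following sketch. PyTorch, the hidden-layer size and the padding choice are assumptions; the paper does not specify the framework.

```python
import torch
import torch.nn as nn

class AnswerSelector(nn.Module):
    """Scores (query, rewrite, answer) triples; trained as above/below-average classification."""
    def __init__(self, emb_dim=100, conv_dim=100, hidden=100):
        super().__init__()
        # One width-3 convolution with 100 output channels per input string.
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, conv_dim, kernel_size=3, padding=1) for _ in range(3)]
        )
        self.classifier = nn.Sequential(
            nn.Linear(3 * conv_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def encode(self, conv, x):
        # x: (batch, seq_len, emb_dim) pre-trained token embeddings.
        h = torch.relu(conv(x.transpose(1, 2)))   # Conv1d expects (batch, channels, seq_len)
        return h.max(dim=2).values                # max-pooling over time -> (batch, conv_dim)

    def forward(self, query_emb, rewrite_emb, answer_emb):
        feats = torch.cat(
            [self.encode(c, x) for c, x in zip(self.convs, (query_emb, rewrite_emb, answer_emb))],
            dim=1,
        )
        return self.classifier(feats).squeeze(-1)  # logit: above vs. below average F1
```

Training would use a binary cross-entropy loss on the above/below-average labels; at test time the candidate answer with the highest score is returned.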
Additionally, we evaluate against another source of reformulations: the zero-shot monolingual NMT system trained on the U.N. corpus and Paralex (Base-NMT), without reinforcement learning. As with the MI-SubQuery benchmark, we evaluate the Base-NMT system both as a single reformulation predictor and as a source of $N$ best rewrites, for which we train a dedicated CNN answer selector. We also report human performance on SearchQA, based on a sample of the test set, from BIBREF2 . Results We evaluate several variants of AQA. For each query $q$ in the evaluation we generate a list of reformulations $q_{i}$ , for $i=1\ldots N$ , from the AQA reformulator trained as described in Section "Training" . We set $N=20$ in these experiments, the same value is used for the benchmarks. In AQA TopHyp we use the top hypothesis generated by the sequence model, $q_1$ . In AQA Voting we use BiDAF scores for a heuristic weighted voting scheme to implement deterministic selection. Let $a$ be the answer returned by BiDAF for query $q$ , with an associated score $s(a)$ . We pick the answer according to $\operatorname{argmax}_{a} \sum _{a^{\prime }=a} s(a^{\prime })$ . In AQA MaxConf we select the answer with the single highest BiDAF score across question reformulations. Finally, AQA CNN identifies the complete system with the learned CNN model described in Section "Reformulation Model" . Table 1 shows the results. We report exact match (EM) and F1 metrics, computed on token level between the predicted answer and the gold answer. We present results on the full validation and test sets (referred to as $n$ -gram in BIBREF2 ). Overall, SearchQA appears to be harder than other recent QA tasks such as SQuAD BIBREF28 , for both machines and humans. BiDAF's performance drops by 40 F1 points on SearchQA compared to SQuAD. However, BiDAF is still competitive on SeachQA, improving over the Attention Sum Reader network by 13.7 F1 points. Using the top hypothesis already yields an improvement of 2.2 F1 on the test set. This demonstrates that even the reformulator alone is capable to produce questions more easily answered by the environment. When generating a single prediction, both MI-SubQuery and Base-NMT benchmarks perform worse than BiDAF. Heuristic selection via both Voting and Max Conf yields a further performance boost. Both heuristics draw upon the intuition that when BiDAF is confident in its answer it is more likely to be correct, and that multiple instances of the same answer provide positive evidence (for MaxConf, the max operation implicitly rewards having an answer scored with respect to multiple questions). Finally, a trained selection function improves performance further, yielding an absolute increase of 11.4 F1 points (32% relative) over BiDAF with the original questions. In terms of exact match score, this more than closes half the gap between BiDAF and human performance. The benchmarks improve considerably when they generate $N$ candidates, and paired with a dedicated CNN selector. This is not surprising as it provides an ensemble prediction setup. However, the AQA CNN system outperforms both MI-SubQuery and Base-NMT in all conditions by about 3%. Finally, we consider the maximum performance possible that could be achieved by picking the answer with the highest F1 score from the set of those returned for all available reformulations. Here we find that the different sources of rewrites provide comparable headroom: the oracle Exact Match is near 50, while the oracle F1 is close to 58. 
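For reference, the two deterministic aggregation heuristics evaluated above (Voting and MaxConf) can be sketched directly from their definitions; exact string identity between answers is assumed for the voting scheme.

```python
from collections import defaultdict

def aggregate_voting(candidates):
    # candidates: list of (answer_string, bidaf_score) pairs, one per reformulation.
    totals = defaultdict(float)
    for answer, score in candidates:
        totals[answer] += score                     # identical answers pool their scores
    return max(totals, key=totals.get)              # argmax_a sum_{a' = a} s(a')

def aggregate_max_conf(candidates):
    # Single highest-scoring answer across all reformulations.
    return max(candidates, key=lambda pair: pair[1])[0]
```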
Analysis of the agent's language The AQA agent can learn several types of sub-optimal policies. For example, it can converge to deterministic policies by learning to emit the same, meaningless, reformulation for any input question. This strategy can lead to local optima because the environment has built in strong priors on what looks like a likely answer, even ignoring the input question. Hence, convergence to non-negligible performance is easy. Entropy regularization typically fixes this behavior. Too much weight on the entropy regularizer, on the other hand, might yield random policies. A more competitive sub-optimal policy is one that generates minimal changes to the input, in order to stay close to the original question. This is a successful strategy because the environment has been trained on the original questions alone, which leads to baseline performance. It seems quite remarkable then that AQA is able to learn non-trivial reformulation policies, that differ significantly from all of the above. One can think of the policy as a language for formulating questions that the agent has developed while engaging in a machine-machine communication with the environment. In this section we look deeper into the agent's language. General properties We analyze input questions and reformulations on the development partition of SearchQA to gain insights on how the agent's language evolves during training via policy gradient. It is important to note that in the SearchQA dataset the original Jeopardy! clues have been preprocessed by lower-casing and stop word removal. The resulting preprocessed clues that form the sources (inputs) for the sequence-to-sequence reformulation model resemble more keyword-based search queries than grammatical questions. For example, the clue Gandhi was deeply influenced by this count who wrote "War and Peace" is simplified to gandhi deeply influenced count wrote war peace. The (preprocessed) SearchQA questions contain 9.6 words on average. They contain few repeated terms, computed as the mean term frequency (TF) per question. The average is 1.03, but for most of the queries (75%) TF is 1.0. We also compute the median document frequency (DF) per query, where the document is the context from which the answer is selected, as a measure of how informative a term is. As another measure of query performance, we also compute Query Clarity (QC) BIBREF29 . Figure 2 summarizes statistics of the questions and rewrites. We first consider the top hypothesis generated by the pre-trained NMT reformulation system, before reinforcement learning (Base-NMT). The Base-NMT rewrites differ greatly from their sources. They are shorter, 6.3 words on average, and have even fewer repeated terms (1.01). Interestingly, these reformulations are mostly syntactically well-formed questions. For example, the clue above becomes Who influenced count wrote war?. Base-NMT improves structural language quality by properly reinserting dropped function words and wh-phrases. We also verified the increased fluency by using a large language model and found that the Base-NMT rewrites are 50% more likely than the original questions. While more fluent, the Base-NMT rewrites involve lower DF terms. This is probably due to a domain mismatch between SearchQA and the NMT training corpus. The query clarity of the Base-NMT rewrites is also degraded as a result of the transduction process. We next consider the top hypothesis generated by the AQA question reformulator (AQA-QR) after the policy gradient training. 
The AQA-QR rewrites are those whose corresponding answers are evaluated as AQA TopHyp in Table 1. These single rewrites alone outperform the original SearchQA queries by 2% on the test set. We analyze the top hypothesis instead of the final output of the full AQA agent to avoid confounding effects from the answer selection step. These rewrites look different from both the Base-NMT and the SearchQA ones. For the example above, AQA-QR's top hypothesis is What is name gandhi gandhi influence wrote peace peace?. Surprisingly, 99.8% start with the prefix What is name. The second most frequent is What country is (81 times), followed by What is is (70) and What state (14). This is puzzling as it occurs in only 9 Base-NMT rewrites, and never in the original SearchQA questions. We speculate it might be related to the fact that virtually all answers involve names, of named entities (Micronesia) or generic concepts (pizza). AQA-QR's rewrites seem less fluent than both the SearchQA and the Base-NMT counterparts. In terms of language model probability, they are less likely than both SearchQA and Base-NMT. However, they have more repeated terms (1.2 average TF), are significantly longer (11.9 words on average) than the Base-NMT rewrites, and contain more informative context terms than SearchQA questions (lower DF). Also, the translation process does not affect query clarity much. Finally, we find that AQA-QR's reformulations contain morphological variants in 12.5% of cases. The number of questions that contain multiple tokens with the same stem doubles from SearchQA to AQA-QR. Singular forms are preferred over plurals. Morphological simplification is useful because it increases the chance that a word variant in the question matches the context. Paraphrasing quality We also investigate the general paraphrasing abilities of our model, focusing on the relation between paraphrasing quality and QA quality. To tease apart the relationship between paraphrasing and reformulation for QA, we evaluated three variants of the reformulator: Base-NMT This is the model used to initialize RL training of the agent, trained first on the multilingual U.N. corpus and then on the Paralex corpus, as detailed in Section "Question Reformulator Training". Base-NMT-NoParalex This is the model above trained solely on the multilingual U.N. corpus, without the Paralex monolingual corpus. Base-NMT+Quora This is the same as Base-NMT, additionally trained on the Quora dataset, which contains 150k duplicate questions. Following BIBREF30, we evaluate all models on the MSCOCO BIBREF4 validation set (VAL2014). This dataset consists of images with 5 captions each, of which we select a random one as the source and the other four as references. We use beam search to compute the top hypothesis and report uncased, moses-tokenized BLEU using multeval BIBREF31. Please note that the MSCOCO data is only used for evaluation purposes. Examples of all systems can be found in Appendix "Paraphrasing Examples". The Base-NMT model performs at 11.4 BLEU (see Table 1 for the QA evaluation numbers). In contrast, Base-NMT-NoParalex performs poorly at 5.0 BLEU. Limiting training to the multilingual data alone also degrades QA performance: the scores of the Top Hypothesis are at least 5 points lower in all metrics and CNN scores are 2-3 points lower. By training on additional monolingual data, the Base-NMT+Quora model improves the BLEU score slightly to 11.6. 
End-to-end QA performance also improves marginally, the maximum delta with respect to Base-NMT under all conditions is +0.5 points, but the difference is not statistically significant. Thus, adding the Quora training does not have a significant effect. This might be due to the fact that most of the improvement is captured by training on the larger Paralex data set. Improving raw paraphrasing quality as well as reformulation fluency helps AQA up to a point. However, they are only partially aligned with the main task, which is QA performance. The AQA-QR reformulator has a BLEU score of 8.6, well below both Base-NMT models trained on monolingual data. Yet, AQA-QR significantly outperforms all others in the QA task. Training the agent starting from the Base-NMT+Quora model yielded comparable results as starting from Base-NMT. Discussion Recently, BIBREF32 trained chatbots that negotiate via language utterances in order to complete a task. They report that the agent's language diverges from human language if there is no incentive for fluency in the reward function. Our findings seem related. The fact that the questions reformulated by AQA do not resemble natural language is not due to the keyword-like SearchQA input questions, because Base-NMT is capable of producing more fluent questions from the same input. AQA learns to re-weight terms by focusing on informative (lower document frequency), query-specific (high query clarity), terms while increasing term frequency (TF) via duplication. At the same time it learns to modify surface forms in ways akin to stemming and morphological analysis. Some of the techniques seem to adapt to the specific properties of current deep QA architectures such as character-based modeling and attention. Sometimes AQA learns to generate semantically nonsensical, novel, surface term variants; e.g., it might transform the adjective dense to densey. The only justification for this is that such forms can be still exploited by the character-based BiDAF question encoder. Finally, repetitions can directly increase the chances of alignment in the attention components. We hypothesize that, while there is no incentive for the model to use human language due to the nature of the task, AQA learns to ask BiDAF questions by optimizing a language that increases the likelihood of BiDAF ranking better the candidate answers. BIBREF33 argue that reading comprehension systems are not capable of significant language understanding and fail easily in adversarial settings. We speculate that current machine comprehension tasks involve mostly pattern matching and relevance modeling. As a consequence deep QA systems might implement sophisticated ranking systems trained to sort snippets of text from the context. As such, they resemble document retrieval systems which incentivizes the (re-)discovery of IR techniques, such as tf-idf re-weighting and stemming, that have been successful for decades BIBREF34 . Conclusion We propose a new framework to improve question answering. We call it active question answering (AQA), as it aims to improve answering by systematically perturbing input questions. We investigated a first system of this kind that has three components: a question reformulator, a black box QA system, and a candidate answer aggregator. The reformulator and aggregator form a trainable agent that seeks to elicit the best answers from the QA system. Importantly, the agent may only query the environment with natural language questions. 
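The analysis and discussion above rely on simple corpus statistics: term frequency within a question, document frequency against the retrieved context, and query clarity. The sketch below computes the first two; it is not the authors' code, it assumes whitespace tokenization, and it treats each retrieved context snippet as a "document" for the DF computation, which is one plausible reading of the description (query clarity, which requires a collection language model, is omitted).

```python
from collections import Counter
from statistics import median

def mean_term_frequency(question_tokens):
    """Mean term frequency per question: average number of occurrences
    of each distinct term (1.0 means no repeated terms)."""
    counts = Counter(question_tokens)
    return sum(counts.values()) / len(counts)

def median_document_frequency(question_tokens, context_docs):
    """Median, over the distinct query terms, of the number of context
    documents that contain the term."""
    doc_sets = [set(doc) for doc in context_docs]
    dfs = [sum(term in doc for doc in doc_sets) for term in set(question_tokens)]
    return median(dfs)

query = "gandhi deeply influenced count wrote war peace".split()
docs = ["gandhi was influenced by tolstoy".split(),
        "tolstoy wrote war and peace".split()]
print(mean_term_frequency(query))              # 1.0 (no repeated terms)
print(median_document_frequency(query, docs))  # 1
```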
Experimental results prove that the approach is highly effective and that the agent is able to learn non-trivial and somewhat interpretable reformulation policies. For future work, we will continue developing active question answering, investigating the sequential, iterative aspects of information seeking tasks, framed as end-to-end RL problems, thus, closing the loop between the reformulator and the selector. Acknowledgements We would like to thank the anonymous reviewers for their valuable comments and suggestions. We would also like to thank Jyrki Alakuijala, Gábor Bártok, Alexey Gronskiy, Rodrigo Nogueira and Hugo Penedones for insightful discussions and technical feedback. Reformulation Examples r|p6.5cm|p5.1cm Results of the qualitative analysis on SearchQA. For the original Jeopardy! questions we give the reference answer, otherwise the answer given by BiDAF. Model Query Reference / Answer from BiDAF (F1) Jeopardy! People of this nation AKA Nippon wrote with a brush, so painting became the preferred form of artistic expression japan SearchQA people nation aka nippon wrote brush , painting became preferred form artistic expression japan (1.0) MI nippon brush preferred julian (0) Base-NMT Aka nippon written form artistic expression? julian (0) AQA-QR What is name did people nation aka nippon wrote brush expression? japan (1.0) AQA-Full people nation aka nippon wrote brush , painting became preferred form artistic expression japan (1.0) Jeopardy! Michael Caine & Steve Martin teamed up as Lawrence & Freddy, a couple of these, the title of a 1988 film dirty rotten scoundrels SearchQA michael caine steve martin teamed lawrence freddy , couple , title 1988 film dirty rotten scoundrels (1.0) MI caine teamed freddy dirty rotten scoundrels (1.0) Base-NMT Who was lawrence of michael caine steve martin? rain man 1988 best picture fikkle [... 25 tokens] (0.18) AQA-QR What is name is name is name michael caine steve martin teamed lawrence freddy and title 1988 film? dirty rotten scoundrels (1.0) AQA-Full What is name is name where name is name michael caine steve martin teamed lawrence freddy and title 1988 film key 2000 ? dirty rotten scoundrels (1.0) Jeopardy! Used underwater, ammonia gelatin is a waterproof type of this explosive dynamite SearchQA used underwater , ammonia gelatin waterproof type explosive nitroglycerin (0) MI ammonia gelatin waterproof nitroglycerin (0) Base-NMT Where is ammonia gelatin waterproof? nitroglycerin (0) AQA-QR What is name is used under water with ammonia gelatin water waterproof type explosive? nitroglycerin (0) AQA-Full used underwater , ammonia gelatin waterproof type explosive nitroglycerin (0) Jeopardy! The Cleveland Peninsula is about 40 miles northwest of Ketchikan in this state alaska SearchQA cleveland peninsula 40 miles northwest ketchikan state alaska 's community information summary says [... 113 tokens] (0.02) MI cleveland peninsula ketchikan alaska 's dec 16 , 1997 [... 132 tokens] (0.01) Base-NMT The cleveland peninsula 40 miles? ketchikan , alaska located northwest tip [... 46 tokens] (0.04) AQA-QR What is name is cleveland peninsula state northwest state state state? alaska (1.0) AQA-Full What is name are cleveland peninsula state northwest state state state ? alaska (1.0) Jeopardy! Tess Ocean, Tinker Bell, Charlotte the Spider julia roberts SearchQA tess ocean , tinker bell , charlotte spider julia roberts tv com charlotte spider [... 87 tokens] (0.04) MI tess tinker spider julia roberts tv com charlotte spider [... 
119 tokens] (0.01) Base-NMT What ocean tess tinker bell? julia roberts american actress producer made [... 206 tokens] (0.02) AQA-QR What is name tess ocean tinker bell link charlotte spider? julia roberts (1.0) AQA-Full What is name is name tess ocean tinker bell spider contain charlotte spider contain hump around the world winter au to finish au de mon moist julia roberts (1.0) Jeopardy! During the Tertiary Period, India plowed into Eurasia & this highest mountain range was formed himalayas SearchQA tertiary period , india plowed eurasia highest mountain range formed himalayas (1.0) MI tertiary plowed eurasia himalayas (1.0) Base-NMT What is eurasia highest mountain range? himalayas (1.0) AQA-QR What is name were tertiary period in india plowed eurasia? himalayas (1.0) AQA-Full tertiary period , india plowed eurasia highest mountain range formed himalayas (1.0) Jeopardy! The melody heard here is from the opera about Serse, better known to us as this "X"-rated Persian king xerxes SearchQA melody heard opera serse , better known us x rated persian king gilbert sullivan (0) MI melody opera persian gilbert sullivan (0) Base-NMT Melody heard opera serse thing? gilbert sullivan (0) AQA-QR What is name melody heard opera serse is better persian king? gilbert sullivan (0) AQA-Full What is name is name melody heard opera serse is better persian king persian K ? gilbert sullivan (0)
The selection model selects the best answer from the set $\lbrace a_i\rbrace _{i=1}^N$ observed during the interaction by predicting the difference between each candidate's F1 score and the average F1 of all variants.
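A minimal sketch of the selector's training target implied by this description, assuming the standard token-level F1 between answer strings; the CNN architecture and its input features are not reproduced here.

```python
from collections import Counter

def token_f1(prediction, gold):
    """Token-level F1 between a predicted and a gold answer string."""
    p, g = prediction.split(), gold.split()
    common = sum((Counter(p) & Counter(g)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

def selector_targets(candidate_answers, gold):
    """Regression target per candidate: its F1 minus the mean F1 of all candidates."""
    f1s = [token_f1(a, gold) for a in candidate_answers]
    mean_f1 = sum(f1s) / len(f1s)
    return [f - mean_f1 for f in f1s]

print(selector_targets(["dirty rotten scoundrels", "rain man"], "dirty rotten scoundrels"))
# [0.5, -0.5]
```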
281cd4e78b27a62713ec43249df5000812522a89
281cd4e78b27a62713ec43249df5000812522a89_0
Q: What is the average length of the claims? Text: Introduction Understanding most nontrivial claims requires insights from various perspectives. Today, we make use of search engines or recommendation systems to retrieve information relevant to a claim, but this process carries multiple forms of bias. In particular, they are optimized relative to the claim (query) presented, and the popularity of the relevant documents returned, rather than with respect to the diversity of the perspectives presented in them or whether they are supported by evidence. In this paper, we explore an approach to mitigating this selection bias BIBREF0 when studying (disputed) claims. Consider the claim shown in Figure FIGREF1 : “animals should have lawful rights.” One might compare the biological similarities/differences between humans and other animals to support/oppose the claim. Alternatively, one can base an argument on morality and rationality of animals, or lack thereof. Each of these arguments, which we refer to as perspectives throughout the paper, is an opinion, possibly conditional, in support of a given claim or against it. A perspective thus constitutes a particular attitude towards a given claim. Natural language understanding is at the heart of developing an ability to identify diverse perspectives for claims. In this work, we propose and study a setting that would facilitate discovering diverse perspectives and their supporting evidence with respect to a given claim. Our goal is to identify and formulate the key NLP challenges underlying this task, and develop a dataset that would allow a systematic study of these challenges. For example, for the claim in Figure FIGREF1 , multiple (non-redundant) perspectives should be retrieved from a pool of perspectives; one of them is “animals have no interest or rationality”, a perspective that should be identified as taking an opposing stance with respect to the claim. Each perspective should also be well-supported by evidence found in a pool of potential pieces of evidence. While it might be impractical to provide an exhaustive spectrum of ideas with respect to a claim, presenting a small but diverse set of perspectives could be an important step towards addressing the selection bias problem. Moreover, it would be impractical to develop an exhaustive pool of evidence for all perspectives, from a diverse set of credible sources. We are not attempting to do that. We aim at formulating the core NLP problems, and developing a dataset that will facilitate studying these problems from the NLP angle, realizing that using the outcomes of this research in practice requires addressing issues such as trustworthiness BIBREF1 , BIBREF2 and possibly others. Inherently, our objective requires understanding the relations between perspectives and claims, the nuances in the meaning of various perspectives in the context of claims, and relations between perspectives and evidence. This, we argue, can be done with a diverse enough, but not exhaustive, dataset. And it can be done without attending to the legitimacy and credibility of sources contributing evidence, an important problem but orthogonal to the one studied here. To facilitate the research towards developing solutions to such challenging issues, we propose [wave]390P[wave]415e[wave]440r[wave]465s[wave]485p[wave]525e[wave]535c[wave]595t[wave]610r[wave]635u[wave]660m, a dataset of claims, perspectives and evidence paragraphs. 
For a given claim and pools of perspectives and evidence paragraphs, a hypothetical system is expected to select the relevant perspectives and their supporting paragraphs. Our dataset contains 907 claims, 11,164 perspectives and 8,092 evidence paragraphs. In constructing it, we use online debate websites as our initial seed data, and augment it with search data and paraphrases to make it richer and more challenging. We make extensive use of crowdsourcing to increase the quality of the data and clean it from annotation noise. The contributions of this paper are as follows: Design Principles and Challenges In this section we provide a closer look into the challenge and propose a collection of tasks that move us closer to substantiated perspective discovery. To clarify our description we use to following notation. Let INLINEFORM0 indicate a target claim of interest (for example, the claims INLINEFORM1 and INLINEFORM2 in Figure FIGREF6 ). Each claim INLINEFORM3 is addressed by a collection of perspectives INLINEFORM4 that are grouped into clusters of equivalent perspectives. Additionally, each perspective INLINEFORM5 is supported, relative to INLINEFORM6 , by at least one evidence paragraph INLINEFORM7 , denoted INLINEFORM8 . Creating systems that would address our challenge in its full glory requires solving the following interdependent tasks: Determination of argue-worthy claims: not every claim requires an in-depth discussion of perspectives. For a system to be practical, it needs to be equipped with understanding argumentative structures BIBREF3 in order to discern disputed claims from those with straightforward responses. We set aside this problem in this work and assume that all the inputs to the systems are discussion-worthy claims. Discovery of pertinent perspectives: a system is expected to recognize argumentative sentences BIBREF4 that directly address the points raised in the disputed claim. For example, while the perspectives in Figure FIGREF6 are topically related to the claims, INLINEFORM0 do not directly address the focus of claim INLINEFORM1 (i.e., “use of animals” in “entertainment”). Perspective equivalence: a system is expected to extract a minimal and diverse set of perspectives. This requires the ability to discover equivalent perspectives INLINEFORM0 , with respect to a claim INLINEFORM1 : INLINEFORM2 . For instance, INLINEFORM3 and INLINEFORM4 are equivalent in the context of INLINEFORM5 ; however, they might not be equivalent with respect to any other claim. The conditional nature of perspective equivalence differentiates it from the paraphrasing task BIBREF5 . Stance classification of perspectives: a system is supposed to assess the stances of the perspectives with respect to the given claim (supporting, opposing, etc.) BIBREF6 . Substantiating the perspectives: a system is expected to find valid evidence paragraph(s) in support of each perspective. Conceptually, this is similar to the well-studied problem of textual entailment BIBREF7 except that here the entailment decisions depend on the choice of claims. Dataset construction In this section we describe a multi-step process, constructed with detailed analysis, substantial refinements and multiple pilots studies. We use crowdsourcing to annotate different aspects of the dataset. We used Amazon Mechanical Turk (AMT) for our annotations, restricting the task to workers in five English-speaking countries (USA, UK, Canada, New Zealand, and Australia), more than 1000 finished HITs and at least a 95% acceptance rate. 
To ensure the diversity of responses, we do not require additional qualifications or demographic information from our annotators. For any of the annotations steps described below, the users are guided to an external platform where they first read the instructions and try a verification step to make sure they have understood the instructions. Only after successful completion are they allowed to start the annotation tasks. Throughout our annotations, it is our aim to make sure that the workers are responding objectively to the tasks (as opposed to using their personal opinions or preferences). The screen-shots of the annotation interfaces for each step are included in the Appendix (Section SECREF56 ). In the steps outlined below, we filter out a subset of the data with low rater–rater agreement INLINEFORM0 (see Appendix SECREF47 ). In certain steps, we use an information retrieval (IR) system to generate the best candidates for the task at hand. We start by crawling the content of a few notable debating websites: idebate.com, debatewise.org, procon.org. This yields INLINEFORM0 claims, INLINEFORM1 perspectives and INLINEFORM2 evidence paragraphs (for complete statistics, see Table TABREF46 in the Appendix). This data is significantly noisy and lacks the structure we would like. In the following steps we explain how we denoise it and augment it with additional data. For each perspective we verify that it is a complete English sentence, with a clear stance with respect to the given claim. For a fixed pair of claim and perspective, we ask the crowd-workers to label the perspective with one of the five categories of support, oppose, mildly-support, mildly-oppose, or not a valid perspective. The reason that we ask for two levels of intensity is to distinguish mild or conditional arguments from those that express stronger positions. Every 10 claims (and their relevant perspectives) are bundled to form a HIT. Three independent annotators solve a HIT, and each gets paid $1.5-2 per HIT. To get rid of the ambiguous/noisy perspectives we measure rater-rater agreement on the resulting data and retain only the subset which has a significant agreement of INLINEFORM0 . To account for minor disagreements in the intensity of perspective stances, before measuring any notion of agreement, we collapse the five labels into three labels, by collapsing mildly-support and mildly-oppose into support and oppose, respectively. To assess the quality of these annotations, two of the authors independently annotate a random subset of instances in the previous step (328 perspectives for 10 claims). Afterwards, the differences were adjudicated. We measure the accuracy adjudicated results with AMT annotations to estimate the quality of our annotation. This results in an accuracy of 94%, which shows high-agreement with the crowdsourced annotations. To enrich the ways the perspectives are phrased, we crowdsource paraphrases of our perspectives. We ask annotators to generate two paraphrases for each of the 15 perspectives in each HIT, for a reward of $1.50. Subsequently, we perform another round of crowdsourcing to verify the generated paraphrases. We create HITs of 24 candidate paraphrases to be verified, with a reward of $1. Overall, this process gives us INLINEFORM0 paraphrased perspectives. The collected paraphrases form clusters of equivalent perspectives, which we refine further in the later steps. 
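A sketch of the label collapsing and agreement-based filtering used in the stance verification step above (step 2a). The agreement threshold appears only as a placeholder in the text, so it is left as a parameter here; the pairwise agreement measure follows the definition given in the paper's appendix. This is an illustrative reimplementation, not the authors' code.

```python
from collections import Counter

COLLAPSE = {"mildly-support": "support", "mildly-oppose": "oppose"}

def collapse(label):
    """Fold the 'mild' stance labels into their stronger counterparts."""
    return COLLAPSE.get(label, label)

def pairwise_agreement(labels):
    """Fraction of rater-rater pairs that assigned the same label."""
    n = len(labels)
    counts = Counter(labels)
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

def keep_perspective(rater_labels, threshold):
    """Retain a (claim, perspective) pair only if the collapsed labels agree
    at least as much as the (unspecified) threshold requires."""
    return pairwise_agreement([collapse(l) for l in rater_labels]) >= threshold

print(keep_perspective(["support", "mildly-support", "oppose"], threshold=0.5))
# False: the collapsed labels agree on only 1 of 3 rater-rater pairs
```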
In order to ensure that our dataset contains more realistic sentences, we use web search to augment our pool of perspectives with additional sentences that are topically related to what we already have. Specifically, we use Bing search to extract sentences that are similar to our current pool of perspectives, by querying “claim+perspective”. We create a pool of relevant web sentences and use an IR system (introduced earlier) to retrieve the 10 most similar sentences. These candidate perspectives are annotated using (similar to step 2a) and only those that were agreed upon are retained. In a final round of annotation for perspectives, an expert annotator went over all the claims in order to verify that all the equivalent perspectives are clustered together. Subsequently, the expert annotator went over the most similar claim-pairs (and their perspectives), in order to annotate the missing perspectives shared between the two claims. To cut the space of claim pairs, the annotation was done on the top 350 most similar claim pairs retrieved by the IR system. The goal of this step is to decide whether a given evidence paragraph provides enough substantiations for a perspective or not. Performing these annotations exhaustively for any perspective-evidence pair is not possible. Instead, we make use of a retrieval system to annotate only the relevant pairs. In particular, we create an index of all the perspectives retained from step 2a. For a given evidence paragraph, we retrieve the top relevant perspectives. We ask the annotators to note whether a given evidence paragraph supports a given perspective or not. Each HIT contains a 20 evidence paragraphs and their top 8 relevant candidate perspectives. Each HIT is paid $1 and annotated by at least 4 independent annotators. In order to assess the quality of our annotations, a random subset of instances (4 evidence-perspective pairs) are annotated by two independent authors and the differences are adjudicated. We measure the accuracy of our adjudicated labels versus AMT labels, resulting in 87.7%. This indicates the high quality of the crowdsourced data. Statistics on the dataset We now provide a brief summary of [wave]390P[wave]415e[wave]440r[wave]465s[wave]485p[wave]525e[wave]535c[wave]595t[wave]610r[wave]635u[wave]660m. The dataset contains about INLINEFORM0 claims with a significant length diversity (Table TABREF19 ). Additionally, the dataset comes with INLINEFORM1 perspectives, most of which were generated through paraphrasing (step 2b). The perspectives which convey the same point with respect to a claim are grouped into clusters. On average, each cluster has a size of INLINEFORM2 which shows that, on average, many perspectives have equivalents. More granular details are available in Table TABREF19 . To better understand the topical breakdown of claims in the dataset, we crowdsource the set of “topics” associated with each claim (e.g., Law, Ethics, etc.) We observe that, as expected, the three topics of Politics, World, and Society have the biggest portions (Figure FIGREF21 ). Additionally, the included claims touch upon 10+ different topics. Figure FIGREF22 depicts a few popular categories and sampled questions from each. Required skills We perform a closer investigation of the abilities required to solve the stance classification task. One of the authors went through a random subset of claim-perspectives pairs and annotated each with the abilities required in determining their stances labels. 
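The construction steps above repeatedly use an IR system to propose candidates, for example retrieving the 10 web sentences most similar to a "claim+perspective" query. The retrieval model is not specified in this excerpt, so the sketch below uses TF-IDF cosine similarity from scikit-learn as a stand-in; the pool of web sentences is toy data for illustration only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def top_k_similar(query, pool, k=10):
    """Return the k sentences from `pool` most similar to `query`
    under a TF-IDF bag-of-words representation (a stand-in IR system)."""
    vectorizer = TfidfVectorizer()
    pool_matrix = vectorizer.fit_transform(pool)
    query_vector = vectorizer.transform([query])
    sims = cosine_similarity(query_vector, pool_matrix).ravel()
    ranked = sims.argsort()[::-1][:k]
    return [(pool[i], float(sims[i])) for i in ranked]

# The query concatenates a claim and a perspective, as in step 2c.
pool = ["animals are sentient beings and deserve legal protection",
        "zoos educate the public about wildlife",
        "the economy benefits from animal agriculture"]
query = "animals should have lawful rights animals have no interest or rationality"
print(top_k_similar(query, pool, k=2))
```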
We follow the common definitions used in prior work BIBREF37 , BIBREF38 . The result of this annotation is depicted in Figure FIGREF24 . As can be seen, the problem requires understanding of common-sense, i.e., an understanding that is commonly shared among humans and rarely gets explicitly mentioned in the text. Additionally, the task requires various types of coreference understanding, such as event coreference and entity coreference. Empirical Analysis In this section we provide empirical analysis to address the tasks. We create a split of 60%/15%/25% of the data train/dev/test. In order to make sure our baselines are not overfitting to the keywords of each topic (the “topic” annotation from Section SECREF20 ), we make sure to have claims with the same topic fall into the same split. For simplicity, we define a notation which we will extensively use for the rest of this paper. The clusters of equivalent perspectives are denoted as INLINEFORM0 , given a representative member INLINEFORM1 . Let INLINEFORM2 denote the collection of relevant perspectives to a claim INLINEFORM3 , which is the union of all the equivalent perspectives participating in the claim: INLINEFORM4 . Let INLINEFORM5 denote the set of evidence documents lending support to a perspective INLINEFORM6 . Additionally, denote the two pools of perspectives and evidence with INLINEFORM7 and INLINEFORM8 , respectively. Systems We make use of the following systems in our evaluation: (Information Retrieval). This baseline has been successfully used for related tasks like Question Answering BIBREF39 . We create two versions of this baseline: one with the pool of perspectives INLINEFORM0 and one with the pool of evidences INLINEFORM1 . We use this system to retrieve a ranked list of best matching perspective/evidence from the corresponding index. (Contextual representations). A recent state-of-the-art contextualized representation BIBREF40 . This system has been shown to be effective on a broad range of natural language understanding tasks. Human performance provides us with an estimate of the best achievable results on datasets. We use human annotators to measure human performance for each task. We randomly sample 10 claims from the test set, and instruct two expert annotators to solve each of T1 to T4. Evaluation metrics. We perform evaluations on four different subtasks in our dataset. In all of the following evaluations, the systems are given the two pools of perspectives INLINEFORM0 and evidences INLINEFORM1 . A system is expected to return the collection of mutually disjoint perspectives with respect to a given claim. Let INLINEFORM0 be the set of output perspectives. Define the precision and recall as INLINEFORM1 and INLINEFORM2 respectively. To calculate dataset metrics, the aforementioned per-claim metrics are averaged across all the claims in the test set. Given a claim, a system is expected to label every perspective in INLINEFORM0 with one of two labels support or oppose. We use the well-established definitions of precision-recall for this binary classification task. A system is expected to decide whether two given perspectives are equivalent or not, with respect to a given claim. We evaluate this task in a way similar to a clustering problem. For a pair of perspectives INLINEFORM0 , a system predicts whether the two are in the same cluster or not. The ground-truth is whether there is a cluster which contains both of the perspectives or not: INLINEFORM1 . 
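To make the evaluation quantities concrete, the following sketch spells out the per-claim set precision/recall for perspective retrieval and the pairwise same-cluster ground truth used for perspective equivalence. The exact formulas are hidden behind placeholders in the text, so the usual set-overlap definitions are assumed; the toy cluster below is illustrative only.

```python
def perspective_pr(predicted, gold):
    """Per-claim precision/recall of the predicted perspective set,
    assuming the standard set-overlap definitions."""
    predicted, gold = set(predicted), set(gold)
    overlap = len(predicted & gold)
    precision = overlap / len(predicted) if predicted else 0.0
    recall = overlap / len(gold) if gold else 0.0
    return precision, recall

def same_cluster(p1, p2, clusters):
    """Ground truth for equivalence: True iff some cluster contains both perspectives."""
    return any(p1 in c and p2 in c for c in clusters)

clusters = [{"animals lack rationality", "animals have no interest or rationality"},
            {"animals are biologically similar to humans"}]
print(perspective_pr({"p1", "p2"}, {"p1", "p3"}))   # (0.5, 0.5)
print(same_cluster("animals lack rationality",
                   "animals have no interest or rationality", clusters))  # True
```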
We use this pairwise definition for all the pairs in INLINEFORM2 , for any claim INLINEFORM3 in the test set. Given a perspective INLINEFORM0 , we expect a system to return all the evidence INLINEFORM1 from the pool of evidence INLINEFORM2 . Let INLINEFORM3 and INLINEFORM4 be the predicted and gold evidence for a perspective INLINEFORM5 . Define macro-precision and macro-recall as INLINEFORM6 and INLINEFORM7 , respectively. The metrics are averaged across all the perspectives INLINEFORM8 participating in the test set. The goal is to get estimates of the overall performance of the systems. Instead of creating a complex measure that would take all the aspects into account, we approximate the overall performance by multiplying the disjoint measures in INLINEFORM0 , INLINEFORM1 and INLINEFORM2 . While this gives an estimate on the overall quality, it ignores the pipeline structure of the task (e.g., the propagation of the errors throughout the pipeline). We note that the task of INLINEFORM3 (perspective equivalence) is indirectly being measured within INLINEFORM4 . Furthermore, since we do not report an IR performance on INLINEFORM5 , we use the “always supp” baseline instead to estimate an overall performance for IR. Results Table TABREF40 shows a summary of the experimental results. To measure the performance of the IR system, we use the index containing INLINEFORM0 . Given each claim, we query the top INLINEFORM1 perspectives, ranked according to their retrieval scores. We tune INLINEFORM2 on our development set and report the results on the test section according to the tuned parameter. We use IR results as candidates for other solvers (including humans). For this task, IR with top-15 candidates yields INLINEFORM3 90% recall (for the PR-curve, see Figure FIGREF53 in the Appendix). In order to train BERT on this task, we use the IR candidates as the training instances. We then tune a threshold on the dev data to select the top relevant perspectives. In order to measure human performance, we create an interface where two human annotators see IR top- INLINEFORM4 and select a minimal set of perspectives (i.e., no two equivalent perspectives). We measure the quality of perspective stance classification, where the input is a claim-perspective pair, mapped to {support, oppose}. The candidate inputs are generated on the collection of perspectives INLINEFORM0 relevant to a claim INLINEFORM1 . To have an understanding of a lower bound for the metric, we measure the quality of an always-support baseline. We measure the performance of BERT on this task as well, which is about 20% below human performance. This might be because this task requires a deep understanding of commonsense knowledge/reasoning (as indicated earlier in Section SECREF5 ). Since a retrieval system is unlikely to distinguish perspectives with different stances, we do not report the IR performance for this task. We create instances in the form of INLINEFORM0 where INLINEFORM1 . The expected label is whether the two perspectives belong to the same equivalence class or not. In the experiments, we observe that BERT has a significant performance gain of INLINEFORM2 over the IR baseline. Meanwhile, this system is behind human performance by a margin of INLINEFORM3 . We evaluate the systems on the extraction of items from the pool of evidences INLINEFORM0 , given a claim-perspective pair. 
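The evidence metrics are macro-averaged over perspectives, and the overall score is approximated by multiplying the disjoint per-task measures (assumed here to be the perspective-retrieval, stance and evidence scores, since the placeholders hide the exact choice). A minimal sketch with hypothetical numbers; as noted in the text, this ignores error propagation through the pipeline.

```python
def macro_average(pairs):
    """Macro-average (precision, recall) pairs across all perspectives in the test set."""
    precisions, recalls = zip(*pairs)
    return sum(precisions) / len(precisions), sum(recalls) / len(recalls)

def overall_estimate(retrieval_score, stance_score, evidence_score):
    """Approximate overall quality as the product of the disjoint per-task measures."""
    return retrieval_score * stance_score * evidence_score

print(macro_average([(1.0, 0.5), (0.5, 1.0)]))  # (0.75, 0.75)
print(overall_estimate(0.6, 0.8, 0.5))          # 0.24 (hypothetical scores)
```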
To measure the performance of the IR system working with the index containing INLINEFORM1 we issue a query containing the concatenation of a perspective-claim pair. Given the sorted results (according to their retrieval confidence score), we select the top candidates using a threshold parameter tuned on the dev set. We also use the IR system's candidates (top-60) for other baselines. This set of candidates yields a INLINEFORM2 85% recall (for the PR-curve, see Figure FIGREF53 in the Appendix). We train BERT system to map each (gold) claim-perspective pair to its corresponding evidence paragraph(s). Since each evidence paragraph could be long (hence hard to feed into BERT), we split each evidence paragraph into sliding windows of 3 sentences. For each claim-perspective pair, we use all 3-sentences windows of gold evidence paragraphs as positive examples, and rest of the IR candidates as negative examples. In the run-time, if a certain percentage (tuned on the dev set) of the sentences from a given evidence paragraph are predicted as positive by BERT, we consider the whole evidence as positive (i.e. it supports a given perspective). Overall, the performances on this task are lower, which could probably be expected, considering the length of the evidence paragraphs. Similar to the previous scenarios, the BERT solver has a significant gain over a trivial baseline, while standing behind human with a significant margin. Discussion As one of the key consequences of the information revolution, information pollution and over-personalization have already had detrimental effects on our life. In this work, we attempt to facilitate the development of systems that aid in better organization and access to information, with the hope that the access to more diverse information can address over-personalization too BIBREF41 . The dataset presented here is not intended to be exhaustive, nor does it attempt to reflect a true distribution of the important claims and perspectives in the world, or to associate any of the perspective and identified evidence with levels of expertise and trustworthiness. Moreover, it is important to note that when we ask crowd-workers to evaluate the validity of perspectives and evidence, their judgement process can potentially be influenced by their prior beliefs BIBREF42 . To avoid additional biases introduced in the process of dataset construction, we try to take the least restrictive approach in filtering dataset content beyond the necessary quality assurances. For this reason, we choose not to explicitly ask annotators to filter contents based on the intention of their creators (e.g. offensive content). A few algorithmic components were not addressed in this work, although they are important to the complete perspective discovery and presentation pipeline. For instance, one has to first verify that the input to the system is a reasonably well-phrased and an argue-worthy claim. And, to construct the pool of perspectives, one has to extract relevant arguments BIBREF43 . In a similar vein, since our main focus is the study of the relations between claims, perspectives, and evidence, we leave out important issues such as their degree of factuality BIBREF8 or trustworthiness BIBREF44 , BIBREF1 as separate aspects of problem. We hope that some of these challenges and limitations will be addressed in future work. 
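A sketch of the sliding-window preprocessing and the aggregation rule described above for the evidence task; the BERT classifier itself is not shown, and the positive-fraction threshold (tuned on dev in the paper) is left as a parameter.

```python
def sliding_windows(sentences, size=3):
    """Split an evidence paragraph (a list of sentences) into overlapping
    windows of `size` sentences, as fed to the classifier."""
    if len(sentences) <= size:
        return [sentences]
    return [sentences[i:i + size] for i in range(len(sentences) - size + 1)]

def paragraph_is_positive(window_predictions, min_positive_fraction):
    """Aggregate window-level decisions: the whole paragraph counts as supporting
    evidence if enough of its windows are predicted positive."""
    positive = sum(window_predictions)
    return positive / len(window_predictions) >= min_positive_fraction

windows = sliding_windows(["s1", "s2", "s3", "s4", "s5"])
print(len(windows))                                     # 3 windows of 3 sentences
print(paragraph_is_positive([True, False, True], 0.5))  # True
```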
Conclusion The contribution of this work is three-fold. First, we define the problem of substantiated perspective discovery and characterize the language understanding tasks necessary to address it. Second, we combine online resources, web data and crowdsourcing to create a high-quality dataset, in order to drive research on this problem. Finally, we build and evaluate strong baseline supervised systems for this problem. Our hope is that this dataset will bring more attention to this important problem and speed up progress in this direction. There are two aspects that we defer to future work. First, the systems designed here assume that the inputs are valid claim sentences. To make use of such systems, one needs to develop mechanisms to recognize valid argumentative structures. In addition, we ignore trustworthiness and credibility, important research issues that are addressed in other work. Acknowledgments The authors would like to thank Jennifer Sheffield, Stephen Mayhew, Shyam Upadhyay, Nitish Gupta and the anonymous reviewers for insightful comments and suggestions. This work was supported in part by a gift from Google and by Contract HR0011-15-2-0025 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. Statistics We provide brief statistics on the sources of different content in our dataset in Table TABREF46 . In particular, this table shows: the size of the data collected from online debate websites (step 1); the size of the data filtered out (step 2a); the size of the perspectives added by paraphrases (step 2b); and the size of the perspective candidates added by web search (step 2c). Measure of agreement We use the following formula in the calculation of our measure of agreement. For a fixed subject (problem instance), let INLINEFORM0 represent the number of raters who assigned the given subject to the INLINEFORM1 -th category. The measure of agreement is defined as INLINEFORM2 where for INLINEFORM0 . Intuitively, this function measures the concentration of the values in the vector INLINEFORM1 . Take the edge cases: values concentrated: INLINEFORM0 (in other words INLINEFORM1 ) INLINEFORM2 ; least concentration (uniform distribution): INLINEFORM0 . This definition is used in the calculation of more extensive agreement measures (e.g., Fleiss' kappa BIBREF49 ). There are multiple ways of interpreting this formula: it indicates how many rater–rater pairs are in agreement, relative to the number of all possible rater–rater pairs. One can also interpret this measure using simple combinatorial notions. Suppose we have sets INLINEFORM0 which are pairwise disjoint, and for each INLINEFORM1 let INLINEFORM2 . We randomly choose two elements from INLINEFORM3 . Then the probability that they are from the same set is expressed by INLINEFORM4 . We can write INLINEFORM0 in terms of INLINEFORM1 , which is the conventional Chi-Square statistic for testing whether the vector of INLINEFORM2 values comes from the all-categories-equally-likely flat multinomial model.
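The agreement formula itself is lost behind the placeholders above, but the description (the fraction of agreeing rater-rater pairs, maximal when all raters pick one category) matches the standard per-subject agreement used in Fleiss' kappa; the notation below is therefore an assumption rather than a reconstruction of the authors' exact symbols. With $n_j$ raters assigning the subject to category $j$, $k$ categories, and $n = \sum_j n_j$ raters in total:

```latex
P \;=\; \frac{1}{n(n-1)}\sum_{j=1}^{k} n_j\,(n_j-1)
  \;=\; \frac{\sum_{j=1}^{k}\binom{n_j}{2}}{\binom{n}{2}}
% Edge cases: P = 1 when all raters choose the same category;
% P is minimal for the uniform split n_j = n/k, where it equals (n/k - 1)/(n - 1).
```

Under this form, the interpretation given in the text follows directly: the numerator counts agreeing rater-rater pairs and the denominator counts all possible rater-rater pairs.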
Average claim length is 8.9 tokens.
fb96c0cd777bb2961117feca19c6d41bfd8cfd42
fb96c0cd777bb2961117feca19c6d41bfd8cfd42_0
Q: What debate websites did they look at? Text: Introduction Understanding most nontrivial claims requires insights from various perspectives. Today, we make use of search engines or recommendation systems to retrieve information relevant to a claim, but this process carries multiple forms of bias. In particular, they are optimized relative to the claim (query) presented, and the popularity of the relevant documents returned, rather than with respect to the diversity of the perspectives presented in them or whether they are supported by evidence. In this paper, we explore an approach to mitigating this selection bias BIBREF0 when studying (disputed) claims. Consider the claim shown in Figure FIGREF1 : “animals should have lawful rights.” One might compare the biological similarities/differences between humans and other animals to support/oppose the claim. Alternatively, one can base an argument on morality and rationality of animals, or lack thereof. Each of these arguments, which we refer to as perspectives throughout the paper, is an opinion, possibly conditional, in support of a given claim or against it. A perspective thus constitutes a particular attitude towards a given claim. Natural language understanding is at the heart of developing an ability to identify diverse perspectives for claims. In this work, we propose and study a setting that would facilitate discovering diverse perspectives and their supporting evidence with respect to a given claim. Our goal is to identify and formulate the key NLP challenges underlying this task, and develop a dataset that would allow a systematic study of these challenges. For example, for the claim in Figure FIGREF1 , multiple (non-redundant) perspectives should be retrieved from a pool of perspectives; one of them is “animals have no interest or rationality”, a perspective that should be identified as taking an opposing stance with respect to the claim. Each perspective should also be well-supported by evidence found in a pool of potential pieces of evidence. While it might be impractical to provide an exhaustive spectrum of ideas with respect to a claim, presenting a small but diverse set of perspectives could be an important step towards addressing the selection bias problem. Moreover, it would be impractical to develop an exhaustive pool of evidence for all perspectives, from a diverse set of credible sources. We are not attempting to do that. We aim at formulating the core NLP problems, and developing a dataset that will facilitate studying these problems from the NLP angle, realizing that using the outcomes of this research in practice requires addressing issues such as trustworthiness BIBREF1 , BIBREF2 and possibly others. Inherently, our objective requires understanding the relations between perspectives and claims, the nuances in the meaning of various perspectives in the context of claims, and relations between perspectives and evidence. This, we argue, can be done with a diverse enough, but not exhaustive, dataset. And it can be done without attending to the legitimacy and credibility of sources contributing evidence, an important problem but orthogonal to the one studied here. To facilitate the research towards developing solutions to such challenging issues, we propose [wave]390P[wave]415e[wave]440r[wave]465s[wave]485p[wave]525e[wave]535c[wave]595t[wave]610r[wave]635u[wave]660m, a dataset of claims, perspectives and evidence paragraphs. 
For a given claim and pools of perspectives and evidence paragraphs, a hypothetical system is expected to select the relevant perspectives and their supporting paragraphs. Our dataset contains 907 claims, 11,164 perspectives and 8,092 evidence paragraphs. In constructing it, we use online debate websites as our initial seed data, and augment it with search data and paraphrases to make it richer and more challenging. We make extensive use of crowdsourcing to increase the quality of the data and clean it from annotation noise. The contributions of this paper are as follows: Design Principles and Challenges In this section we provide a closer look into the challenge and propose a collection of tasks that move us closer to substantiated perspective discovery. To clarify our description we use to following notation. Let INLINEFORM0 indicate a target claim of interest (for example, the claims INLINEFORM1 and INLINEFORM2 in Figure FIGREF6 ). Each claim INLINEFORM3 is addressed by a collection of perspectives INLINEFORM4 that are grouped into clusters of equivalent perspectives. Additionally, each perspective INLINEFORM5 is supported, relative to INLINEFORM6 , by at least one evidence paragraph INLINEFORM7 , denoted INLINEFORM8 . Creating systems that would address our challenge in its full glory requires solving the following interdependent tasks: Determination of argue-worthy claims: not every claim requires an in-depth discussion of perspectives. For a system to be practical, it needs to be equipped with understanding argumentative structures BIBREF3 in order to discern disputed claims from those with straightforward responses. We set aside this problem in this work and assume that all the inputs to the systems are discussion-worthy claims. Discovery of pertinent perspectives: a system is expected to recognize argumentative sentences BIBREF4 that directly address the points raised in the disputed claim. For example, while the perspectives in Figure FIGREF6 are topically related to the claims, INLINEFORM0 do not directly address the focus of claim INLINEFORM1 (i.e., “use of animals” in “entertainment”). Perspective equivalence: a system is expected to extract a minimal and diverse set of perspectives. This requires the ability to discover equivalent perspectives INLINEFORM0 , with respect to a claim INLINEFORM1 : INLINEFORM2 . For instance, INLINEFORM3 and INLINEFORM4 are equivalent in the context of INLINEFORM5 ; however, they might not be equivalent with respect to any other claim. The conditional nature of perspective equivalence differentiates it from the paraphrasing task BIBREF5 . Stance classification of perspectives: a system is supposed to assess the stances of the perspectives with respect to the given claim (supporting, opposing, etc.) BIBREF6 . Substantiating the perspectives: a system is expected to find valid evidence paragraph(s) in support of each perspective. Conceptually, this is similar to the well-studied problem of textual entailment BIBREF7 except that here the entailment decisions depend on the choice of claims. Dataset construction In this section we describe a multi-step process, constructed with detailed analysis, substantial refinements and multiple pilots studies. We use crowdsourcing to annotate different aspects of the dataset. We used Amazon Mechanical Turk (AMT) for our annotations, restricting the task to workers in five English-speaking countries (USA, UK, Canada, New Zealand, and Australia), more than 1000 finished HITs and at least a 95% acceptance rate. 
To ensure the diversity of responses, we do not require additional qualifications or demographic information from our annotators. For any of the annotations steps described below, the users are guided to an external platform where they first read the instructions and try a verification step to make sure they have understood the instructions. Only after successful completion are they allowed to start the annotation tasks. Throughout our annotations, it is our aim to make sure that the workers are responding objectively to the tasks (as opposed to using their personal opinions or preferences). The screen-shots of the annotation interfaces for each step are included in the Appendix (Section SECREF56 ). In the steps outlined below, we filter out a subset of the data with low rater–rater agreement INLINEFORM0 (see Appendix SECREF47 ). In certain steps, we use an information retrieval (IR) system to generate the best candidates for the task at hand. We start by crawling the content of a few notable debating websites: idebate.com, debatewise.org, procon.org. This yields INLINEFORM0 claims, INLINEFORM1 perspectives and INLINEFORM2 evidence paragraphs (for complete statistics, see Table TABREF46 in the Appendix). This data is significantly noisy and lacks the structure we would like. In the following steps we explain how we denoise it and augment it with additional data. For each perspective we verify that it is a complete English sentence, with a clear stance with respect to the given claim. For a fixed pair of claim and perspective, we ask the crowd-workers to label the perspective with one of the five categories of support, oppose, mildly-support, mildly-oppose, or not a valid perspective. The reason that we ask for two levels of intensity is to distinguish mild or conditional arguments from those that express stronger positions. Every 10 claims (and their relevant perspectives) are bundled to form a HIT. Three independent annotators solve a HIT, and each gets paid $1.5-2 per HIT. To get rid of the ambiguous/noisy perspectives we measure rater-rater agreement on the resulting data and retain only the subset which has a significant agreement of INLINEFORM0 . To account for minor disagreements in the intensity of perspective stances, before measuring any notion of agreement, we collapse the five labels into three labels, by collapsing mildly-support and mildly-oppose into support and oppose, respectively. To assess the quality of these annotations, two of the authors independently annotate a random subset of instances in the previous step (328 perspectives for 10 claims). Afterwards, the differences were adjudicated. We measure the accuracy adjudicated results with AMT annotations to estimate the quality of our annotation. This results in an accuracy of 94%, which shows high-agreement with the crowdsourced annotations. To enrich the ways the perspectives are phrased, we crowdsource paraphrases of our perspectives. We ask annotators to generate two paraphrases for each of the 15 perspectives in each HIT, for a reward of $1.50. Subsequently, we perform another round of crowdsourcing to verify the generated paraphrases. We create HITs of 24 candidate paraphrases to be verified, with a reward of $1. Overall, this process gives us INLINEFORM0 paraphrased perspectives. The collected paraphrases form clusters of equivalent perspectives, which we refine further in the later steps. 
In order to ensure that our dataset contains more realistic sentences, we use web search to augment our pool of perspectives with additional sentences that are topically related to what we already have. Specifically, we use Bing search to extract sentences that are similar to our current pool of perspectives, by querying “claim+perspective”. We create a pool of relevant web sentences and use an IR system (introduced earlier) to retrieve the 10 most similar sentences. These candidate perspectives are annotated using (similar to step 2a) and only those that were agreed upon are retained. In a final round of annotation for perspectives, an expert annotator went over all the claims in order to verify that all the equivalent perspectives are clustered together. Subsequently, the expert annotator went over the most similar claim-pairs (and their perspectives), in order to annotate the missing perspectives shared between the two claims. To cut the space of claim pairs, the annotation was done on the top 350 most similar claim pairs retrieved by the IR system. The goal of this step is to decide whether a given evidence paragraph provides enough substantiations for a perspective or not. Performing these annotations exhaustively for any perspective-evidence pair is not possible. Instead, we make use of a retrieval system to annotate only the relevant pairs. In particular, we create an index of all the perspectives retained from step 2a. For a given evidence paragraph, we retrieve the top relevant perspectives. We ask the annotators to note whether a given evidence paragraph supports a given perspective or not. Each HIT contains a 20 evidence paragraphs and their top 8 relevant candidate perspectives. Each HIT is paid $1 and annotated by at least 4 independent annotators. In order to assess the quality of our annotations, a random subset of instances (4 evidence-perspective pairs) are annotated by two independent authors and the differences are adjudicated. We measure the accuracy of our adjudicated labels versus AMT labels, resulting in 87.7%. This indicates the high quality of the crowdsourced data. Statistics on the dataset We now provide a brief summary of [wave]390P[wave]415e[wave]440r[wave]465s[wave]485p[wave]525e[wave]535c[wave]595t[wave]610r[wave]635u[wave]660m. The dataset contains about INLINEFORM0 claims with a significant length diversity (Table TABREF19 ). Additionally, the dataset comes with INLINEFORM1 perspectives, most of which were generated through paraphrasing (step 2b). The perspectives which convey the same point with respect to a claim are grouped into clusters. On average, each cluster has a size of INLINEFORM2 which shows that, on average, many perspectives have equivalents. More granular details are available in Table TABREF19 . To better understand the topical breakdown of claims in the dataset, we crowdsource the set of “topics” associated with each claim (e.g., Law, Ethics, etc.) We observe that, as expected, the three topics of Politics, World, and Society have the biggest portions (Figure FIGREF21 ). Additionally, the included claims touch upon 10+ different topics. Figure FIGREF22 depicts a few popular categories and sampled questions from each. Required skills We perform a closer investigation of the abilities required to solve the stance classification task. One of the authors went through a random subset of claim-perspectives pairs and annotated each with the abilities required in determining their stances labels. 
We follow the common definitions used in prior work BIBREF37 , BIBREF38 . The result of this annotation is depicted in Figure FIGREF24 . As can be seen, the problem requires understanding of common-sense, i.e., an understanding that is commonly shared among humans and rarely gets explicitly mentioned in the text. Additionally, the task requires various types of coreference understanding, such as event coreference and entity coreference. Empirical Analysis In this section we provide empirical analysis to address the tasks. We create a split of 60%/15%/25% of the data train/dev/test. In order to make sure our baselines are not overfitting to the keywords of each topic (the “topic” annotation from Section SECREF20 ), we make sure to have claims with the same topic fall into the same split. For simplicity, we define a notation which we will extensively use for the rest of this paper. The clusters of equivalent perspectives are denoted as INLINEFORM0 , given a representative member INLINEFORM1 . Let INLINEFORM2 denote the collection of relevant perspectives to a claim INLINEFORM3 , which is the union of all the equivalent perspectives participating in the claim: INLINEFORM4 . Let INLINEFORM5 denote the set of evidence documents lending support to a perspective INLINEFORM6 . Additionally, denote the two pools of perspectives and evidence with INLINEFORM7 and INLINEFORM8 , respectively. Systems We make use of the following systems in our evaluation: (Information Retrieval). This baseline has been successfully used for related tasks like Question Answering BIBREF39 . We create two versions of this baseline: one with the pool of perspectives INLINEFORM0 and one with the pool of evidences INLINEFORM1 . We use this system to retrieve a ranked list of best matching perspective/evidence from the corresponding index. (Contextual representations). A recent state-of-the-art contextualized representation BIBREF40 . This system has been shown to be effective on a broad range of natural language understanding tasks. Human performance provides us with an estimate of the best achievable results on datasets. We use human annotators to measure human performance for each task. We randomly sample 10 claims from the test set, and instruct two expert annotators to solve each of T1 to T4. Evaluation metrics. We perform evaluations on four different subtasks in our dataset. In all of the following evaluations, the systems are given the two pools of perspectives INLINEFORM0 and evidences INLINEFORM1 . A system is expected to return the collection of mutually disjoint perspectives with respect to a given claim. Let INLINEFORM0 be the set of output perspectives. Define the precision and recall as INLINEFORM1 and INLINEFORM2 respectively. To calculate dataset metrics, the aforementioned per-claim metrics are averaged across all the claims in the test set. Given a claim, a system is expected to label every perspective in INLINEFORM0 with one of two labels support or oppose. We use the well-established definitions of precision-recall for this binary classification task. A system is expected to decide whether two given perspectives are equivalent or not, with respect to a given claim. We evaluate this task in a way similar to a clustering problem. For a pair of perspectives INLINEFORM0 , a system predicts whether the two are in the same cluster or not. The ground-truth is whether there is a cluster which contains both of the perspectives or not: INLINEFORM1 . 
We use this pairwise definition for all the pairs in INLINEFORM2, for any claim INLINEFORM3 in the test set. Given a perspective INLINEFORM0, we expect a system to return all the evidence INLINEFORM1 from the pool of evidence INLINEFORM2. Let INLINEFORM3 and INLINEFORM4 be the predicted and gold evidence for a perspective INLINEFORM5. Define macro-precision and macro-recall as INLINEFORM6 and INLINEFORM7, respectively. The metrics are averaged across all the perspectives INLINEFORM8 participating in the test set. The goal is to obtain estimates of the overall performance of the systems. Instead of creating a complex measure that would take all the aspects into account, we approximate the overall performance by multiplying the disjoint measures in INLINEFORM0, INLINEFORM1 and INLINEFORM2. While this gives an estimate of the overall quality, it ignores the pipeline structure of the task (e.g., the propagation of errors through the pipeline). We note that the task of INLINEFORM3 (perspective equivalence) is measured indirectly within INLINEFORM4. Furthermore, since we do not report IR performance on INLINEFORM5, we use the “always support” baseline instead to estimate an overall performance for IR. Results Table TABREF40 shows a summary of the experimental results. To measure the performance of the IR system, we use the index containing INLINEFORM0. Given each claim, we query the top INLINEFORM1 perspectives, ranked according to their retrieval scores. We tune INLINEFORM2 on our development set and report results on the test set using the tuned parameter. We use the IR results as candidates for other solvers (including humans). For this task, IR with top-15 candidates yields INLINEFORM3 90% recall (for the PR curve, see Figure FIGREF53 in the Appendix). In order to train BERT on this task, we use the IR candidates as the training instances. We then tune a threshold on the dev data to select the top relevant perspectives. In order to measure human performance, we create an interface where two human annotators see the top INLINEFORM4 IR candidates and select a minimal set of perspectives (i.e., no two equivalent perspectives). We measure the quality of perspective stance classification, where the input is a claim-perspective pair, mapped to {support, oppose}. The candidate inputs are generated from the collection of perspectives INLINEFORM0 relevant to a claim INLINEFORM1. To establish a lower bound for the metric, we measure the quality of an always-support baseline. We measure the performance of BERT on this task as well, which is about 20% below human performance. This might be because this task requires a deep understanding of commonsense knowledge/reasoning (as indicated earlier in Section SECREF5). Since a retrieval system is unlikely to distinguish perspectives with different stances, we do not report IR performance for this task. We create instances in the form of INLINEFORM0 where INLINEFORM1. The expected label is whether the two perspectives belong to the same equivalence class or not. In the experiments, we observe that BERT has a significant performance gain of INLINEFORM2 over the IR baseline. Meanwhile, this system is behind human performance by a margin of INLINEFORM3. We evaluate the systems on the extraction of items from the pool of evidence INLINEFORM0, given a claim-perspective pair.
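To make the clustering-style evaluation of perspective equivalence concrete, here is a minimal sketch of the pairwise scoring described above; the cluster and id data structures are hypothetical stand-ins for the dataset format.

```python
# Pairwise evaluation of perspective equivalence for a single claim: the gold label
# for a pair of perspectives is whether some gold cluster contains both of them,
# and a system's same-cluster predictions are scored with precision/recall/F1.
from itertools import combinations


def pairwise_equivalence_prf(gold_clusters, pred_clusters, perspective_ids):
    """gold_clusters / pred_clusters: iterables of sets of perspective ids."""
    def same_cluster(clusters, a, b):
        return any(a in c and b in c for c in clusters)

    tp = fp = fn = 0
    for a, b in combinations(sorted(perspective_ids), 2):
        gold = same_cluster(gold_clusters, a, b)
        pred = same_cluster(pred_clusters, a, b)
        tp += gold and pred
        fp += pred and not gold
        fn += gold and not pred
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```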
To measure the performance of the IR system working with the index containing INLINEFORM1, we issue a query containing the concatenation of a perspective-claim pair. Given the sorted results (according to their retrieval confidence score), we select the top candidates using a threshold parameter tuned on the dev set. We also use the IR system's candidates (top-60) for other baselines. This set of candidates yields INLINEFORM2 85% recall (for the PR curve, see Figure FIGREF53 in the Appendix). We train a BERT system to map each (gold) claim-perspective pair to its corresponding evidence paragraph(s). Since each evidence paragraph could be long (hence hard to feed into BERT), we split each evidence paragraph into sliding windows of 3 sentences. For each claim-perspective pair, we use all 3-sentence windows of gold evidence paragraphs as positive examples, and the rest of the IR candidates as negative examples. At run time, if a certain percentage (tuned on the dev set) of the sentences from a given evidence paragraph are predicted as positive by BERT, we consider the whole evidence paragraph as positive (i.e., it supports the given perspective). Overall, performance on this task is lower, which is to be expected considering the length of the evidence paragraphs. As in the previous scenarios, the BERT solver shows a significant gain over a trivial baseline, while trailing human performance by a significant margin. Discussion As one of the key consequences of the information revolution, information pollution and over-personalization have already had detrimental effects on our lives. In this work, we attempt to facilitate the development of systems that aid in better organization of and access to information, with the hope that access to more diverse information can also address over-personalization BIBREF41. The dataset presented here is not intended to be exhaustive, nor does it attempt to reflect a true distribution of the important claims and perspectives in the world, or to associate any of the perspectives and identified evidence with levels of expertise and trustworthiness. Moreover, it is important to note that when we ask crowd-workers to evaluate the validity of perspectives and evidence, their judgement process can potentially be influenced by their prior beliefs BIBREF42. To avoid additional biases introduced in the process of dataset construction, we try to take the least restrictive approach in filtering dataset content beyond the necessary quality assurances. For this reason, we choose not to explicitly ask annotators to filter content based on the intention of its creators (e.g., offensive content). A few algorithmic components were not addressed in this work, although they are important to the complete perspective discovery and presentation pipeline. For instance, one has to first verify that the input to the system is a reasonably well-phrased, argue-worthy claim. And, to construct the pool of perspectives, one has to extract relevant arguments BIBREF43. In a similar vein, since our main focus is the study of the relations between claims, perspectives, and evidence, we leave out important issues such as their degree of factuality BIBREF8 or trustworthiness BIBREF44, BIBREF1 as separate aspects of the problem. We hope that some of these challenges and limitations will be addressed in future work.
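The window construction and run-time aggregation just described can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sentence splitter is deliberately naive, classify_window stands in for the trained BERT classifier, and the aggregation threshold plays the role of the dev-tuned percentage mentioned above.

```python
# Build sliding 3-sentence windows from an evidence paragraph, then aggregate
# window-level BERT predictions into a paragraph-level support decision.
def sentence_windows(paragraph, size=3):
    sentences = [s.strip() for s in paragraph.split(".") if s.strip()]
    if len(sentences) <= size:
        return [". ".join(sentences)]
    return [". ".join(sentences[i:i + size]) for i in range(len(sentences) - size + 1)]


def paragraph_supports(claim_perspective, paragraph, classify_window, threshold=0.5):
    """classify_window(claim_perspective, window) -> 1 if predicted positive, else 0."""
    windows = sentence_windows(paragraph)
    positives = sum(classify_window(claim_perspective, window) for window in windows)
    return positives / len(windows) >= threshold
```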
Conclusion The importance of this work is three-fold: we define the problem of substantiated perspective discovery and characterize the language understanding tasks necessary to address this problem. We combine online resources, web data and crowdsourcing and create a high-quality dataset, in order to drive research on this problem. Finally, we build and evaluate strong baseline supervised systems for this problem. Our hope is that this dataset will bring more attention to this important problem and speed up progress in this direction. There are two aspects that we defer to future work. First, the systems designed here assume that the inputs are valid claim sentences. To make use of such systems, one needs to develop mechanisms to recognize valid argumentative structures. In addition, we ignore trustworthiness and credibility issues, important research issues that are addressed in other works. Acknowledgments The authors would like to thank Jennifer Sheffield, Stephen Mayhew, Shyam Upadhyay, Nitish Gupta and the anonymous reviewers for insightful comments and suggestions. This work was supported in part by a gift from Google and by Contract HR0011-15-2-0025 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. Statistics We provide brief statistics on the sources of different content in our dataset in Table TABREF46. In particular, this table shows: the size of the data collected from online debate websites (step 1); the size of the data filtered out (step 2a); the size of the perspectives added through paraphrasing (step 2b); and the size of the perspective candidates added from the web (step 2c). Measure of agreement We use the following definition in the calculation of our measure of agreement. For a fixed subject (problem instance), let INLINEFORM0 represent the number of raters who assigned the given subject to the INLINEFORM1-th category. The measure of agreement is defined as INLINEFORM2, where INLINEFORM0. Intuitively, this function measures the concentration of the values in the vector INLINEFORM1. Consider the edge cases. Values concentrated: INLINEFORM0 (in other words, INLINEFORM1) gives INLINEFORM2. Least concentration (uniform distribution): INLINEFORM0. This definition is used in the calculation of more extensive agreement measures (e.g., Fleiss' kappa BIBREF49). There are multiple ways of interpreting this formula: It indicates how many rater–rater pairs are in agreement, relative to the number of all possible rater–rater pairs. One can also interpret this measure via a simple combinatorial notion. Suppose we have sets INLINEFORM0 which are pairwise disjoint, and for each INLINEFORM1 let INLINEFORM2. We randomly choose two elements from INLINEFORM3. Then the probability that they are from the same set is expressed by INLINEFORM4. We can also write INLINEFORM0 in terms of INLINEFORM1, which is the conventional chi-square statistic for testing whether the vector of INLINEFORM2 values comes from the all-categories-equally-likely flat multinomial model.
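The agreement formula itself was lost in extraction (the INLINEFORM placeholders above). The description (the fraction of rater–rater pairs in agreement, as used in Fleiss' kappa) matches the standard per-subject agreement, reproduced below as an assumed reconstruction rather than the paper's exact notation, with n raters, k categories, and n_j raters choosing category j.

```latex
% Assumed reconstruction of the per-subject agreement (the quantity behind Fleiss' kappa):
% n raters, k categories, n_j = number of raters who chose category j, \sum_j n_j = n.
P \;=\; \frac{1}{n(n-1)} \sum_{j=1}^{k} n_j \,(n_j - 1)
```

This is consistent with the edge cases mentioned above: if all raters pick the same category, P = 1; a uniform spread over the k categories gives the minimum value (n - k)/(k(n - 1)).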
idebate.com, debatewise.org, procon.org
534f69c8c90467d5aa4e38d7c25c53dbc94f4b24
534f69c8c90467d5aa4e38d7c25c53dbc94f4b24_0
Q: What crowdsourcing platform did they use?
Amazon Mechanical Turk (AMT)
090f2b941b9c5b6b7c34ae18c2cc97e9650f1f0b
090f2b941b9c5b6b7c34ae18c2cc97e9650f1f0b_0
Q: Which machine baselines are used?
For a given claim and pools of perspectives and evidence paragraphs, a hypothetical system is expected to select the relevant perspectives and their supporting paragraphs. Our dataset contains 907 claims, 11,164 perspectives and 8,092 evidence paragraphs. In constructing it, we use online debate websites as our initial seed data, and augment it with search data and paraphrases to make it richer and more challenging. We make extensive use of crowdsourcing to increase the quality of the data and clean it from annotation noise. The contributions of this paper are as follows: Design Principles and Challenges In this section we provide a closer look into the challenge and propose a collection of tasks that move us closer to substantiated perspective discovery. To clarify our description we use to following notation. Let INLINEFORM0 indicate a target claim of interest (for example, the claims INLINEFORM1 and INLINEFORM2 in Figure FIGREF6 ). Each claim INLINEFORM3 is addressed by a collection of perspectives INLINEFORM4 that are grouped into clusters of equivalent perspectives. Additionally, each perspective INLINEFORM5 is supported, relative to INLINEFORM6 , by at least one evidence paragraph INLINEFORM7 , denoted INLINEFORM8 . Creating systems that would address our challenge in its full glory requires solving the following interdependent tasks: Determination of argue-worthy claims: not every claim requires an in-depth discussion of perspectives. For a system to be practical, it needs to be equipped with understanding argumentative structures BIBREF3 in order to discern disputed claims from those with straightforward responses. We set aside this problem in this work and assume that all the inputs to the systems are discussion-worthy claims. Discovery of pertinent perspectives: a system is expected to recognize argumentative sentences BIBREF4 that directly address the points raised in the disputed claim. For example, while the perspectives in Figure FIGREF6 are topically related to the claims, INLINEFORM0 do not directly address the focus of claim INLINEFORM1 (i.e., “use of animals” in “entertainment”). Perspective equivalence: a system is expected to extract a minimal and diverse set of perspectives. This requires the ability to discover equivalent perspectives INLINEFORM0 , with respect to a claim INLINEFORM1 : INLINEFORM2 . For instance, INLINEFORM3 and INLINEFORM4 are equivalent in the context of INLINEFORM5 ; however, they might not be equivalent with respect to any other claim. The conditional nature of perspective equivalence differentiates it from the paraphrasing task BIBREF5 . Stance classification of perspectives: a system is supposed to assess the stances of the perspectives with respect to the given claim (supporting, opposing, etc.) BIBREF6 . Substantiating the perspectives: a system is expected to find valid evidence paragraph(s) in support of each perspective. Conceptually, this is similar to the well-studied problem of textual entailment BIBREF7 except that here the entailment decisions depend on the choice of claims. Dataset construction In this section we describe a multi-step process, constructed with detailed analysis, substantial refinements and multiple pilots studies. We use crowdsourcing to annotate different aspects of the dataset. We used Amazon Mechanical Turk (AMT) for our annotations, restricting the task to workers in five English-speaking countries (USA, UK, Canada, New Zealand, and Australia), more than 1000 finished HITs and at least a 95% acceptance rate. 
To ensure the diversity of responses, we do not require additional qualifications or demographic information from our annotators. For any of the annotations steps described below, the users are guided to an external platform where they first read the instructions and try a verification step to make sure they have understood the instructions. Only after successful completion are they allowed to start the annotation tasks. Throughout our annotations, it is our aim to make sure that the workers are responding objectively to the tasks (as opposed to using their personal opinions or preferences). The screen-shots of the annotation interfaces for each step are included in the Appendix (Section SECREF56 ). In the steps outlined below, we filter out a subset of the data with low rater–rater agreement INLINEFORM0 (see Appendix SECREF47 ). In certain steps, we use an information retrieval (IR) system to generate the best candidates for the task at hand. We start by crawling the content of a few notable debating websites: idebate.com, debatewise.org, procon.org. This yields INLINEFORM0 claims, INLINEFORM1 perspectives and INLINEFORM2 evidence paragraphs (for complete statistics, see Table TABREF46 in the Appendix). This data is significantly noisy and lacks the structure we would like. In the following steps we explain how we denoise it and augment it with additional data. For each perspective we verify that it is a complete English sentence, with a clear stance with respect to the given claim. For a fixed pair of claim and perspective, we ask the crowd-workers to label the perspective with one of the five categories of support, oppose, mildly-support, mildly-oppose, or not a valid perspective. The reason that we ask for two levels of intensity is to distinguish mild or conditional arguments from those that express stronger positions. Every 10 claims (and their relevant perspectives) are bundled to form a HIT. Three independent annotators solve a HIT, and each gets paid $1.5-2 per HIT. To get rid of the ambiguous/noisy perspectives we measure rater-rater agreement on the resulting data and retain only the subset which has a significant agreement of INLINEFORM0 . To account for minor disagreements in the intensity of perspective stances, before measuring any notion of agreement, we collapse the five labels into three labels, by collapsing mildly-support and mildly-oppose into support and oppose, respectively. To assess the quality of these annotations, two of the authors independently annotate a random subset of instances in the previous step (328 perspectives for 10 claims). Afterwards, the differences were adjudicated. We measure the accuracy adjudicated results with AMT annotations to estimate the quality of our annotation. This results in an accuracy of 94%, which shows high-agreement with the crowdsourced annotations. To enrich the ways the perspectives are phrased, we crowdsource paraphrases of our perspectives. We ask annotators to generate two paraphrases for each of the 15 perspectives in each HIT, for a reward of $1.50. Subsequently, we perform another round of crowdsourcing to verify the generated paraphrases. We create HITs of 24 candidate paraphrases to be verified, with a reward of $1. Overall, this process gives us INLINEFORM0 paraphrased perspectives. The collected paraphrases form clusters of equivalent perspectives, which we refine further in the later steps. 
In order to ensure that our dataset contains more realistic sentences, we use web search to augment our pool of perspectives with additional sentences that are topically related to what we already have. Specifically, we use Bing search to extract sentences that are similar to our current pool of perspectives, by querying “claim+perspective”. We create a pool of relevant web sentences and use an IR system (introduced earlier) to retrieve the 10 most similar sentences. These candidate perspectives are annotated using (similar to step 2a) and only those that were agreed upon are retained. In a final round of annotation for perspectives, an expert annotator went over all the claims in order to verify that all the equivalent perspectives are clustered together. Subsequently, the expert annotator went over the most similar claim-pairs (and their perspectives), in order to annotate the missing perspectives shared between the two claims. To cut the space of claim pairs, the annotation was done on the top 350 most similar claim pairs retrieved by the IR system. The goal of this step is to decide whether a given evidence paragraph provides enough substantiations for a perspective or not. Performing these annotations exhaustively for any perspective-evidence pair is not possible. Instead, we make use of a retrieval system to annotate only the relevant pairs. In particular, we create an index of all the perspectives retained from step 2a. For a given evidence paragraph, we retrieve the top relevant perspectives. We ask the annotators to note whether a given evidence paragraph supports a given perspective or not. Each HIT contains a 20 evidence paragraphs and their top 8 relevant candidate perspectives. Each HIT is paid $1 and annotated by at least 4 independent annotators. In order to assess the quality of our annotations, a random subset of instances (4 evidence-perspective pairs) are annotated by two independent authors and the differences are adjudicated. We measure the accuracy of our adjudicated labels versus AMT labels, resulting in 87.7%. This indicates the high quality of the crowdsourced data. Statistics on the dataset We now provide a brief summary of [wave]390P[wave]415e[wave]440r[wave]465s[wave]485p[wave]525e[wave]535c[wave]595t[wave]610r[wave]635u[wave]660m. The dataset contains about INLINEFORM0 claims with a significant length diversity (Table TABREF19 ). Additionally, the dataset comes with INLINEFORM1 perspectives, most of which were generated through paraphrasing (step 2b). The perspectives which convey the same point with respect to a claim are grouped into clusters. On average, each cluster has a size of INLINEFORM2 which shows that, on average, many perspectives have equivalents. More granular details are available in Table TABREF19 . To better understand the topical breakdown of claims in the dataset, we crowdsource the set of “topics” associated with each claim (e.g., Law, Ethics, etc.) We observe that, as expected, the three topics of Politics, World, and Society have the biggest portions (Figure FIGREF21 ). Additionally, the included claims touch upon 10+ different topics. Figure FIGREF22 depicts a few popular categories and sampled questions from each. Required skills We perform a closer investigation of the abilities required to solve the stance classification task. One of the authors went through a random subset of claim-perspectives pairs and annotated each with the abilities required in determining their stances labels. 
We follow the common definitions used in prior work BIBREF37 , BIBREF38 . The result of this annotation is depicted in Figure FIGREF24 . As can be seen, the problem requires understanding of common-sense, i.e., an understanding that is commonly shared among humans and rarely gets explicitly mentioned in the text. Additionally, the task requires various types of coreference understanding, such as event coreference and entity coreference. Empirical Analysis In this section we provide empirical analysis to address the tasks. We create a split of 60%/15%/25% of the data train/dev/test. In order to make sure our baselines are not overfitting to the keywords of each topic (the “topic” annotation from Section SECREF20 ), we make sure to have claims with the same topic fall into the same split. For simplicity, we define a notation which we will extensively use for the rest of this paper. The clusters of equivalent perspectives are denoted as INLINEFORM0 , given a representative member INLINEFORM1 . Let INLINEFORM2 denote the collection of relevant perspectives to a claim INLINEFORM3 , which is the union of all the equivalent perspectives participating in the claim: INLINEFORM4 . Let INLINEFORM5 denote the set of evidence documents lending support to a perspective INLINEFORM6 . Additionally, denote the two pools of perspectives and evidence with INLINEFORM7 and INLINEFORM8 , respectively. Systems We make use of the following systems in our evaluation: (Information Retrieval). This baseline has been successfully used for related tasks like Question Answering BIBREF39 . We create two versions of this baseline: one with the pool of perspectives INLINEFORM0 and one with the pool of evidences INLINEFORM1 . We use this system to retrieve a ranked list of best matching perspective/evidence from the corresponding index. (Contextual representations). A recent state-of-the-art contextualized representation BIBREF40 . This system has been shown to be effective on a broad range of natural language understanding tasks. Human performance provides us with an estimate of the best achievable results on datasets. We use human annotators to measure human performance for each task. We randomly sample 10 claims from the test set, and instruct two expert annotators to solve each of T1 to T4. Evaluation metrics. We perform evaluations on four different subtasks in our dataset. In all of the following evaluations, the systems are given the two pools of perspectives INLINEFORM0 and evidences INLINEFORM1 . A system is expected to return the collection of mutually disjoint perspectives with respect to a given claim. Let INLINEFORM0 be the set of output perspectives. Define the precision and recall as INLINEFORM1 and INLINEFORM2 respectively. To calculate dataset metrics, the aforementioned per-claim metrics are averaged across all the claims in the test set. Given a claim, a system is expected to label every perspective in INLINEFORM0 with one of two labels support or oppose. We use the well-established definitions of precision-recall for this binary classification task. A system is expected to decide whether two given perspectives are equivalent or not, with respect to a given claim. We evaluate this task in a way similar to a clustering problem. For a pair of perspectives INLINEFORM0 , a system predicts whether the two are in the same cluster or not. The ground-truth is whether there is a cluster which contains both of the perspectives or not: INLINEFORM1 . 
We use this pairwise definition for all the pairs in INLINEFORM2 , for any claim INLINEFORM3 in the test set. Given a perspective INLINEFORM0 , we expect a system to return all the evidence INLINEFORM1 from the pool of evidence INLINEFORM2 . Let INLINEFORM3 and INLINEFORM4 be the predicted and gold evidence for a perspective INLINEFORM5 . Define macro-precision and macro-recall as INLINEFORM6 and INLINEFORM7 , respectively. The metrics are averaged across all the perspectives INLINEFORM8 participating in the test set. The goal is to get estimates of the overall performance of the systems. Instead of creating a complex measure that would take all the aspects into account, we approximate the overall performance by multiplying the disjoint measures in INLINEFORM0 , INLINEFORM1 and INLINEFORM2 . While this gives an estimate on the overall quality, it ignores the pipeline structure of the task (e.g., the propagation of the errors throughout the pipeline). We note that the task of INLINEFORM3 (perspective equivalence) is indirectly being measured within INLINEFORM4 . Furthermore, since we do not report an IR performance on INLINEFORM5 , we use the “always supp” baseline instead to estimate an overall performance for IR. Results Table TABREF40 shows a summary of the experimental results. To measure the performance of the IR system, we use the index containing INLINEFORM0 . Given each claim, we query the top INLINEFORM1 perspectives, ranked according to their retrieval scores. We tune INLINEFORM2 on our development set and report the results on the test section according to the tuned parameter. We use IR results as candidates for other solvers (including humans). For this task, IR with top-15 candidates yields INLINEFORM3 90% recall (for the PR-curve, see Figure FIGREF53 in the Appendix). In order to train BERT on this task, we use the IR candidates as the training instances. We then tune a threshold on the dev data to select the top relevant perspectives. In order to measure human performance, we create an interface where two human annotators see IR top- INLINEFORM4 and select a minimal set of perspectives (i.e., no two equivalent perspectives). We measure the quality of perspective stance classification, where the input is a claim-perspective pair, mapped to {support, oppose}. The candidate inputs are generated on the collection of perspectives INLINEFORM0 relevant to a claim INLINEFORM1 . To have an understanding of a lower bound for the metric, we measure the quality of an always-support baseline. We measure the performance of BERT on this task as well, which is about 20% below human performance. This might be because this task requires a deep understanding of commonsense knowledge/reasoning (as indicated earlier in Section SECREF5 ). Since a retrieval system is unlikely to distinguish perspectives with different stances, we do not report the IR performance for this task. We create instances in the form of INLINEFORM0 where INLINEFORM1 . The expected label is whether the two perspectives belong to the same equivalence class or not. In the experiments, we observe that BERT has a significant performance gain of INLINEFORM2 over the IR baseline. Meanwhile, this system is behind human performance by a margin of INLINEFORM3 . We evaluate the systems on the extraction of items from the pool of evidences INLINEFORM0 , given a claim-perspective pair. 
To measure the performance of the IR system working with the index containing INLINEFORM1 we issue a query containing the concatenation of a perspective-claim pair. Given the sorted results (according to their retrieval confidence score), we select the top candidates using a threshold parameter tuned on the dev set. We also use the IR system's candidates (top-60) for other baselines. This set of candidates yields a INLINEFORM2 85% recall (for the PR-curve, see Figure FIGREF53 in the Appendix). We train BERT system to map each (gold) claim-perspective pair to its corresponding evidence paragraph(s). Since each evidence paragraph could be long (hence hard to feed into BERT), we split each evidence paragraph into sliding windows of 3 sentences. For each claim-perspective pair, we use all 3-sentences windows of gold evidence paragraphs as positive examples, and rest of the IR candidates as negative examples. In the run-time, if a certain percentage (tuned on the dev set) of the sentences from a given evidence paragraph are predicted as positive by BERT, we consider the whole evidence as positive (i.e. it supports a given perspective). Overall, the performances on this task are lower, which could probably be expected, considering the length of the evidence paragraphs. Similar to the previous scenarios, the BERT solver has a significant gain over a trivial baseline, while standing behind human with a significant margin. Discussion As one of the key consequences of the information revolution, information pollution and over-personalization have already had detrimental effects on our life. In this work, we attempt to facilitate the development of systems that aid in better organization and access to information, with the hope that the access to more diverse information can address over-personalization too BIBREF41 . The dataset presented here is not intended to be exhaustive, nor does it attempt to reflect a true distribution of the important claims and perspectives in the world, or to associate any of the perspective and identified evidence with levels of expertise and trustworthiness. Moreover, it is important to note that when we ask crowd-workers to evaluate the validity of perspectives and evidence, their judgement process can potentially be influenced by their prior beliefs BIBREF42 . To avoid additional biases introduced in the process of dataset construction, we try to take the least restrictive approach in filtering dataset content beyond the necessary quality assurances. For this reason, we choose not to explicitly ask annotators to filter contents based on the intention of their creators (e.g. offensive content). A few algorithmic components were not addressed in this work, although they are important to the complete perspective discovery and presentation pipeline. For instance, one has to first verify that the input to the system is a reasonably well-phrased and an argue-worthy claim. And, to construct the pool of perspectives, one has to extract relevant arguments BIBREF43 . In a similar vein, since our main focus is the study of the relations between claims, perspectives, and evidence, we leave out important issues such as their degree of factuality BIBREF8 or trustworthiness BIBREF44 , BIBREF1 as separate aspects of problem. We hope that some of these challenges and limitations will be addressed in future work. 
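Returning to the evidence-substantiation baseline described above, the 3-sentence sliding-window aggregation at inference time can be sketched as follows. The 0.5 fraction is only a placeholder for the percentage tuned on the dev set, and the second function assumes per-window positive/negative predictions coming from an already trained classifier.

```python
def sliding_windows(sentences, size=3):
    """Split an evidence paragraph (a list of sentences) into 3-sentence windows."""
    if len(sentences) <= size:
        return [sentences]
    return [sentences[i:i + size] for i in range(len(sentences) - size + 1)]


def paragraph_supports_perspective(window_predictions, min_positive_fraction=0.5):
    """Label the whole paragraph as supporting evidence if a sufficient fraction
    of its windows is predicted positive; the fraction is a dev-tuned
    hyper-parameter in the paper, 0.5 here is illustrative."""
    if not window_predictions:
        return False
    positive = sum(1 for p in window_predictions if p)
    return positive / len(window_predictions) >= min_positive_fraction
```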
Conclusion The importance of this work is three-fold; we define the problem of substantiated perspective discovery and characterize language understanding tasks necessary to address this problem. We combine online resources, web data and crowdsourcing and create a high-quality dataset, in order to drive research on this problem. Finally, we build and evaluate strong baseline supervised systems for this problem. Our hope is that this dataset will bring more attention to this important problem and speed up progress in this direction. There are two aspects that we defer to future work. First, the systems designed here assumed that the inputs are valid claim sentences. To make use of such systems, one needs to develop mechanisms to recognize valid argumentative structures. In addition, we ignore trustworthiness and credibility issues, important research issues that are addressed in other works. Acknowledgments The authors would like to thank Jennifer Sheffield, Stephen Mayhew, Shyam Upadhyay, Nitish Gupta and the anonymous reviewers for insightful comments and suggestions. This work was supported in part by a gift from Google and by Contract HR0011-15-2-0025 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. Statistics We provide brief statistics on the sources of different content in our dataset in Table TABREF46 . In particular, this table shows: the size of the data collected from online debate websites (step 1); the size of the data filtered out (step 2a); the size of the perspectives added by paraphrases (step 2b); and the size of the perspective candidates added by web search (step 2c). Measure of agreement We use the following definition in the calculation of our measure of agreement. For a fixed subject (problem instance), let INLINEFORM0 represent the number of raters who assigned the given subject to the INLINEFORM1 -th category. The measure of agreement is defined as INLINEFORM2 where for INLINEFORM0 . Intuitively, this function measures the concentration of the values in the vector INLINEFORM1 . Take the edge cases: Values concentrated: INLINEFORM0 (in other words INLINEFORM1 ) INLINEFORM2 . Least concentration (uniform distribution): INLINEFORM0 . This definition is used in the calculation of more extensive agreement measures (e.g., Fleiss' kappa BIBREF49 ). There are multiple ways of interpreting this formula: It indicates how many rater–rater pairs are in agreement, relative to the number of all possible rater–rater pairs. One can also interpret this measure through a simple combinatorial argument. Suppose we have sets INLINEFORM0 which are pairwise disjoint and for each INLINEFORM1 let INLINEFORM2 . We choose two elements from INLINEFORM3 at random. Then the probability that they are from the same set is expressed by INLINEFORM4 . We can write INLINEFORM0 in terms of INLINEFORM1 , which is the conventional Chi-Square statistic for testing whether the vector of INLINEFORM2 values comes from the all-categories-equally-likely flat multinomial model.
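The agreement measure described verbally above matches the standard per-item agreement that underlies Fleiss' kappa; the reconstruction below is our reading of the garbled formula, not a verbatim copy of the authors' code.

```python
def item_agreement(category_counts):
    """Per-item rater agreement, as used for Fleiss-style measures.

    category_counts[j] = number of raters assigning the item to category j.
    Equals 1.0 when all raters pick the same category, reaches its minimum
    (n/k - 1)/(n - 1) for a uniform spread over k categories, and can be read
    as the probability that two raters drawn at random (without replacement)
    agree on this item.
    """
    n = sum(category_counts)
    if n < 2:
        raise ValueError("need at least two raters")
    return sum(c * (c - 1) for c in category_counts) / (n * (n - 1))


print(item_agreement([4, 0]))  # full agreement -> 1.0
print(item_agreement([2, 2]))  # uniform spread of 4 raters over 2 categories -> 0.333...
```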
Information Retrieval
5e032de729ce9fc727b547e3064be04d30009324
5e032de729ce9fc727b547e3064be04d30009324_0
Q: What challenges are highlighted? Text: Introduction Understanding most nontrivial claims requires insights from various perspectives. Today, we make use of search engines or recommendation systems to retrieve information relevant to a claim, but this process carries multiple forms of bias. In particular, they are optimized relative to the claim (query) presented, and the popularity of the relevant documents returned, rather than with respect to the diversity of the perspectives presented in them or whether they are supported by evidence. In this paper, we explore an approach to mitigating this selection bias BIBREF0 when studying (disputed) claims. Consider the claim shown in Figure FIGREF1 : “animals should have lawful rights.” One might compare the biological similarities/differences between humans and other animals to support/oppose the claim. Alternatively, one can base an argument on morality and rationality of animals, or lack thereof. Each of these arguments, which we refer to as perspectives throughout the paper, is an opinion, possibly conditional, in support of a given claim or against it. A perspective thus constitutes a particular attitude towards a given claim. Natural language understanding is at the heart of developing an ability to identify diverse perspectives for claims. In this work, we propose and study a setting that would facilitate discovering diverse perspectives and their supporting evidence with respect to a given claim. Our goal is to identify and formulate the key NLP challenges underlying this task, and develop a dataset that would allow a systematic study of these challenges. For example, for the claim in Figure FIGREF1 , multiple (non-redundant) perspectives should be retrieved from a pool of perspectives; one of them is “animals have no interest or rationality”, a perspective that should be identified as taking an opposing stance with respect to the claim. Each perspective should also be well-supported by evidence found in a pool of potential pieces of evidence. While it might be impractical to provide an exhaustive spectrum of ideas with respect to a claim, presenting a small but diverse set of perspectives could be an important step towards addressing the selection bias problem. Moreover, it would be impractical to develop an exhaustive pool of evidence for all perspectives, from a diverse set of credible sources. We are not attempting to do that. We aim at formulating the core NLP problems, and developing a dataset that will facilitate studying these problems from the NLP angle, realizing that using the outcomes of this research in practice requires addressing issues such as trustworthiness BIBREF1 , BIBREF2 and possibly others. Inherently, our objective requires understanding the relations between perspectives and claims, the nuances in the meaning of various perspectives in the context of claims, and relations between perspectives and evidence. This, we argue, can be done with a diverse enough, but not exhaustive, dataset. And it can be done without attending to the legitimacy and credibility of sources contributing evidence, an important problem but orthogonal to the one studied here. To facilitate the research towards developing solutions to such challenging issues, we propose [wave]390P[wave]415e[wave]440r[wave]465s[wave]485p[wave]525e[wave]535c[wave]595t[wave]610r[wave]635u[wave]660m, a dataset of claims, perspectives and evidence paragraphs. 
For a given claim and pools of perspectives and evidence paragraphs, a hypothetical system is expected to select the relevant perspectives and their supporting paragraphs. Our dataset contains 907 claims, 11,164 perspectives and 8,092 evidence paragraphs. In constructing it, we use online debate websites as our initial seed data, and augment it with search data and paraphrases to make it richer and more challenging. We make extensive use of crowdsourcing to increase the quality of the data and clean it from annotation noise. The contributions of this paper are as follows: Design Principles and Challenges In this section we provide a closer look into the challenge and propose a collection of tasks that move us closer to substantiated perspective discovery. To clarify our description we use to following notation. Let INLINEFORM0 indicate a target claim of interest (for example, the claims INLINEFORM1 and INLINEFORM2 in Figure FIGREF6 ). Each claim INLINEFORM3 is addressed by a collection of perspectives INLINEFORM4 that are grouped into clusters of equivalent perspectives. Additionally, each perspective INLINEFORM5 is supported, relative to INLINEFORM6 , by at least one evidence paragraph INLINEFORM7 , denoted INLINEFORM8 . Creating systems that would address our challenge in its full glory requires solving the following interdependent tasks: Determination of argue-worthy claims: not every claim requires an in-depth discussion of perspectives. For a system to be practical, it needs to be equipped with understanding argumentative structures BIBREF3 in order to discern disputed claims from those with straightforward responses. We set aside this problem in this work and assume that all the inputs to the systems are discussion-worthy claims. Discovery of pertinent perspectives: a system is expected to recognize argumentative sentences BIBREF4 that directly address the points raised in the disputed claim. For example, while the perspectives in Figure FIGREF6 are topically related to the claims, INLINEFORM0 do not directly address the focus of claim INLINEFORM1 (i.e., “use of animals” in “entertainment”). Perspective equivalence: a system is expected to extract a minimal and diverse set of perspectives. This requires the ability to discover equivalent perspectives INLINEFORM0 , with respect to a claim INLINEFORM1 : INLINEFORM2 . For instance, INLINEFORM3 and INLINEFORM4 are equivalent in the context of INLINEFORM5 ; however, they might not be equivalent with respect to any other claim. The conditional nature of perspective equivalence differentiates it from the paraphrasing task BIBREF5 . Stance classification of perspectives: a system is supposed to assess the stances of the perspectives with respect to the given claim (supporting, opposing, etc.) BIBREF6 . Substantiating the perspectives: a system is expected to find valid evidence paragraph(s) in support of each perspective. Conceptually, this is similar to the well-studied problem of textual entailment BIBREF7 except that here the entailment decisions depend on the choice of claims. Dataset construction In this section we describe a multi-step process, constructed with detailed analysis, substantial refinements and multiple pilots studies. We use crowdsourcing to annotate different aspects of the dataset. We used Amazon Mechanical Turk (AMT) for our annotations, restricting the task to workers in five English-speaking countries (USA, UK, Canada, New Zealand, and Australia), more than 1000 finished HITs and at least a 95% acceptance rate. 
To ensure the diversity of responses, we do not require additional qualifications or demographic information from our annotators. For any of the annotations steps described below, the users are guided to an external platform where they first read the instructions and try a verification step to make sure they have understood the instructions. Only after successful completion are they allowed to start the annotation tasks. Throughout our annotations, it is our aim to make sure that the workers are responding objectively to the tasks (as opposed to using their personal opinions or preferences). The screen-shots of the annotation interfaces for each step are included in the Appendix (Section SECREF56 ). In the steps outlined below, we filter out a subset of the data with low rater–rater agreement INLINEFORM0 (see Appendix SECREF47 ). In certain steps, we use an information retrieval (IR) system to generate the best candidates for the task at hand. We start by crawling the content of a few notable debating websites: idebate.com, debatewise.org, procon.org. This yields INLINEFORM0 claims, INLINEFORM1 perspectives and INLINEFORM2 evidence paragraphs (for complete statistics, see Table TABREF46 in the Appendix). This data is significantly noisy and lacks the structure we would like. In the following steps we explain how we denoise it and augment it with additional data. For each perspective we verify that it is a complete English sentence, with a clear stance with respect to the given claim. For a fixed pair of claim and perspective, we ask the crowd-workers to label the perspective with one of the five categories of support, oppose, mildly-support, mildly-oppose, or not a valid perspective. The reason that we ask for two levels of intensity is to distinguish mild or conditional arguments from those that express stronger positions. Every 10 claims (and their relevant perspectives) are bundled to form a HIT. Three independent annotators solve a HIT, and each gets paid $1.5-2 per HIT. To get rid of the ambiguous/noisy perspectives we measure rater-rater agreement on the resulting data and retain only the subset which has a significant agreement of INLINEFORM0 . To account for minor disagreements in the intensity of perspective stances, before measuring any notion of agreement, we collapse the five labels into three labels, by collapsing mildly-support and mildly-oppose into support and oppose, respectively. To assess the quality of these annotations, two of the authors independently annotate a random subset of instances in the previous step (328 perspectives for 10 claims). Afterwards, the differences were adjudicated. We measure the accuracy adjudicated results with AMT annotations to estimate the quality of our annotation. This results in an accuracy of 94%, which shows high-agreement with the crowdsourced annotations. To enrich the ways the perspectives are phrased, we crowdsource paraphrases of our perspectives. We ask annotators to generate two paraphrases for each of the 15 perspectives in each HIT, for a reward of $1.50. Subsequently, we perform another round of crowdsourcing to verify the generated paraphrases. We create HITs of 24 candidate paraphrases to be verified, with a reward of $1. Overall, this process gives us INLINEFORM0 paraphrased perspectives. The collected paraphrases form clusters of equivalent perspectives, which we refine further in the later steps. 
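As a sketch of the label post-processing just described, the five crowdsourced stance labels could be collapsed and the resulting annotations filtered by rater agreement roughly as below; the 0.5 threshold is illustrative, and the agreement function mirrors the measure defined in the paper's appendix.

```python
COLLAPSE = {
    "support": "support",
    "mildly-support": "support",
    "oppose": "oppose",
    "mildly-oppose": "oppose",
    "not a valid perspective": "invalid",
}


def collapse_labels(labels):
    """Collapse the five stance labels into three before measuring agreement."""
    return [COLLAPSE[label] for label in labels]


def keep_annotation(labels, min_agreement=0.5):
    """Retain a claim-perspective pair only if rater agreement on the collapsed
    labels is high enough (the threshold here is a placeholder)."""
    collapsed = collapse_labels(labels)
    n = len(collapsed)
    if n < 2:
        return False
    counts = [collapsed.count(c) for c in set(collapsed)]
    agreement = sum(c * (c - 1) for c in counts) / (n * (n - 1))
    return agreement >= min_agreement
```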
In order to ensure that our dataset contains more realistic sentences, we use web search to augment our pool of perspectives with additional sentences that are topically related to what we already have. Specifically, we use Bing search to extract sentences that are similar to our current pool of perspectives, by querying “claim+perspective”. We create a pool of relevant web sentences and use an IR system (introduced earlier) to retrieve the 10 most similar sentences. These candidate perspectives are annotated using (similar to step 2a) and only those that were agreed upon are retained. In a final round of annotation for perspectives, an expert annotator went over all the claims in order to verify that all the equivalent perspectives are clustered together. Subsequently, the expert annotator went over the most similar claim-pairs (and their perspectives), in order to annotate the missing perspectives shared between the two claims. To cut the space of claim pairs, the annotation was done on the top 350 most similar claim pairs retrieved by the IR system. The goal of this step is to decide whether a given evidence paragraph provides enough substantiations for a perspective or not. Performing these annotations exhaustively for any perspective-evidence pair is not possible. Instead, we make use of a retrieval system to annotate only the relevant pairs. In particular, we create an index of all the perspectives retained from step 2a. For a given evidence paragraph, we retrieve the top relevant perspectives. We ask the annotators to note whether a given evidence paragraph supports a given perspective or not. Each HIT contains a 20 evidence paragraphs and their top 8 relevant candidate perspectives. Each HIT is paid $1 and annotated by at least 4 independent annotators. In order to assess the quality of our annotations, a random subset of instances (4 evidence-perspective pairs) are annotated by two independent authors and the differences are adjudicated. We measure the accuracy of our adjudicated labels versus AMT labels, resulting in 87.7%. This indicates the high quality of the crowdsourced data. Statistics on the dataset We now provide a brief summary of [wave]390P[wave]415e[wave]440r[wave]465s[wave]485p[wave]525e[wave]535c[wave]595t[wave]610r[wave]635u[wave]660m. The dataset contains about INLINEFORM0 claims with a significant length diversity (Table TABREF19 ). Additionally, the dataset comes with INLINEFORM1 perspectives, most of which were generated through paraphrasing (step 2b). The perspectives which convey the same point with respect to a claim are grouped into clusters. On average, each cluster has a size of INLINEFORM2 which shows that, on average, many perspectives have equivalents. More granular details are available in Table TABREF19 . To better understand the topical breakdown of claims in the dataset, we crowdsource the set of “topics” associated with each claim (e.g., Law, Ethics, etc.) We observe that, as expected, the three topics of Politics, World, and Society have the biggest portions (Figure FIGREF21 ). Additionally, the included claims touch upon 10+ different topics. Figure FIGREF22 depicts a few popular categories and sampled questions from each. Required skills We perform a closer investigation of the abilities required to solve the stance classification task. One of the authors went through a random subset of claim-perspectives pairs and annotated each with the abilities required in determining their stances labels. 
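Several of the annotation steps above rely on an IR system to generate candidates (e.g., the top 8 perspectives shown for each evidence paragraph). A rough stand-in for that component, assuming the rank_bm25 package rather than the paper's actual retrieval system, might look like this:

```python
from rank_bm25 import BM25Okapi

# Toy pool of perspectives; in the paper this index holds all retained perspectives.
perspectives = [
    "Animals have no interest or rationality.",
    "Animals are similar to humans in morally relevant ways.",
    "Zoos serve an important educational purpose.",
]
evidence_paragraph = "Studies of primate cognition suggest planning and rational behaviour."

index = BM25Okapi([p.lower().split() for p in perspectives])

# Retrieve the top candidate perspectives for this evidence paragraph.
candidates = index.get_top_n(evidence_paragraph.lower().split(), perspectives, n=8)
print(candidates)
```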
We follow the common definitions used in prior work BIBREF37 , BIBREF38 . The result of this annotation is depicted in Figure FIGREF24 . As can be seen, the problem requires understanding of common-sense, i.e., an understanding that is commonly shared among humans and rarely gets explicitly mentioned in the text. Additionally, the task requires various types of coreference understanding, such as event coreference and entity coreference. Empirical Analysis In this section we provide empirical analysis to address the tasks. We create a split of 60%/15%/25% of the data train/dev/test. In order to make sure our baselines are not overfitting to the keywords of each topic (the “topic” annotation from Section SECREF20 ), we make sure to have claims with the same topic fall into the same split. For simplicity, we define a notation which we will extensively use for the rest of this paper. The clusters of equivalent perspectives are denoted as INLINEFORM0 , given a representative member INLINEFORM1 . Let INLINEFORM2 denote the collection of relevant perspectives to a claim INLINEFORM3 , which is the union of all the equivalent perspectives participating in the claim: INLINEFORM4 . Let INLINEFORM5 denote the set of evidence documents lending support to a perspective INLINEFORM6 . Additionally, denote the two pools of perspectives and evidence with INLINEFORM7 and INLINEFORM8 , respectively. Systems We make use of the following systems in our evaluation: (Information Retrieval). This baseline has been successfully used for related tasks like Question Answering BIBREF39 . We create two versions of this baseline: one with the pool of perspectives INLINEFORM0 and one with the pool of evidences INLINEFORM1 . We use this system to retrieve a ranked list of best matching perspective/evidence from the corresponding index. (Contextual representations). A recent state-of-the-art contextualized representation BIBREF40 . This system has been shown to be effective on a broad range of natural language understanding tasks. Human performance provides us with an estimate of the best achievable results on datasets. We use human annotators to measure human performance for each task. We randomly sample 10 claims from the test set, and instruct two expert annotators to solve each of T1 to T4. Evaluation metrics. We perform evaluations on four different subtasks in our dataset. In all of the following evaluations, the systems are given the two pools of perspectives INLINEFORM0 and evidences INLINEFORM1 . A system is expected to return the collection of mutually disjoint perspectives with respect to a given claim. Let INLINEFORM0 be the set of output perspectives. Define the precision and recall as INLINEFORM1 and INLINEFORM2 respectively. To calculate dataset metrics, the aforementioned per-claim metrics are averaged across all the claims in the test set. Given a claim, a system is expected to label every perspective in INLINEFORM0 with one of two labels support or oppose. We use the well-established definitions of precision-recall for this binary classification task. A system is expected to decide whether two given perspectives are equivalent or not, with respect to a given claim. We evaluate this task in a way similar to a clustering problem. For a pair of perspectives INLINEFORM0 , a system predicts whether the two are in the same cluster or not. The ground-truth is whether there is a cluster which contains both of the perspectives or not: INLINEFORM1 . 
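As an aside, to make the notation above concrete, a minimal data model for claims, perspective clusters, and evidence pools could look as follows; every field name here is an illustrative assumption rather than the released data format.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set


@dataclass
class PerspectiveCluster:
    representative: str                     # a representative member of the equivalence class
    members: Set[str] = field(default_factory=set)


@dataclass
class Claim:
    text: str
    topic: str                              # used to keep the splits topic-disjoint
    clusters: List[PerspectiveCluster] = field(default_factory=list)

    def relevant_perspectives(self) -> Set[str]:
        """The union of all equivalent perspectives participating in the claim."""
        union: Set[str] = set()
        for cluster in self.clusters:
            union |= cluster.members
        return union


# Evidence paragraphs lending support to each perspective.
evidence_by_perspective: Dict[str, Set[str]] = {}
```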
We use this pairwise definition for all the pairs in INLINEFORM2 , for any claim INLINEFORM3 in the test set. Given a perspective INLINEFORM0 , we expect a system to return all the evidence INLINEFORM1 from the pool of evidence INLINEFORM2 . Let INLINEFORM3 and INLINEFORM4 be the predicted and gold evidence for a perspective INLINEFORM5 . Define macro-precision and macro-recall as INLINEFORM6 and INLINEFORM7 , respectively. The metrics are averaged across all the perspectives INLINEFORM8 participating in the test set. The goal is to get estimates of the overall performance of the systems. Instead of creating a complex measure that would take all the aspects into account, we approximate the overall performance by multiplying the disjoint measures in INLINEFORM0 , INLINEFORM1 and INLINEFORM2 . While this gives an estimate on the overall quality, it ignores the pipeline structure of the task (e.g., the propagation of the errors throughout the pipeline). We note that the task of INLINEFORM3 (perspective equivalence) is indirectly being measured within INLINEFORM4 . Furthermore, since we do not report an IR performance on INLINEFORM5 , we use the “always supp” baseline instead to estimate an overall performance for IR. Results Table TABREF40 shows a summary of the experimental results. To measure the performance of the IR system, we use the index containing INLINEFORM0 . Given each claim, we query the top INLINEFORM1 perspectives, ranked according to their retrieval scores. We tune INLINEFORM2 on our development set and report the results on the test section according to the tuned parameter. We use IR results as candidates for other solvers (including humans). For this task, IR with top-15 candidates yields INLINEFORM3 90% recall (for the PR-curve, see Figure FIGREF53 in the Appendix). In order to train BERT on this task, we use the IR candidates as the training instances. We then tune a threshold on the dev data to select the top relevant perspectives. In order to measure human performance, we create an interface where two human annotators see IR top- INLINEFORM4 and select a minimal set of perspectives (i.e., no two equivalent perspectives). We measure the quality of perspective stance classification, where the input is a claim-perspective pair, mapped to {support, oppose}. The candidate inputs are generated on the collection of perspectives INLINEFORM0 relevant to a claim INLINEFORM1 . To have an understanding of a lower bound for the metric, we measure the quality of an always-support baseline. We measure the performance of BERT on this task as well, which is about 20% below human performance. This might be because this task requires a deep understanding of commonsense knowledge/reasoning (as indicated earlier in Section SECREF5 ). Since a retrieval system is unlikely to distinguish perspectives with different stances, we do not report the IR performance for this task. We create instances in the form of INLINEFORM0 where INLINEFORM1 . The expected label is whether the two perspectives belong to the same equivalence class or not. In the experiments, we observe that BERT has a significant performance gain of INLINEFORM2 over the IR baseline. Meanwhile, this system is behind human performance by a margin of INLINEFORM3 . We evaluate the systems on the extraction of items from the pool of evidences INLINEFORM0 , given a claim-perspective pair. 
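Both the stance and the equivalence baselines above feed a pair of sentences into BERT. The sketch below only illustrates that pair encoding, assuming the HuggingFace transformers library and an untrained two-class head, so its output is meaningless until the model is fine-tuned on the training split.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

claim = "Animals should have lawful rights."
perspective = "Animals have no interest or rationality."

# Encode the pair with the [CLS] claim [SEP] perspective [SEP] convention.
inputs = tokenizer(claim, perspective, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits        # shape: (1, 2)

# After fine-tuning, argmax over the logits gives support/oppose (or, for the
# equivalence task, same-cluster/different-cluster for a perspective pair).
print(logits.argmax(dim=-1).item())
```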
To measure the performance of the IR system working with the index containing INLINEFORM1 we issue a query containing the concatenation of a perspective-claim pair. Given the sorted results (according to their retrieval confidence score), we select the top candidates using a threshold parameter tuned on the dev set. We also use the IR system's candidates (top-60) for other baselines. This set of candidates yields a INLINEFORM2 85% recall (for the PR-curve, see Figure FIGREF53 in the Appendix). We train BERT system to map each (gold) claim-perspective pair to its corresponding evidence paragraph(s). Since each evidence paragraph could be long (hence hard to feed into BERT), we split each evidence paragraph into sliding windows of 3 sentences. For each claim-perspective pair, we use all 3-sentences windows of gold evidence paragraphs as positive examples, and rest of the IR candidates as negative examples. In the run-time, if a certain percentage (tuned on the dev set) of the sentences from a given evidence paragraph are predicted as positive by BERT, we consider the whole evidence as positive (i.e. it supports a given perspective). Overall, the performances on this task are lower, which could probably be expected, considering the length of the evidence paragraphs. Similar to the previous scenarios, the BERT solver has a significant gain over a trivial baseline, while standing behind human with a significant margin. Discussion As one of the key consequences of the information revolution, information pollution and over-personalization have already had detrimental effects on our life. In this work, we attempt to facilitate the development of systems that aid in better organization and access to information, with the hope that the access to more diverse information can address over-personalization too BIBREF41 . The dataset presented here is not intended to be exhaustive, nor does it attempt to reflect a true distribution of the important claims and perspectives in the world, or to associate any of the perspective and identified evidence with levels of expertise and trustworthiness. Moreover, it is important to note that when we ask crowd-workers to evaluate the validity of perspectives and evidence, their judgement process can potentially be influenced by their prior beliefs BIBREF42 . To avoid additional biases introduced in the process of dataset construction, we try to take the least restrictive approach in filtering dataset content beyond the necessary quality assurances. For this reason, we choose not to explicitly ask annotators to filter contents based on the intention of their creators (e.g. offensive content). A few algorithmic components were not addressed in this work, although they are important to the complete perspective discovery and presentation pipeline. For instance, one has to first verify that the input to the system is a reasonably well-phrased and an argue-worthy claim. And, to construct the pool of perspectives, one has to extract relevant arguments BIBREF43 . In a similar vein, since our main focus is the study of the relations between claims, perspectives, and evidence, we leave out important issues such as their degree of factuality BIBREF8 or trustworthiness BIBREF44 , BIBREF1 as separate aspects of problem. We hope that some of these challenges and limitations will be addressed in future work. 
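Both the perspective-relevance and the evidence baselines above turn classifier scores into final outputs with a threshold tuned on the dev split; a minimal version of that tuning loop is sketched here (the grid and the data layout are assumptions).

```python
def select(scored_items, threshold):
    """Keep items whose relevance score reaches the threshold."""
    return {item for item, score in scored_items if score >= threshold}


def tune_threshold(dev_examples, grid=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Pick the threshold maximising mean F1 on the dev split.

    dev_examples: list of (scored_items, gold_set) pairs, where scored_items is
    a list of (item, score) tuples produced by the classifier.
    """
    def f1(pred, gold):
        if not pred or not gold:
            return 0.0
        tp = len(pred & gold)
        p, r = tp / len(pred), tp / len(gold)
        return 2 * p * r / (p + r) if p + r else 0.0

    return max(grid, key=lambda t: sum(f1(select(s, t), g) for s, g in dev_examples))
```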
Conclusion The importance of this work is three-fold; we define the problem of substantiated perspective discovery and characterize language understanding tasks necessary to address this problem. We combine online resources, web data and crowdsourcing and create a high-quality dataset, in order to drive research on this problem. Finally, we build and evaluate strong baseline supervised systems for this problem. Our hope is that this dataset would bring more attention to this important problem and would speed up the progress in this direction. There are two aspects that we defer to future work. First, the systems designed here assumed that the input are valid claim sentences. To make use of such systems, one needs to develop mechanisms to recognize valid argumentative structures. In addition, we ignore trustworthiness and credibility issues, important research issues that are addressed in other works. Acknowledgments The authors would like to thank Jennifer Sheffield, Stephen Mayhew, Shyam Upadhyay, Nitish Gupta and the anonymous reviewers for insightful comments and suggestions. This work was supported in part by a gift from Google and by Contract HR0011-15-2-0025 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. Statistics We provide brief statistics on the sources of different content in our dataset in Table TABREF46 . In particular, this table shows: the size of the data collected from online debate websites (step 1). the size of the data filtered out (step 2a). the size of the perspectives added by paraphrases (step 2b). the size of the perspective candidates added by web (step 2c). Measure of agreement We use the following definition formula in calculation of our measure of agreement. For a fixed subject (problem instance), let INLINEFORM0 represent the number of raters who assigned the given subject to the INLINEFORM1 -th category. The measure of agreement is defined as INLINEFORM2 where for INLINEFORM0 . Intuitively, this function measure concentration of values the vector INLINEFORM1 . Take the edge cases: Values concentrated: INLINEFORM0 (in other words INLINEFORM1 ) INLINEFORM2 . Least concentration (uniformly distribution): INLINEFORM0 . This definition is used in calculation of more extensive agreement measures (e.g, Fleiss' kappa BIBREF49 ). There multiple ways of interpreting this formula: It indicates how many rater–rater pairs are in agreement, relative to the number of all possible rater–rater pairs. One can interpret this measure by a simple combinatorial notions. Suppose we have sets INLINEFORM0 which are pairwise disjunct and for each INLINEFORM1 let INLINEFORM2 . We choose randomly two elements from INLINEFORM3 . Then the probability that they are from the same set is the expressed by INLINEFORM4 . We can write INLINEFORM0 in terms of INLINEFORM1 which is the conventional Chi-Square statistic for testing if the vector of INLINEFORM2 values comes from the all-categories-equally-likely flat multinomial model.
one needs to develop mechanisms to recognize valid argumentative structures, we ignore trustworthiness and credibility issues
01dc6893fc2f49b732449dfe1907505e747440b0
01dc6893fc2f49b732449dfe1907505e747440b0_0
Q: What debate topics are included in the dataset? Text: Introduction Understanding most nontrivial claims requires insights from various perspectives. Today, we make use of search engines or recommendation systems to retrieve information relevant to a claim, but this process carries multiple forms of bias. In particular, they are optimized relative to the claim (query) presented, and the popularity of the relevant documents returned, rather than with respect to the diversity of the perspectives presented in them or whether they are supported by evidence. In this paper, we explore an approach to mitigating this selection bias BIBREF0 when studying (disputed) claims. Consider the claim shown in Figure FIGREF1 : “animals should have lawful rights.” One might compare the biological similarities/differences between humans and other animals to support/oppose the claim. Alternatively, one can base an argument on morality and rationality of animals, or lack thereof. Each of these arguments, which we refer to as perspectives throughout the paper, is an opinion, possibly conditional, in support of a given claim or against it. A perspective thus constitutes a particular attitude towards a given claim. Natural language understanding is at the heart of developing an ability to identify diverse perspectives for claims. In this work, we propose and study a setting that would facilitate discovering diverse perspectives and their supporting evidence with respect to a given claim. Our goal is to identify and formulate the key NLP challenges underlying this task, and develop a dataset that would allow a systematic study of these challenges. For example, for the claim in Figure FIGREF1 , multiple (non-redundant) perspectives should be retrieved from a pool of perspectives; one of them is “animals have no interest or rationality”, a perspective that should be identified as taking an opposing stance with respect to the claim. Each perspective should also be well-supported by evidence found in a pool of potential pieces of evidence. While it might be impractical to provide an exhaustive spectrum of ideas with respect to a claim, presenting a small but diverse set of perspectives could be an important step towards addressing the selection bias problem. Moreover, it would be impractical to develop an exhaustive pool of evidence for all perspectives, from a diverse set of credible sources. We are not attempting to do that. We aim at formulating the core NLP problems, and developing a dataset that will facilitate studying these problems from the NLP angle, realizing that using the outcomes of this research in practice requires addressing issues such as trustworthiness BIBREF1 , BIBREF2 and possibly others. Inherently, our objective requires understanding the relations between perspectives and claims, the nuances in the meaning of various perspectives in the context of claims, and relations between perspectives and evidence. This, we argue, can be done with a diverse enough, but not exhaustive, dataset. And it can be done without attending to the legitimacy and credibility of sources contributing evidence, an important problem but orthogonal to the one studied here. To facilitate the research towards developing solutions to such challenging issues, we propose [wave]390P[wave]415e[wave]440r[wave]465s[wave]485p[wave]525e[wave]535c[wave]595t[wave]610r[wave]635u[wave]660m, a dataset of claims, perspectives and evidence paragraphs. 
For a given claim and pools of perspectives and evidence paragraphs, a hypothetical system is expected to select the relevant perspectives and their supporting paragraphs. Our dataset contains 907 claims, 11,164 perspectives and 8,092 evidence paragraphs. In constructing it, we use online debate websites as our initial seed data, and augment it with search data and paraphrases to make it richer and more challenging. We make extensive use of crowdsourcing to increase the quality of the data and clean it from annotation noise. The contributions of this paper are as follows: Design Principles and Challenges In this section we provide a closer look into the challenge and propose a collection of tasks that move us closer to substantiated perspective discovery. To clarify our description we use to following notation. Let INLINEFORM0 indicate a target claim of interest (for example, the claims INLINEFORM1 and INLINEFORM2 in Figure FIGREF6 ). Each claim INLINEFORM3 is addressed by a collection of perspectives INLINEFORM4 that are grouped into clusters of equivalent perspectives. Additionally, each perspective INLINEFORM5 is supported, relative to INLINEFORM6 , by at least one evidence paragraph INLINEFORM7 , denoted INLINEFORM8 . Creating systems that would address our challenge in its full glory requires solving the following interdependent tasks: Determination of argue-worthy claims: not every claim requires an in-depth discussion of perspectives. For a system to be practical, it needs to be equipped with understanding argumentative structures BIBREF3 in order to discern disputed claims from those with straightforward responses. We set aside this problem in this work and assume that all the inputs to the systems are discussion-worthy claims. Discovery of pertinent perspectives: a system is expected to recognize argumentative sentences BIBREF4 that directly address the points raised in the disputed claim. For example, while the perspectives in Figure FIGREF6 are topically related to the claims, INLINEFORM0 do not directly address the focus of claim INLINEFORM1 (i.e., “use of animals” in “entertainment”). Perspective equivalence: a system is expected to extract a minimal and diverse set of perspectives. This requires the ability to discover equivalent perspectives INLINEFORM0 , with respect to a claim INLINEFORM1 : INLINEFORM2 . For instance, INLINEFORM3 and INLINEFORM4 are equivalent in the context of INLINEFORM5 ; however, they might not be equivalent with respect to any other claim. The conditional nature of perspective equivalence differentiates it from the paraphrasing task BIBREF5 . Stance classification of perspectives: a system is supposed to assess the stances of the perspectives with respect to the given claim (supporting, opposing, etc.) BIBREF6 . Substantiating the perspectives: a system is expected to find valid evidence paragraph(s) in support of each perspective. Conceptually, this is similar to the well-studied problem of textual entailment BIBREF7 except that here the entailment decisions depend on the choice of claims. Dataset construction In this section we describe a multi-step process, constructed with detailed analysis, substantial refinements and multiple pilots studies. We use crowdsourcing to annotate different aspects of the dataset. We used Amazon Mechanical Turk (AMT) for our annotations, restricting the task to workers in five English-speaking countries (USA, UK, Canada, New Zealand, and Australia), more than 1000 finished HITs and at least a 95% acceptance rate. 
To ensure the diversity of responses, we do not require additional qualifications or demographic information from our annotators. For any of the annotations steps described below, the users are guided to an external platform where they first read the instructions and try a verification step to make sure they have understood the instructions. Only after successful completion are they allowed to start the annotation tasks. Throughout our annotations, it is our aim to make sure that the workers are responding objectively to the tasks (as opposed to using their personal opinions or preferences). The screen-shots of the annotation interfaces for each step are included in the Appendix (Section SECREF56 ). In the steps outlined below, we filter out a subset of the data with low rater–rater agreement INLINEFORM0 (see Appendix SECREF47 ). In certain steps, we use an information retrieval (IR) system to generate the best candidates for the task at hand. We start by crawling the content of a few notable debating websites: idebate.com, debatewise.org, procon.org. This yields INLINEFORM0 claims, INLINEFORM1 perspectives and INLINEFORM2 evidence paragraphs (for complete statistics, see Table TABREF46 in the Appendix). This data is significantly noisy and lacks the structure we would like. In the following steps we explain how we denoise it and augment it with additional data. For each perspective we verify that it is a complete English sentence, with a clear stance with respect to the given claim. For a fixed pair of claim and perspective, we ask the crowd-workers to label the perspective with one of the five categories of support, oppose, mildly-support, mildly-oppose, or not a valid perspective. The reason that we ask for two levels of intensity is to distinguish mild or conditional arguments from those that express stronger positions. Every 10 claims (and their relevant perspectives) are bundled to form a HIT. Three independent annotators solve a HIT, and each gets paid $1.5-2 per HIT. To get rid of the ambiguous/noisy perspectives we measure rater-rater agreement on the resulting data and retain only the subset which has a significant agreement of INLINEFORM0 . To account for minor disagreements in the intensity of perspective stances, before measuring any notion of agreement, we collapse the five labels into three labels, by collapsing mildly-support and mildly-oppose into support and oppose, respectively. To assess the quality of these annotations, two of the authors independently annotate a random subset of instances in the previous step (328 perspectives for 10 claims). Afterwards, the differences were adjudicated. We measure the accuracy adjudicated results with AMT annotations to estimate the quality of our annotation. This results in an accuracy of 94%, which shows high-agreement with the crowdsourced annotations. To enrich the ways the perspectives are phrased, we crowdsource paraphrases of our perspectives. We ask annotators to generate two paraphrases for each of the 15 perspectives in each HIT, for a reward of $1.50. Subsequently, we perform another round of crowdsourcing to verify the generated paraphrases. We create HITs of 24 candidate paraphrases to be verified, with a reward of $1. Overall, this process gives us INLINEFORM0 paraphrased perspectives. The collected paraphrases form clusters of equivalent perspectives, which we refine further in the later steps. 
In order to ensure that our dataset contains more realistic sentences, we use web search to augment our pool of perspectives with additional sentences that are topically related to what we already have. Specifically, we use Bing search to extract sentences that are similar to our current pool of perspectives, by querying “claim+perspective”. We create a pool of relevant web sentences and use an IR system (introduced earlier) to retrieve the 10 most similar sentences. These candidate perspectives are annotated using (similar to step 2a) and only those that were agreed upon are retained. In a final round of annotation for perspectives, an expert annotator went over all the claims in order to verify that all the equivalent perspectives are clustered together. Subsequently, the expert annotator went over the most similar claim-pairs (and their perspectives), in order to annotate the missing perspectives shared between the two claims. To cut the space of claim pairs, the annotation was done on the top 350 most similar claim pairs retrieved by the IR system. The goal of this step is to decide whether a given evidence paragraph provides enough substantiations for a perspective or not. Performing these annotations exhaustively for any perspective-evidence pair is not possible. Instead, we make use of a retrieval system to annotate only the relevant pairs. In particular, we create an index of all the perspectives retained from step 2a. For a given evidence paragraph, we retrieve the top relevant perspectives. We ask the annotators to note whether a given evidence paragraph supports a given perspective or not. Each HIT contains a 20 evidence paragraphs and their top 8 relevant candidate perspectives. Each HIT is paid $1 and annotated by at least 4 independent annotators. In order to assess the quality of our annotations, a random subset of instances (4 evidence-perspective pairs) are annotated by two independent authors and the differences are adjudicated. We measure the accuracy of our adjudicated labels versus AMT labels, resulting in 87.7%. This indicates the high quality of the crowdsourced data. Statistics on the dataset We now provide a brief summary of [wave]390P[wave]415e[wave]440r[wave]465s[wave]485p[wave]525e[wave]535c[wave]595t[wave]610r[wave]635u[wave]660m. The dataset contains about INLINEFORM0 claims with a significant length diversity (Table TABREF19 ). Additionally, the dataset comes with INLINEFORM1 perspectives, most of which were generated through paraphrasing (step 2b). The perspectives which convey the same point with respect to a claim are grouped into clusters. On average, each cluster has a size of INLINEFORM2 which shows that, on average, many perspectives have equivalents. More granular details are available in Table TABREF19 . To better understand the topical breakdown of claims in the dataset, we crowdsource the set of “topics” associated with each claim (e.g., Law, Ethics, etc.) We observe that, as expected, the three topics of Politics, World, and Society have the biggest portions (Figure FIGREF21 ). Additionally, the included claims touch upon 10+ different topics. Figure FIGREF22 depicts a few popular categories and sampled questions from each. Required skills We perform a closer investigation of the abilities required to solve the stance classification task. One of the authors went through a random subset of claim-perspectives pairs and annotated each with the abilities required in determining their stances labels. 
We follow the common definitions used in prior work BIBREF37 , BIBREF38 . The result of this annotation is depicted in Figure FIGREF24 . As can be seen, the problem requires understanding of common-sense, i.e., an understanding that is commonly shared among humans and rarely gets explicitly mentioned in the text. Additionally, the task requires various types of coreference understanding, such as event coreference and entity coreference. Empirical Analysis In this section we provide empirical analysis to address the tasks. We create a split of 60%/15%/25% of the data train/dev/test. In order to make sure our baselines are not overfitting to the keywords of each topic (the “topic” annotation from Section SECREF20 ), we make sure to have claims with the same topic fall into the same split. For simplicity, we define a notation which we will extensively use for the rest of this paper. The clusters of equivalent perspectives are denoted as INLINEFORM0 , given a representative member INLINEFORM1 . Let INLINEFORM2 denote the collection of relevant perspectives to a claim INLINEFORM3 , which is the union of all the equivalent perspectives participating in the claim: INLINEFORM4 . Let INLINEFORM5 denote the set of evidence documents lending support to a perspective INLINEFORM6 . Additionally, denote the two pools of perspectives and evidence with INLINEFORM7 and INLINEFORM8 , respectively. Systems We make use of the following systems in our evaluation: (Information Retrieval). This baseline has been successfully used for related tasks like Question Answering BIBREF39 . We create two versions of this baseline: one with the pool of perspectives INLINEFORM0 and one with the pool of evidences INLINEFORM1 . We use this system to retrieve a ranked list of best matching perspective/evidence from the corresponding index. (Contextual representations). A recent state-of-the-art contextualized representation BIBREF40 . This system has been shown to be effective on a broad range of natural language understanding tasks. Human performance provides us with an estimate of the best achievable results on datasets. We use human annotators to measure human performance for each task. We randomly sample 10 claims from the test set, and instruct two expert annotators to solve each of T1 to T4. Evaluation metrics. We perform evaluations on four different subtasks in our dataset. In all of the following evaluations, the systems are given the two pools of perspectives INLINEFORM0 and evidences INLINEFORM1 . A system is expected to return the collection of mutually disjoint perspectives with respect to a given claim. Let INLINEFORM0 be the set of output perspectives. Define the precision and recall as INLINEFORM1 and INLINEFORM2 respectively. To calculate dataset metrics, the aforementioned per-claim metrics are averaged across all the claims in the test set. Given a claim, a system is expected to label every perspective in INLINEFORM0 with one of two labels support or oppose. We use the well-established definitions of precision-recall for this binary classification task. A system is expected to decide whether two given perspectives are equivalent or not, with respect to a given claim. We evaluate this task in a way similar to a clustering problem. For a pair of perspectives INLINEFORM0 , a system predicts whether the two are in the same cluster or not. The ground-truth is whether there is a cluster which contains both of the perspectives or not: INLINEFORM1 . 
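A minimal sketch of the topic-constrained 60%/15%/25% split described at the start of this section, in which all claims sharing a topic land in the same split; the greedy assignment below is an illustrative heuristic, so the resulting proportions are only approximate.

```python
import random
from collections import defaultdict


def topic_constrained_split(claims, topic_of, ratios=(0.60, 0.15, 0.25), seed=0):
    """Split claim ids into train/dev/test so that claims sharing a topic
    always fall into the same split.

    claims:   list of claim ids
    topic_of: dict mapping claim id -> topic label
    """
    by_topic = defaultdict(list)
    for claim in claims:
        by_topic[topic_of[claim]].append(claim)

    topics = list(by_topic)
    random.Random(seed).shuffle(topics)

    train, dev, test = [], [], []
    total = len(claims)
    for topic in topics:
        # Greedily assign whole topics until each split reaches its quota.
        if len(train) < ratios[0] * total:
            train.extend(by_topic[topic])
        elif len(dev) < ratios[1] * total:
            dev.extend(by_topic[topic])
        else:
            test.extend(by_topic[topic])
    return train, dev, test
```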
We use this pairwise definition for all the pairs in INLINEFORM2 , for any claim INLINEFORM3 in the test set. Given a perspective INLINEFORM0 , we expect a system to return all the evidence INLINEFORM1 from the pool of evidence INLINEFORM2 . Let INLINEFORM3 and INLINEFORM4 be the predicted and gold evidence for a perspective INLINEFORM5 . Define macro-precision and macro-recall as INLINEFORM6 and INLINEFORM7 , respectively. The metrics are averaged across all the perspectives INLINEFORM8 participating in the test set. The goal is to get estimates of the overall performance of the systems. Instead of creating a complex measure that would take all the aspects into account, we approximate the overall performance by multiplying the disjoint measures in INLINEFORM0 , INLINEFORM1 and INLINEFORM2 . While this gives an estimate on the overall quality, it ignores the pipeline structure of the task (e.g., the propagation of the errors throughout the pipeline). We note that the task of INLINEFORM3 (perspective equivalence) is indirectly being measured within INLINEFORM4 . Furthermore, since we do not report an IR performance on INLINEFORM5 , we use the “always supp” baseline instead to estimate an overall performance for IR. Results Table TABREF40 shows a summary of the experimental results. To measure the performance of the IR system, we use the index containing INLINEFORM0 . Given each claim, we query the top INLINEFORM1 perspectives, ranked according to their retrieval scores. We tune INLINEFORM2 on our development set and report the results on the test section according to the tuned parameter. We use IR results as candidates for other solvers (including humans). For this task, IR with top-15 candidates yields INLINEFORM3 90% recall (for the PR-curve, see Figure FIGREF53 in the Appendix). In order to train BERT on this task, we use the IR candidates as the training instances. We then tune a threshold on the dev data to select the top relevant perspectives. In order to measure human performance, we create an interface where two human annotators see IR top- INLINEFORM4 and select a minimal set of perspectives (i.e., no two equivalent perspectives). We measure the quality of perspective stance classification, where the input is a claim-perspective pair, mapped to {support, oppose}. The candidate inputs are generated on the collection of perspectives INLINEFORM0 relevant to a claim INLINEFORM1 . To have an understanding of a lower bound for the metric, we measure the quality of an always-support baseline. We measure the performance of BERT on this task as well, which is about 20% below human performance. This might be because this task requires a deep understanding of commonsense knowledge/reasoning (as indicated earlier in Section SECREF5 ). Since a retrieval system is unlikely to distinguish perspectives with different stances, we do not report the IR performance for this task. We create instances in the form of INLINEFORM0 where INLINEFORM1 . The expected label is whether the two perspectives belong to the same equivalence class or not. In the experiments, we observe that BERT has a significant performance gain of INLINEFORM2 over the IR baseline. Meanwhile, this system is behind human performance by a margin of INLINEFORM3 . We evaluate the systems on the extraction of items from the pool of evidences INLINEFORM0 , given a claim-perspective pair. 
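The equivalence task above is scored as a clustering problem over perspective pairs; the pairwise precision/recall it describes could be computed as in the sketch below (function and argument names are illustrative).

```python
from itertools import combinations


def pairwise_equivalence_scores(perspectives, predicted_same, gold_clusters):
    """Pairwise precision/recall for perspective equivalence.

    predicted_same(a, b) -> bool: the system's decision that a and b are equivalent.
    gold_clusters: list of sets; a pair is gold-equivalent iff some cluster
    contains both perspectives.
    """
    def gold_same(a, b):
        return any(a in cluster and b in cluster for cluster in gold_clusters)

    tp = fp = fn = 0
    for a, b in combinations(perspectives, 2):
        pred, gold = predicted_same(a, b), gold_same(a, b)
        tp += int(pred and gold)
        fp += int(pred and not gold)
        fn += int(gold and not pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```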
To measure the performance of the IR system working with the index containing INLINEFORM1 we issue a query containing the concatenation of a perspective-claim pair. Given the sorted results (according to their retrieval confidence score), we select the top candidates using a threshold parameter tuned on the dev set. We also use the IR system's candidates (top-60) for other baselines. This set of candidates yields a INLINEFORM2 85% recall (for the PR-curve, see Figure FIGREF53 in the Appendix). We train BERT system to map each (gold) claim-perspective pair to its corresponding evidence paragraph(s). Since each evidence paragraph could be long (hence hard to feed into BERT), we split each evidence paragraph into sliding windows of 3 sentences. For each claim-perspective pair, we use all 3-sentences windows of gold evidence paragraphs as positive examples, and rest of the IR candidates as negative examples. In the run-time, if a certain percentage (tuned on the dev set) of the sentences from a given evidence paragraph are predicted as positive by BERT, we consider the whole evidence as positive (i.e. it supports a given perspective). Overall, the performances on this task are lower, which could probably be expected, considering the length of the evidence paragraphs. Similar to the previous scenarios, the BERT solver has a significant gain over a trivial baseline, while standing behind human with a significant margin. Discussion As one of the key consequences of the information revolution, information pollution and over-personalization have already had detrimental effects on our life. In this work, we attempt to facilitate the development of systems that aid in better organization and access to information, with the hope that the access to more diverse information can address over-personalization too BIBREF41 . The dataset presented here is not intended to be exhaustive, nor does it attempt to reflect a true distribution of the important claims and perspectives in the world, or to associate any of the perspective and identified evidence with levels of expertise and trustworthiness. Moreover, it is important to note that when we ask crowd-workers to evaluate the validity of perspectives and evidence, their judgement process can potentially be influenced by their prior beliefs BIBREF42 . To avoid additional biases introduced in the process of dataset construction, we try to take the least restrictive approach in filtering dataset content beyond the necessary quality assurances. For this reason, we choose not to explicitly ask annotators to filter contents based on the intention of their creators (e.g. offensive content). A few algorithmic components were not addressed in this work, although they are important to the complete perspective discovery and presentation pipeline. For instance, one has to first verify that the input to the system is a reasonably well-phrased and an argue-worthy claim. And, to construct the pool of perspectives, one has to extract relevant arguments BIBREF43 . In a similar vein, since our main focus is the study of the relations between claims, perspectives, and evidence, we leave out important issues such as their degree of factuality BIBREF8 or trustworthiness BIBREF44 , BIBREF1 as separate aspects of problem. We hope that some of these challenges and limitations will be addressed in future work. 
Conclusion The importance of this work is three-fold; we define the problem of substantiated perspective discovery and characterize language understanding tasks necessary to address this problem. We combine online resources, web data and crowdsourcing and create a high-quality dataset, in order to drive research on this problem. Finally, we build and evaluate strong baseline supervised systems for this problem. Our hope is that this dataset would bring more attention to this important problem and would speed up the progress in this direction. There are two aspects that we defer to future work. First, the systems designed here assumed that the input are valid claim sentences. To make use of such systems, one needs to develop mechanisms to recognize valid argumentative structures. In addition, we ignore trustworthiness and credibility issues, important research issues that are addressed in other works. Acknowledgments The authors would like to thank Jennifer Sheffield, Stephen Mayhew, Shyam Upadhyay, Nitish Gupta and the anonymous reviewers for insightful comments and suggestions. This work was supported in part by a gift from Google and by Contract HR0011-15-2-0025 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. Statistics We provide brief statistics on the sources of different content in our dataset in Table TABREF46 . In particular, this table shows: the size of the data collected from online debate websites (step 1). the size of the data filtered out (step 2a). the size of the perspectives added by paraphrases (step 2b). the size of the perspective candidates added by web (step 2c). Measure of agreement We use the following definition formula in calculation of our measure of agreement. For a fixed subject (problem instance), let INLINEFORM0 represent the number of raters who assigned the given subject to the INLINEFORM1 -th category. The measure of agreement is defined as INLINEFORM2 where for INLINEFORM0 . Intuitively, this function measure concentration of values the vector INLINEFORM1 . Take the edge cases: Values concentrated: INLINEFORM0 (in other words INLINEFORM1 ) INLINEFORM2 . Least concentration (uniformly distribution): INLINEFORM0 . This definition is used in calculation of more extensive agreement measures (e.g, Fleiss' kappa BIBREF49 ). There multiple ways of interpreting this formula: It indicates how many rater–rater pairs are in agreement, relative to the number of all possible rater–rater pairs. One can interpret this measure by a simple combinatorial notions. Suppose we have sets INLINEFORM0 which are pairwise disjunct and for each INLINEFORM1 let INLINEFORM2 . We choose randomly two elements from INLINEFORM3 . Then the probability that they are from the same set is the expressed by INLINEFORM4 . We can write INLINEFORM0 in terms of INLINEFORM1 which is the conventional Chi-Square statistic for testing if the vector of INLINEFORM2 values comes from the all-categories-equally-likely flat multinomial model.
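A minimal sketch of the per-subject agreement measure defined above (the per-item quantity that also underlies Fleiss' kappa), assuming the per-category rater counts for one subject are given as a list:

```python
def agreement(counts):
    """Per-subject agreement: fraction of rater-rater pairs that agree.

    counts[j] is the number of raters who assigned the subject to category j.
    Returns 1.0 when all raters chose the same category.
    """
    n = sum(counts)  # total number of raters for this subject
    if n < 2:
        raise ValueError("need at least two raters")
    return sum(c * (c - 1) for c in counts) / (n * (n - 1))

print(agreement([9, 0]))  # 1.0, full concentration
print(agreement([5, 4]))  # (5*4 + 4*3) / (9*8) = 32/72 ~ 0.44
```

The quantity is exactly the probability that two randomly chosen raters of the subject agree, which matches the combinatorial interpretation given above.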
Ethics, Gender, Human rights, Sports, Freedom of Speech, Society, Religion, Philosophy, Health, Culture, World, Politics, Environment, Education, Digital Freedom, Economy, Science and Law
9776156fc93daa36f4613df591e2b49827d25ad2
9776156fc93daa36f4613df591e2b49827d25ad2_0
Q: By how much, the proposed method improves BiDAF and DCN on SQuAD dataset? Text: Introduction Enabling machines to understand natural language is one of the key challenges to achieve artificially intelligent systems. Asking machines questions and getting a meaningful answer adds value to us since it automatizes knowledge acquisition efforts drastically. Apple's Siri and Amazon's Echo are two such examples of mass market products capable of machine comprehension that has led to a paradigm shift on how consumers' interact with machines. Over the last decade, research in the field of Natural Language Processing (NLP) has massively benefited from neural architectures. Those approaches have outperformed former state-of-the-art non-neural machine learning model families while needing far less human intervention since they don't require any manual feature engineering. A subset of NLP research focuses on building systems that are able to answer questions about a given document. To jointly expand the current best practice, the Stanford Question Answering Dataset (SQuAD) was setup as a basis for a global competition between different research groups BIBREF0 . SQuAD was published in 2016 and includes 100,000+ context-question-triplets on 500+ articles, significantly larger than previous reading comprehension datasets BIBREF1 . The context paragraphs were obtained from more then 500 Wikipedia articles and the answers were sourced with Amazon Mechanical Turk. Recently, researchers were able to make machines outperform humans (as of Jan 2018) BIBREF1 . Answers in this dataset are taken from the document itself and are not dynamically generated from scratch. Instead of generating text that provides a suitable answer, the objective is to find the boundaries in which the answer is contained in the document. The aim is to achieve close to human performance in generating correct answers from a context paragraph given any new unseen questions. To solve this problem of question answering, neural attention mechanisms have recently gained significant popularity by focusing on the most relevant area within a context paragraph, useful to answer the question BIBREF2 , BIBREF3 . Attention mechanisms have proven to be an important extension to achieve better results in NLP problems BIBREF4 . While earlier attention mechanisms for this task were usually uni-directional, obtaining a fixed size vector for each context word summarizing the question words, bi-directional attention flow applies an attention scheme in both directions (context-to-question as well as question-to-context). In this paper, we study two state-of-the-art neural architectures with an attention flow going in both directions called Bi-Directional Attention Flow (BiDAF) BIBREF5 and Dynamic Co-Attention network (DCN) BIBREF6 that were once themselves leading architectures in the SQuAD challenge. We would also like to propose yet another hybrid neural architecture that shows competitive results by bringing together these two models. More specifically, we combined the attention layer of both BiDAF and Co-Attention models. In addition to this, we propose another simpler model family called Double Cross Attention (DCA) which in itself performs better than both BiDAF and Co-Attention while giving similar performance as hybrid model. The objective of this paper is to do a comparative study of the performance of attention layer and not to optimize the performance of the overall system. 
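As a reference point for the span-prediction formulation described above, the following is a minimal sketch of how an answer span can be decoded from independent start/end probability distributions under the constraint start <= end. This is a generic decoding step, not code from any of the systems compared below, and the length limit is an assumed hyperparameter.

```python
import numpy as np

def best_span(p_start, p_end, max_len=30):
    """Pick (start, end) maximizing p_start[start] * p_end[end]
    subject to start <= end and a maximum span length."""
    best, best_score = (0, 0), -1.0
    for s, ps in enumerate(p_start):
        for e in range(s, min(s + max_len, len(p_end))):
            score = ps * p_end[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

context = "the model was trained on squad for five epochs".split()
p_start = np.full(len(context), 0.01); p_start[5] = 0.9
p_end = np.full(len(context), 0.01); p_end[5] = 0.9
s, e = best_span(p_start, p_end)
print(" ".join(context[s:e + 1]))  # "squad"
```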
Model We started our development by re-implementing the BiDAF and DCN models. We figured that these models individually enhanced the baseline performance significantly, so the hope was that a combination would eventually lead to superior results. Thereby we created our “hybrid" model, which we will subsequently explain shortly. In the following subsections, we describe each layer of our model in more detail. Word and Character Embedding Layer The word embedding layer maps each word in a context and question to a vector of fixed size using pre-trained GloVe embeddings BIBREF7 . First, we encode each word in the question and context with the pre-trained Glove embedding as given in the baseline code. Then we concatenate to the word encodings an optional Character-level Embedding with CNNs since it helps to deal with out-of-vocabulary words BIBREF5 , BIBREF8 . The joint concatenated encoding of words and characters is subsequently fed into the context and question encoding layer. Context and Question Encoding Layer Once we have a context and question embeddings, we use a Bidirectional GRU to translate these context and question embeddings into encodings. Whereas a simple LSTM/GRU cell encodes sequence data such as a sentences only from left-to-right, a bi-directional approach also parses a sentence from the end to the start. Both representations of a sequence are then usually concatenated and are assumed to encode the sequence structure more expressively ultimately leading to higher model performance. Attention Layer The attention layer is the modeling layer that eventually involves modeling the complex interactions between the context and question words. Next, we describe several different attention mechanisms that we implemented in our system. We implemented a complete BiDAF layer as suggested in the project handout and in the original paper BIBREF5 . Bi-directional attention flow approaches the machine comprehension challenge slightly differently. Instead of using an attention layer for transforming context inputs to fixed-size vectors, the BiDAF model computes the attention from both question-to-context as well as context-to-question and combines them effectively. The basic idea is essentially to obtain a similarity matrix to capture relations between context and question words and use this matrix to obtain context-to-question as well as question-to-context attention vectors. Finally, these attention vectors are concatenated to the context encodings in a specific way to obtain the output of the Bi-directional attention flow layer. In the original BiDAF paper, an additional Bidirectional-RNN is used to again encode these concatenated vectors. However, it didn't give any improvement in our setup, hence we chose to omit it in our final implementation. Dynamic Co-Attention Network layer (DCN), similar to BiDAF involves a two-way attention between the context and the question but unlike BiDAF, DCN involves a second-level attention computation over the previously computed attentions BIBREF6 . The dynamic co-attention network (DCN) is an end-to-end neural network architecture. The authors claim that the ability of attending to context inputs strongly depends on the query (question). The intuition behind that is also reflected by a human's ability to better answer a question on an input paragraph, when the question is known before reading the context itself, because then one can attend specifically to relevant information in the context. 
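Going back to the embedding layer described at the start of this Model section, below is a minimal sketch of concatenating word embeddings with a character-level CNN using max-over-time pooling. Dimensions and names are illustrative; the word table is randomly initialized here, whereas the described system loads pre-trained GloVe vectors.

```python
import torch
import torch.nn as nn

class WordCharEmbedding(nn.Module):
    def __init__(self, vocab_size=5000, char_vocab=100, word_dim=100,
                 char_dim=20, n_filters=100, kernel_size=5):
        super().__init__()
        # In practice the word table would be initialized from GloVe.
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.char_emb = nn.Embedding(char_vocab, char_dim)
        self.char_cnn = nn.Conv1d(char_dim, n_filters, kernel_size, padding=2)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq_len); char_ids: (batch, seq_len, word_len)
        b, t, w = char_ids.shape
        chars = self.char_emb(char_ids).view(b * t, w, -1).transpose(1, 2)
        char_feat = self.char_cnn(chars).max(dim=2).values.view(b, t, -1)
        return torch.cat([self.word_emb(word_ids), char_feat], dim=-1)

emb = WordCharEmbedding()
out = emb(torch.randint(0, 5000, (2, 12)), torch.randint(0, 100, (2, 12, 8)))
print(out.shape)  # torch.Size([2, 12, 200])
```

The concatenated 200-dimensional vectors would then be fed into the bidirectional GRU encoder described above.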
For details, please check the project handout, the original paper and our implementation code. In the original paper and the project handout, there was also a concept of sentinel vectors that was introduced but in our tests, it again didn't seem to provide any significant advantage, so we again chose to omit this as well in our final implementation. This is model that we propose and it builds heavily on aspects of the BiDAF BIBREF5 as well as the DCN models BIBREF6 . Since the attention outputs from both the BiDAF and DCN seem to have their merits, our idea was to combine them by concatenating both attentions to the original context states. The intuition was that the neural network should be able to train in order to use and pick them both effectively. Experimental results that we describe later, also verify our claim. Please check the code for exact implementation details In this section, we propose another simple idea called Double Cross Attention (DCA) which seem to provide better results compared to both BiDAF and Co-Attention while providing similar performance as concatenated hybrid model discussed in previous section. The motivation behind this approach is that first we pay attention to each context and question and then we attend those attentions with respect to each other in a slightly similar way as DCN. The intuition is that if iteratively read/attend both context and question, it should help us to search for answers easily. The DCA mechanism is explained graphically in Figure. 1 and the formal description of the layer is as follows. Assume we have context hidden states $\mathbf {c}_1, \mathbf {c}_2...,\mathbf {c}_N\in \mathbb {R}^{2h}$ and question hidden states $\mathbf {q}_1, \mathbf {q}_2...,\mathbf {q}_M\in \mathbb {R}^{2h}$ obtained after passing context and question embeddings through a bi-directional GRU. First, we compute a cross-attention matrix $\mathbf {S}\in \mathbb {R}^{N\times M}$ , which contains a similarity score $S_{ij}$ for each pair of context and question hidden states $(\mathbf {c}_i,\mathbf {q}_j)$ . We chose $S_{ij}=\mathbf {c}_i^T\mathbf {q}_j$ , since it is a parameter free approach to calculate attention but one can also construct this function with a trainable weight parameter (which can be shared in the subsequent step). First we obtain Context-to-Question (C2Q) attention vectors $\mathbf {a}_i$ as follows: $$\alpha _i = \text{softmax} \mathbf {S_{(i:)}}\in \mathbb {R}^M, \mathbf {a}_i = \sum _{j=1}^{M}\alpha _i^j\mathbf {q}_j \in \mathbb {R}^{2h}$$ (Eq. 9) Next, we also obtain Question-to-Context (Q2C) attention vectors $\mathbf {b}_j$ as follows: $$\beta _j = \text{softmax} \mathbf {S_{(:j)}}\in \mathbb {R}^N, \mathbf {b}_j = \sum _{i=1}^{N}\beta _j^i\mathbf {c}_i \in \mathbb {R}^{2h}$$ (Eq. 10) Then we compute a second-level cross attention matrix $\mathbf {R}\in \mathbb {R}^{N\times M}$ , which contains a similarity score $R_{ij}$ for each pair of context and question attention states $(\mathbf {a}_i, \mathbf {b}_j)$ . We again choose a simple dot product attention $R_{ij}=\mathbf {a}_i^T\mathbf {b}_j$ . Additionally, we obtain Context Attention-to-Question Attention(CA2QA) cross attention vectors $\mathbf {d}_i$ as follows: $$\gamma _i = \text{softmax} \mathbf {R_{(i:)}}\in \mathbb {R}^M, \mathbf {d}_i = \sum _{1}^{M}\gamma _i^j\mathbf {b}_j \in \mathbb {R}^{2h}$$ (Eq. 
11) Finally, we concatenate $\mathbf {c}_i$ , $\mathbf {a}_i$ and $\mathbf {d}_i$ as a new state $[\mathbf {c}_i; \mathbf {a}_i; \mathbf {d}_i]$ and pass it through a biLSTM layer to obtain double query attended encoded context states as follows. $$\lbrace \mathbf {u}_1,.... \mathbf {u}_N\rbrace = \text{biLSTM} (\lbrace [\mathbf {c}_1; \mathbf {a}_1; \mathbf {d}_1],.... [\mathbf {c}_N; \mathbf {a}_N; \mathbf {d}_N]\rbrace )$$ (Eq. 12) Finally all attention layer outputs are concatenated and fed into a Softmax layer that computes the probability distributions for the start and end token independently, as it is done in the baseline implementation. Experiments Before we started the enhancements of the baseline model, we studied the SQuAD data set. Figure. 2 shows the distribution of the answer, question and context lengths as well as the relative position of the answer span inside a context. Furthermore, we counted the different question types. We found that most answers have a length less than 5 words. Additionally, a question usually consists of 5-20 words. Moreover, we noticed that on average a context is of length 120 (visualization excluded due to lack of space). Furthermore, answers for a question tend to be represented by spans of context words that are at the beginning of a context. Finally, we can see that “what" questions build the majority of questions, almost the same amount as all other question types combined. Results In this section, we report the results of our experiments. To ensure the generality of our model, we used Dropout technique for regularizing neural networks. We start our experiments with default hyperparameters: embedding size of 100, batch size 100, hidden size 200, learning rate of 0.001 and a dropout rate of 0.15. For character level encoding, default character embedding size is 20, kernel size is 5 and number of filters are 100. For each architecture, we report the evaluation metrics F1 and EM (Exact match) computed on the dev set. The effect of character embedding on the BiDAF model is reported in Table 1 . We can notice that character embedding boosts up the performance by roughly 2% for both EM and F1 score. This is expected since character embedding can help deal with non-dictionary words by giving them a unique embedding. Next, we report the results of the model performances for baseline, BiDAF, Co-Attention, Hybrid and DCA attention mechanisms in Table 1 . Notice that none of these architectures were optimized for EM/F1 scores but we are more interested in difference between these mechanisms for a fixed set of hyperparameters. Hybrid and DCA have a slight edge over plain BiDAF and Co-Attention module as per the results. Co-Attention with char embedding was giving us worse results so we put the best numbers we got for Co-Attention there. We would like to point out that the BiDAF model here doesn't include BiLSTM layer as present in original paper because the BiLSTM didn't give any advantage except for slowing down the training. Selected tensorboard visualizations are also shown in Figure 3 . Visualizations demonstrate that both hybrid and DCA models perform better than vanilla Co-Attention and BiDAF attention mechanisms and reduce the losses faster and increase the dev F1/EM scores faster as well. Hyperparameter Tuning We made a brief attempt to do a bit of hyperparameter tuning on our proposed DCA model and we report the results in Table 3 . 
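Before the hyperparameter discussion continues below, here is a minimal numpy sketch of the Double Cross Attention computation in Eqs. 9-12 above. The biLSTM and output softmax layers are omitted, shapes follow the notation in the text, and this is an illustrative re-implementation rather than the authors' code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def double_cross_attention(C, Q):
    """C: (N, 2h) context states, Q: (M, 2h) question states.
    Returns the concatenated states [c_i; a_i; d_i] of shape (N, 6h)."""
    S = C @ Q.T                   # first-level similarity, (N, M)
    A = softmax(S, axis=1) @ Q    # C2Q attention vectors a_i, (N, 2h)
    B = softmax(S, axis=0).T @ C  # Q2C attention vectors b_j, (M, 2h)
    R = A @ B.T                   # second-level similarity, (N, M)
    D = softmax(R, axis=1) @ B    # CA2QA attention vectors d_i, (N, 2h)
    return np.concatenate([C, A, D], axis=1)

N, M, h2 = 7, 4, 6                # toy sizes with 2h = 6
out = double_cross_attention(np.random.randn(N, h2), np.random.randn(M, h2))
print(out.shape)                  # (7, 18)
```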
Ideally, hyperparameter tuning for neural network architectures should be done using bayesian hyperparameter optimization but due to the lack of time we tried to do a random search on a small set of hyperparameters that we guessed could be more suitable. While, we didn't find any significantly good set of parameters, we noticed that reducing the hidden size has a minor effect on improving the performance. This is probably because it reduces the system complexity which makes the model easier to train. Error Analysis In Table 4 , we briefly provide error analysis on a small sample of results for hybrid and DCA models and try to explain the model behavior. Conclusions and Future work In this paper, we studied and implemented two well known attention mechanisms namely, BiDAF and Co-Attention. We also introduced a simple combination of these two schemes called Hybrid attention mechanism that outperforms both BiDAF and Co-Attention. In addition to this, we propose our own attention mechanism called Double Cross Attention that gives similar results as the Hybrid model. The objective of the paper was primarily to study and compare the two aforementioned popular attention schemes on their own and not to chase the leaderboard scores. In particular, we isolated the attention layer and suggested our own improvements to it. The comparative results between different schemes are obtained for same set of hyperparameters. To improve the F1/EM scores of the overall system, a number of enhancement techniques could be used. For e.g. while we simply concatenated character and word embeddings, more advanced techniques to effectively combine them have been suggested in the literature BIBREF9 . Also. a number of other attention mechanisms have been suggested which need to be investigated as well BIBREF10 , BIBREF11 . Another possible improvement is to properly condition the end position on the start position of the answer span. An LSTM based solution was used in the original BiDAF paper. Exponential moving average of weights and ensembling are additional common methods to further fine-tune and improve the results. Hierarchical Maxout Network as mentioned in the co-attention paper could be a replacement to our simple Softmax output layer to improve the performance even further. There are also a few possible directions where DCA model can further be improved/extended. We can continue recursively calculating the cross attention weights and combine them in some more intuitive or non-linear way. While, we didn't optimize for the number of parameters, it is possible to reduce the overall number of trainable parameters by appropriately sharing weights between layers when possible. All of the above mentioned suggestions, we see as enhancement opportunities (some we partially already tried to implement but could not finally manage to include in final running model). As a final project for the cs224n course, we found the task challenging but we were extremely satisfied with our own personal learning curve. We are sure that with even more time, we could significantly improve our model from the baseline enhancement we achieved so far. All in all, we believe that the experience of this project, will be of utmost value for our future professional work. Acknowledgements First of all, we would like to thank the course instructor Richard Socher for making the class highly informative and a great learning experience. We would also like to thank the TAs for prompt feedback and insightful discussions. 
Lastly, we would like to thank the fellow students who regularly helped each other on the course forum with any questions.
In terms of F1 score, the Hybrid approach improved over BiDAF and DCN by 23.47% and 1.39%, respectively. The DCA approach improved over BiDAF and DCN by 23.2% and 1.12%, respectively.
03a911049b6d7df2b6391ed5bc129a3b65133bcd
03a911049b6d7df2b6391ed5bc129a3b65133bcd_0
Q: Do they report results only on English datasets? Text: Introduction Irony and sarcasm in dialogue constitute a highly creative use of language signaled by a large range of situational, semantic, pragmatic and lexical cues. Previous work draws attention to the use of both hyperbole and rhetorical questions in conversation as distinct types of lexico-syntactic cues defining diverse classes of sarcasm BIBREF0 . Theoretical models posit that a single semantic basis underlies sarcasm's diversity of form, namely "a contrast" between expected and experienced events, giving rise to a contrast between what is said and a literal description of the actual situation BIBREF1 , BIBREF2 . This semantic characterization has not been straightforward to operationalize computationally for sarcasm in dialogue. Riloffetal13 operationalize this notion for sarcasm in tweets, achieving good results. Joshietal15 develop several incongruity features to capture it, but although they improve performance on tweets, their features do not yield improvements for dialogue. Previous work on the Internet Argument Corpus (IAC) 1.0 dataset aimed to develop a high-precision classifier for sarcasm in order to bootstrap a much larger corpus BIBREF3 , but was only able to obtain a precision of just 0.62, with a best F of 0.57, not high enough for bootstrapping BIBREF4 , BIBREF5 . Justoetal14 experimented with the same corpus, using supervised learning, and achieved a best precision of 0.66 and a best F of 0.70. Joshietal15's explicit congruity features achieve precision around 0.70 and best F of 0.64 on a subset of IAC 1.0. We decided that we need a larger and more diverse corpus of sarcasm in dialogue. It is difficult to efficiently gather sarcastic data, because only about 12% of the utterances in written online debate forums dialogue are sarcastic BIBREF6 , and it is difficult to achieve high reliability for sarcasm annotation BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . Thus, our contributions are: Creating a Diverse Sarcasm Corpus There has been relatively little theoretical work on sarcasm in dialogue that has had access to a large corpus of naturally occurring examples. Gibbs00 analyzes a corpus of 62 conversations between friends and argues that a robust theory of verbal irony must account for the large diversity in form. He defines several subtypes, including rhetorical questions and hyperbole: Other categories of irony defined by Gibbs00 include understatements, jocularity, and sarcasm (which he defines as a critical/mocking form of irony). Other work has also tackled jocularity and humor, using different approaches for data aggregation, including filtering by Twitter hashtags, or analyzing laugh-tracks from recordings BIBREF11 , BIBREF12 . Previous work has not, however, attempted to operationalize these subtypes in any concrete way. Here we describe our methods for creating a corpus for generic sarcasm (Gen) (Sec. SECREF11 ), rhetorical questions (RQ), and hyperbole (Hyp) (Sec. SECREF15 ) using data from the Internet Argument Corpus (IAC 2.0). Table TABREF9 provides examples of sarcastic and not-sarcastic posts from the corpus we create. Table TABREF10 summarizes the final composition of our sarcasm corpus. Generic Dataset (Gen) We first replicated the pattern-extraction experiments of LukinWalker13 on their dataset using AutoSlog-TS BIBREF13 , a weakly-supervised pattern learner that extracts lexico-syntactic patterns associated with the input data. 
We set up the learner to extract patterns for both sarcastic and not-sarcastic utterances. Our first discovery is that we can classify not-sarcastic posts with very high precision, ranging between 80-90%. Because our main goal is to build a larger, more diverse corpus of sarcasm, we use the high-precision not-sarcastic patterns extracted by AutoSlog-TS to create a "not-sarcastic" filter. We did this by randomly selecting a new set of 30K posts (restricting to posts with between 10 and 150 words) from IAC 2.0 BIBREF14 , and applying the high-precision not-sarcastic patterns from AutoSlog-TS to filter out any posts that contain at least one not-sarcastic cue. We end up filtering out two-thirds of the pool, only keeping posts that did not contain any of our high-precision not-sarcastic cues. We acknowledge that this may also filter out sarcastic posts, but we expect it to increase the ratio of sarcastic posts in the remaining pool. We put out the remaining 11,040 posts on Mechanical Turk. As in LukinWalker13, we present the posts in "quote-response" pairs, where the response post to be annotated is presented in the context of its “dialogic parent”, another post earlier in the thread, or a quote from another post earlier in the thread BIBREF15 . In the task instructions, annotators are presented with a definition of sarcasm, followed by one example of a quote-response pair that clearly contains sarcasm, and one pair that clearly does not. Each task consists of 20 quote-response pairs that follow the instructions. Figure FIGREF13 shows the instructions and layout of a single quote-response pair presented to annotators. As in LukinWalker13 and Walkeretal12d, annotators are asked a binary question: Is any part of the response to this quote sarcastic?. To help filter out unreliable annotators, we create a qualifier consisting of a set of 20 manually-selected quote-response pairs (10 that should receive a sarcastic label and 10 that should receive a not-sarcastic label). A Turker must pass the qualifier with a score above 70% to participate in our sarcasm annotations tasks. Our baseline ratio of sarcasm in online debate forums dialogue is the estimated 12% sarcastic posts in the IAC, which was found previously by Walker et al. by gathering annotations for sarcasm, agreement, emotional language, attacks, and nastiness from a subset of around 20K posts from the IAC across various topics BIBREF6 . Similarly, in his study of recorded conversation among friends, Gibbs cites 8% sarcastic utterances among all conversational turns BIBREF0 . We choose a conservative threshold: a post is only added to the sarcastic set if at least 6 out of 9 annotators labeled it sarcastic. Of the 11,040 posts we put out for annotation, we thus obtain 2,220 new posts, giving us a ratio of about 20% sarcasm – significantly higher than our baseline of 12%. We choose this conservative threshold to ensure the quality of our annotations, and we leave aside posts that 5 out of 9 annotators label as sarcastic for future work – noting that we can get even higher ratios of sarcasm by including them (up to 31%). The percentage agreement between each annotator and the majority vote is 80%. We then expand this set, using only 3 highly-reliable Turkers (based on our first round of annotations), giving them an exclusive sarcasm qualification to do additional HITs. We gain an additional 1,040 posts for each class when using majority agreement (at least 2 out of 3 sarcasm labels) for the additional set (to add to the 2,220 original posts). 
The average percent agreement with the majority vote is 89% for these three annotators. We supplement our sarcastic data with 2,360 not-sarcastic posts from the original data by BIBREF3 that follow our 150-word length restriction, and complete the set with 900 posts that were filtered out by our not-sarcastic filter – resulting in a total of 3,260 posts per class (6,520 total posts). Rows 1 and 2 of Table TABREF9 show examples of posts that are labeled sarcastic in our final generic sarcasm set. Using our filtering method, we are able to reduce the number of posts annotated from our original 30K to around 11K, achieving a percentage of 20% sarcastic posts, even though we choose to use a conservative threshold of at least 6 out of 9 sarcasm labels. Since the number of posts being annotated is only a third of the original set size, this method reduces annotation effort, time, and cost, and helps us shift the distribution of sarcasm to more efficiently expand our dataset than would otherwise be possible. Rhetorical Questions and Hyperbole The goal of collecting additional corpora for rhetorical questions and hyperbole is to increase the diversity of the corpus, and to allow us to explore the semantic differences between sarcastic and not-sarcastic utterances when particular lexico-syntactic cues are held constant. We hypothesize that identifying surface-level cues that are instantiated in both sarcastic and not sarcastic posts will force learning models to find deeper semantic cues to distinguish between the classes. Using a combination of findings in the theoretical literature, and observations of sarcasm patterns in our generic set, we developed a regex pattern matcher that runs against the 400K unannotated posts in the IAC 2.0 database and retrieves matching posts, only pulling posts that have parent posts and a maximum of 150 words. Table TABREF16 only shows a small subset of the “more successful” regex patterns we defined for each class. Cue annotation experiments. After running a large number of retrieval experiments with our regex pattern matcher, we select batches of the resulting posts that mix different cue classes to put out for annotation, in such a way as to not allow the annotators to determine what regex cues were used. We then successively put out various batches for annotation by 5 of our highly-qualified annotators, in order to determine what percentage of posts with these cues are sarcastic. Table TABREF16 summarizes the results for a sample set of cues, showing the number of posts found containing the cue, the subset that we put out for annotation, and the percentage of posts labeled sarcastic in the annotation experiments. For example, for the hyperbolic cue "wow", 977 utterances with the cue were found, 153 were annotated, and 44% of those were found to be sarcastic (i.e. 56% were found to be not-sarcastic). Posts with the cue "oh wait" had the highest sarcasm ratio, at 87%. It is the distinction between the sarcastic and not-sarcastic instances that we are specifically interested in. We describe the corpus collection process for each subclass below. It is important to note that using particular cues (regex) to retrieve sarcastic posts does not result in posts whose only cue is the regex pattern. We demonstrate this quantitatively in Sec. SECREF4 . Sarcasm is characterized by multiple lexical and morphosyntactic cues: these include the use of intensifiers, elongated words, quotations, false politeness, negative evaluations, emoticons, and tag questions inter alia. 
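As a rough sketch of the cue-retrieval step described above, the snippet below scans posts with a small set of hypothetical regex cues, keeping only posts that have a parent and at most 150 words; the cues shown are placeholders in the spirit of Table TABREF16, not the paper's actual pattern inventory.

```python
import re

# Hypothetical cues, not the exact inventory used in the paper.
CUES = {
    "oh wait": re.compile(r"\boh,? wait\b", re.I),
    "wow":     re.compile(r"\bwow\b", re.I),
    "really?": re.compile(r"\breally\?", re.I),
}

def retrieve_by_cue(posts, max_words=150):
    """Map each cue to the posts that match it (posts must have a parent)."""
    matches = {name: [] for name in CUES}
    for post in posts:
        if post.get("parent") is None or len(post["text"].split()) > max_words:
            continue
        for name, rx in CUES.items():
            if rx.search(post["text"]):
                matches[name].append(post)
    return matches

posts = [{"text": "Oh wait, I forgot you are always right.", "parent": 12},
         {"text": "Wow, what a compelling argument.", "parent": 7},
         {"text": "No parent post here.", "parent": None}]
for cue, hits in retrieve_by_cue(posts).items():
    print(cue, len(hits))
```

The retrieved posts per cue would then be sent out for annotation to estimate each cue's sarcasm ratio, as in Table TABREF16.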
Table TABREF17 shows how sarcastic utterances often contain combinations of multiple indicators, each playing a role in the overall sarcastic tone of the post. Rhetorical Questions. There is no previous work on distinguishing sarcastic from non-sarcastic uses of rhetorical questions (RQs). RQs are syntactically formulated as a question, but function as an indirect assertion BIBREF16 . The polarity of the question implies an assertion of the opposite polarity, e.g. Can you read? implies You can't read. RQs are prevalent in persuasive discourse, and are frequently used ironically BIBREF17 , BIBREF18 , BIBREF0 . Previous work focuses on their formal semantic properties BIBREF19 , or distinguishing RQs from standard questions BIBREF20 . We hypothesized that we could find RQs in abundance by searching for questions in the middle of a post, that are followed by a statement, using the assumption that questions followed by a statement are unlikely to be standard information-seeking questions. We test this assumption by randomly extracting 100 potential RQs as per our definition and putting them out on Mechanical Turk to 3 annotators, asking them whether or not the questions (displayed with their following statement) were rhetorical. According to majority vote, 75% of the posts were rhetorical. We thus use this "middle of post" heuristic to obviate the need to gather manual annotations for RQs, and developed regex patterns to find RQs that were more likely to be sarcastic. A sample of the patterns, number of matches in the corpus, the numbers we had annotated, and the percent that are sarcastic after annotation are summarized in Table TABREF16 . We extract 357 posts following the intermediate question-answer pairs heuristic from our generic (Gen) corpus. We then supplement these with posts containing RQ cues from our cue-annotation experiments: posts that received 3 out of 5 sarcastic labels in the experiments were considered sarcastic, and posts that received 2 or fewer sarcastic labels were considered not-sarcastic. Our final rhetorical questions corpus consists of 851 posts per class (1,702 total posts). Table TABREF18 shows some examples of rhetorical questions and self-answering from our corpus. Hyperbole. Hyperbole (Hyp) has been studied as an independent form of figurative language, that can coincide with ironic intent BIBREF21 , BIBREF22 , and previous computational work on sarcasm typically includes features to capture hyperbole BIBREF23 . KreuzRoberts95 describe a standard frame for hyperbole in English where an adverb modifies an extreme, positive adjective, e.g. "That was absolutely amazing!" or "That was simply the most incredible dining experience in my entire life." ColstonObrien00b provide a theoretical framework that explains why hyperbole is so strongly associated with sarcasm. Hyperbole exaggerates the literal situation, introducing a discrepancy between the "truth" and what is said, as a matter of degree. A key observation is that this is a type of contrast BIBREF24 , BIBREF1 . In their framework: An event or situation evokes a scale; An event can be placed on that scale; The utterance about the event contrasts with actual scale placement. Fig. FIGREF22 illustrates that the scales that can be evoked range from negative to positive, undesirable to desirable, unexpected to expected and certain to uncertain. Hyperbole moves the strength of an assertion further up or down the scale from the literal meaning, the degree of movement corresponds to the degree of contrast. 
Depending on what they modify, adverbial intensifiers like totally, absolutely, incredibly shift the strength of the assertion to extreme negative or positive. Table TABREF23 shows examples of hyperbole from our corpus, showcasing the effect that intensifiers have in terms of strengthening the emotional evaluation of the response. To construct a balanced corpus of sarcastic and not-sarcastic utterances with hyperbole, we developed a number of patterns based on the literature and our observations of the generic corpus. The patterns, number matches on the whole corpus, the numbers we had annotated and the percent that are sarcastic after annotation are summarized in Table TABREF16 . Again, we extract a small subset of examples from our Gen corpus (30 per class), and supplement them with posts that contain our hyperbole cues (considering them sarcastic if they received at least 3/5 sarcastic labels, not-sarcastic otherwise). The final hyperbole dataset consists of 582 posts per class (1,164 posts in total). To recap, Table TABREF10 summarizes the total number of posts for each subset of our final corpus. Learning Experiments Our primary goal is not to optimize classification results, but to explore how results vary across different subcorpora and corpus properties. We also aim to demonstrate that the quality of our corpus makes it more straightforward to achieve high classification performance. We apply both supervised learning using SVM (from Scikit-Learn BIBREF25 ) and weakly-supervised linguistic pattern learning using AutoSlog-TS BIBREF13 . These reveal different aspects of the corpus. Supervised Learning. We restrict our supervised experiments to a default linear SVM learner with Stochastic Gradient Descent (SGD) training and L2 regularization, available in the SciKit-Learn toolkit BIBREF25 . We use 10-fold cross-validation, and only two types of features: n-grams and Word2Vec word embeddings. We expect Word2Vec to be able to capture semantic generalizations that n-grams do not BIBREF26 , BIBREF27 . The n-gram features include unigrams, bigrams, and trigrams, including sequences of punctuation (for example, ellipses or "!!!"), and emoticons. We use GoogleNews Word2Vec features BIBREF28 . Table TABREF25 summarizes the results of our supervised learning experiments on our datasets using 10-fold cross validation. The data is balanced evenly between the sarcastic and not-sarcastic classes, and the best F-Measures for each class are shown in bold. The default W2V model, (trained on Google News), gives the best overall F-measure of 0.74 on the Gen corpus for the sarcastic class, while n-grams give the best not-sarcastic F-measure of 0.73. Both of these results are higher F than previously reported for classifying sarcasm in dialogue, and we might expect that feature engineering could yield even greater performance. On the RQ corpus, n-grams provide the best F-measure for sarcastic at 0.70 and not-sarcastic at 0.71. Although W2V performs well, the n-gram model includes features involving repeated punctuation and emoticons, which the W2V model excludes. Punctuation and emoticons are often used as distinctive feature of sarcasm (i.e. "Oh, really?!?!", [emoticon-rolleyes]). For the Hyp corpus, the best F-measure for both the sarcastic and not-sarcastic classes again comes from n-grams, with F-measures of 0.65 and 0.68 respectively. It is interesting to note that the overall results of the Hyp data are lower than those for Gen and RQs, likely due to the smaller size of the Hyp dataset. 
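A minimal sketch of the n-gram plus linear SVM setup described above is given below. The toy texts stand in for the balanced corpus, and the actual feature set additionally includes punctuation sequences, emoticons, and Word2Vec embeddings, which are omitted here.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score

# Toy stand-in for the balanced sarcastic / not-sarcastic posts.
texts = ["Oh, really?!?! I had no idea.", "The study reports a 12% rate.",
         "Wow, what a brilliant plan.", "The corpus contains 6,520 posts."] * 25
labels = [1, 0, 1, 0] * 25

# Linear SVM trained with SGD and L2 regularization over uni/bi/trigrams.
clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 3), lowercase=True),
    SGDClassifier(loss="hinge", penalty="l2", random_state=0),
)
scores = cross_val_score(clf, texts, labels, cv=10, scoring="f1")
print("mean F1 over 10 folds: %.2f" % scores.mean())
```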
To examine the effect of dataset size, we compare F-measure (using the same 10-fold cross-validation setup) for each dataset while holding the number of posts per class constant. Figure FIGREF26 shows the performance of each of the Gen, RQ, and Hyp datasets at intervals of 100 posts per class (up to the maximum size of 582 posts per class for Hyp, and 851 posts per class for RQ). From the graph, we can see that as a general trend, the datasets benefit from larger dataset sizes. Interestingly, the results for the RQ dataset are very comparable to those of Gen. The Gen dataset eventually gets the highest sarcastic F-measure (0.74) at its full dataset size of 3,260 posts per class. Weakly-Supervised Learning. AutoSlog-TS is a weakly supervised pattern learner that only requires training documents labeled broadly as sarcastic or not-sarcastic. AutoSlog-TS uses a set of syntactic templates to define different types of linguistic expressions. The left-hand side of Table TABREF28 lists each pattern template and the right-hand side illustrates a specific lexico-syntactic pattern (in bold) that represents an instantiation of each general pattern template for learning sarcastic patterns in our data. In addition to these 17 templates, we added patterns to AutoSlog for adjective-noun, adverb-adjective and adjective-adjective, because these patterns are frequent in hyperbolic sarcastic utterances. The examples in Table TABREF28 show that Colston's notion of contrast shows up in many learned patterns, and that the source of the contrast is highly variable. For example, Row 1 implies a contrast with a set of people who are not your mother. Row 5 contrasts what you were asked with what you've (just) done. Row 10 contrasts chapter 12 and chapter 13 BIBREF30 . Row 11 contrasts what I am allowed vs. what you have to do. AutoSlog-TS computes statistics on the strength of association of each pattern with each class, i.e. P(sarcastic INLINEFORM0 INLINEFORM1 ) and P(not-sarcastic INLINEFORM2 INLINEFORM3 ), along with the pattern's overall frequency. We define two tuning parameters for each class: INLINEFORM4 , the frequency with which a pattern occurs, INLINEFORM5 , the probability with which a pattern is associated with the given class. We do a grid-search, testing the performance of our patterns thresholds from INLINEFORM6 = {2-6} in intervals of 1, INLINEFORM7 ={0.60-0.85} in intervals of 0.05. Once we extract the subset of patterns passing our thresholds, we search for these patterns in the posts in our development set, classifying a post as a given class if it contains INLINEFORM8 ={1, 2, 3} of the thresholded patterns. For more detail, see BIBREF13 , BIBREF31 . An advantage of AutoSlog-TS is that it supports systematic exploration of recall and precision tradeoffs, by selecting pattern sets using different parameters. The parameters have to be tuned on a training set, so we divide each dataset into 80% training and 20% test. Figure FIGREF30 shows the precision (x-axis) vs. recall (y-axis) tradeoffs on the test set, when optimizing our three parameters for precision. Interestingly, the subcorpora for RQ and Hyp can get higher precision than is possible for Gen. When precision is fixed at 0.75, the recall for RQ is 0.07 and the recall for Hyp is 0.08. This recall is low, but given that each retrieved post provides multiple cues, and that datasets on the web are huge, these P values make it possible to bootstrap these two classes in future. 
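A minimal sketch of the threshold-and-match classification step described above is shown below. The pattern statistics are assumed to come from a learner such as AutoSlog-TS, the patterns and numbers here are made up, and simple substring matching stands in for the learner's lexico-syntactic pattern matching.

```python
def select_patterns(stats, theta_f=3, theta_p=0.75):
    """Keep patterns whose frequency and class probability pass the thresholds.

    stats: dict pattern -> (freq, P(sarcastic | pattern)); values are made up.
    """
    return {p for p, (freq, prob) in stats.items()
            if freq >= theta_f and prob >= theta_p}

def classify(post, patterns, min_hits=1):
    """Label a post sarcastic if it contains at least min_hits patterns."""
    hits = sum(1 for p in patterns if p in post.lower())
    return hits >= min_hits

stats = {"oh wait": (42, 0.87), "thank you for": (15, 0.78), "the data": (90, 0.31)}
sarcastic_patterns = select_patterns(stats)
print(sarcastic_patterns)  # {'oh wait', 'thank you for'}
print(classify("Oh wait, I guess facts do not matter.", sarcastic_patterns))  # True
```

Sweeping theta_f, theta_p, and min_hits over a grid, as described above, is what traces out the precision/recall tradeoff curves reported for the RQ and Hyp subcorpora.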
Linguistic Analysis Here we aim to provide a linguistic characterization of the differences between the sarcastic and the not-sarcastic classes. We use the AutoSlog-TS pattern learner to generate patterns automatically, and the Stanford dependency parser to examine relationships between arguments BIBREF13 , BIBREF32 . Table TABREF31 shows the number of sarcastic patterns we extract with AutoSlog-TS, with a frequency of at least 2 and a probability of at least 0.75 for each corpus. We learn many novel lexico-syntactic cue patterns that are not the regex that we search for. We discuss specific novel learned patterns for each class below. Generic Sarcasm. We first examine the different patterns learned on the Gen dataset. Table TABREF29 show examples of extracted patterns for each class. We observe that the not-sarcastic patterns appear to capture technical and scientific language, while the sarcastic patterns tend to capture subjective language that is not topic-specific. We observe an abundance of adjective and adverb patterns for the sarcastic class, although we do not use adjective and adverb patterns in our regex retrieval method. Instead, such cues co-occur with the cues we search for, expanding our pattern inventory as we show in Table TABREF31 . Rhetorical Questions. We notice that while the not-sarcastic patterns generated for RQs are similar to the topic-specific not-sarcastic patterns we find in the general dataset, there are some interesting features of the sarcastic patterns that are more unique to the RQs. Many of our sarcastic questions focus specifically on attacks on the mental abilities of the addressee. This generalization is made clear when we extract and analyze the verb, subject, and object arguments using the Stanford dependency parser BIBREF32 for the questions in the RQ dataset. Table TABREF32 shows a few examples of the relations we extract. Hyperbole. One common pattern for hyperbole involves adverbs and adjectives, as noted above. We did not use this pattern to retrieve hyperbole, but because each hyperbolic sarcastic utterance contains multiple cues, we learn an expanded class of patterns for hyperbole. Table TABREF33 illustrates some of the new adverb adjective patterns that are frequent, high-precision indicators of sarcasm. We learn a number of verbal patterns that we had not previously associated with hyperbole, as shown in Table TABREF34 . Interestingly, many of these instantiate the observations of CanoMora2009 on hyperbole and its related semantic fields: creating contrast by exclusion, e.g. no limit and no way, or by expanding a predicated class, e.g. everyone knows. Many of them are also contrastive. Table TABREF33 shows just a few examples, such as though it in no way and so much knowledge. Conclusion and Future Work We have developed a large scale, highly diverse corpus of sarcasm using a combination of linguistic analysis and crowd-sourced annotation. We use filtering methods to skew the distribution of sarcasm in posts to be annotated to 20-31%, much higher than the estimated 12% distribution of sarcasm in online debate forums. We note that when using Mechanical Turk for sarcasm annotation, it is possible that the level of agreement signals how lexically-signaled the sarcasm is, so we settle on a conservative threshold (at least 6 out of 9 annotators agreeing that a post is sarcastic) to ensure the quality of our annotations. 
We operationalize lexico-syntactic cues prevalent in sarcasm, finding cues that are highly indicative of sarcasm, with ratios up to 87%. Our final corpus consists of data representing generic sarcasm, rhetorical questions, and hyperbole. We conduct supervised learning experiments to highlight the quality of our corpus, achieving a best F of 0.74 using very simple feature sets. We use weakly-supervised learning to show that we can also achieve high precision (albeit with a low recall) for our rhetorical questions and hyperbole datasets; much higher than the best precision that is possible for the Generic dataset. These high precision values may be used for bootstrapping these two classes in the future. We also present qualitative analysis of the different characteristics of rhetorical questions and hyperbole in sarcastic acts, and of the distinctions between sarcastic/not-sarcastic cues in generic sarcasm data. Our analysis shows that the forms of sarcasm and its underlying semantic contrast in dialogue are highly diverse. In future work, we will focus on feature engineering to improve results on the task of sarcasm classification for both our generic data and subclasses. We will also begin to explore evaluation on real-world data distributions, where the ratio of sarcastic/not-sarcastic posts is inherently unbalanced. As we continue our analysis of the generic and fine-grained categories of sarcasm, we aim to better characterize and model the great diversity of sarcasm in dialogue. Acknowledgments This work was funded by NSF CISE RI 1302668, under the Robust Intelligence Program.
Unanswerable
f5e571207d9f4701b4d01199ef7d0bfcfa2c0316
f5e571207d9f4701b4d01199ef7d0bfcfa2c0316_0
Q: What are the linguistic differences between each class? Text: Introduction Irony and sarcasm in dialogue constitute a highly creative use of language signaled by a large range of situational, semantic, pragmatic and lexical cues. Previous work draws attention to the use of both hyperbole and rhetorical questions in conversation as distinct types of lexico-syntactic cues defining diverse classes of sarcasm BIBREF0 . Theoretical models posit that a single semantic basis underlies sarcasm's diversity of form, namely "a contrast" between expected and experienced events, giving rise to a contrast between what is said and a literal description of the actual situation BIBREF1 , BIBREF2 . This semantic characterization has not been straightforward to operationalize computationally for sarcasm in dialogue. Riloffetal13 operationalize this notion for sarcasm in tweets, achieving good results. Joshietal15 develop several incongruity features to capture it, but although they improve performance on tweets, their features do not yield improvements for dialogue. Previous work on the Internet Argument Corpus (IAC) 1.0 dataset aimed to develop a high-precision classifier for sarcasm in order to bootstrap a much larger corpus BIBREF3 , but was only able to obtain a precision of just 0.62, with a best F of 0.57, not high enough for bootstrapping BIBREF4 , BIBREF5 . Justoetal14 experimented with the same corpus, using supervised learning, and achieved a best precision of 0.66 and a best F of 0.70. Joshietal15's explicit congruity features achieve precision around 0.70 and best F of 0.64 on a subset of IAC 1.0. We decided that we need a larger and more diverse corpus of sarcasm in dialogue. It is difficult to efficiently gather sarcastic data, because only about 12% of the utterances in written online debate forums dialogue are sarcastic BIBREF6 , and it is difficult to achieve high reliability for sarcasm annotation BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . Thus, our contributions are: Creating a Diverse Sarcasm Corpus There has been relatively little theoretical work on sarcasm in dialogue that has had access to a large corpus of naturally occurring examples. Gibbs00 analyzes a corpus of 62 conversations between friends and argues that a robust theory of verbal irony must account for the large diversity in form. He defines several subtypes, including rhetorical questions and hyperbole: Other categories of irony defined by Gibbs00 include understatements, jocularity, and sarcasm (which he defines as a critical/mocking form of irony). Other work has also tackled jocularity and humor, using different approaches for data aggregation, including filtering by Twitter hashtags, or analyzing laugh-tracks from recordings BIBREF11 , BIBREF12 . Previous work has not, however, attempted to operationalize these subtypes in any concrete way. Here we describe our methods for creating a corpus for generic sarcasm (Gen) (Sec. SECREF11 ), rhetorical questions (RQ), and hyperbole (Hyp) (Sec. SECREF15 ) using data from the Internet Argument Corpus (IAC 2.0). Table TABREF9 provides examples of sarcastic and not-sarcastic posts from the corpus we create. Table TABREF10 summarizes the final composition of our sarcasm corpus. Generic Dataset (Gen) We first replicated the pattern-extraction experiments of LukinWalker13 on their dataset using AutoSlog-TS BIBREF13 , a weakly-supervised pattern learner that extracts lexico-syntactic patterns associated with the input data. 
We set up the learner to extract patterns for both sarcastic and not-sarcastic utterances. Our first discovery is that we can classify not-sarcastic posts with very high precision, ranging between 80-90%. Because our main goal is to build a larger, more diverse corpus of sarcasm, we use the high-precision not-sarcastic patterns extracted by AutoSlog-TS to create a "not-sarcastic" filter. We did this by randomly selecting a new set of 30K posts (restricting to posts with between 10 and 150 words) from IAC 2.0 BIBREF14 , and applying the high-precision not-sarcastic patterns from AutoSlog-TS to filter out any posts that contain at least one not-sarcastic cue. We end up filtering out two-thirds of the pool, only keeping posts that did not contain any of our high-precision not-sarcastic cues. We acknowledge that this may also filter out sarcastic posts, but we expect it to increase the ratio of sarcastic posts in the remaining pool. We put out the remaining 11,040 posts on Mechanical Turk. As in LukinWalker13, we present the posts in "quote-response" pairs, where the response post to be annotated is presented in the context of its “dialogic parent”, another post earlier in the thread, or a quote from another post earlier in the thread BIBREF15 . In the task instructions, annotators are presented with a definition of sarcasm, followed by one example of a quote-response pair that clearly contains sarcasm, and one pair that clearly does not. Each task consists of 20 quote-response pairs that follow the instructions. Figure FIGREF13 shows the instructions and layout of a single quote-response pair presented to annotators. As in LukinWalker13 and Walkeretal12d, annotators are asked a binary question: Is any part of the response to this quote sarcastic?. To help filter out unreliable annotators, we create a qualifier consisting of a set of 20 manually-selected quote-response pairs (10 that should receive a sarcastic label and 10 that should receive a not-sarcastic label). A Turker must pass the qualifier with a score above 70% to participate in our sarcasm annotations tasks. Our baseline ratio of sarcasm in online debate forums dialogue is the estimated 12% sarcastic posts in the IAC, which was found previously by Walker et al. by gathering annotations for sarcasm, agreement, emotional language, attacks, and nastiness from a subset of around 20K posts from the IAC across various topics BIBREF6 . Similarly, in his study of recorded conversation among friends, Gibbs cites 8% sarcastic utterances among all conversational turns BIBREF0 . We choose a conservative threshold: a post is only added to the sarcastic set if at least 6 out of 9 annotators labeled it sarcastic. Of the 11,040 posts we put out for annotation, we thus obtain 2,220 new posts, giving us a ratio of about 20% sarcasm – significantly higher than our baseline of 12%. We choose this conservative threshold to ensure the quality of our annotations, and we leave aside posts that 5 out of 9 annotators label as sarcastic for future work – noting that we can get even higher ratios of sarcasm by including them (up to 31%). The percentage agreement between each annotator and the majority vote is 80%. We then expand this set, using only 3 highly-reliable Turkers (based on our first round of annotations), giving them an exclusive sarcasm qualification to do additional HITs. We gain an additional 1,040 posts for each class when using majority agreement (at least 2 out of 3 sarcasm labels) for the additional set (to add to the 2,220 original posts). 
The average percent agreement with the majority vote is 89% for these three annotators. We supplement our sarcastic data with 2,360 not-sarcastic posts from the original data by BIBREF3 that follow our 150-word length restriction, and complete the set with 900 posts that were filtered out by our not-sarcastic filter – resulting in a total of 3,260 posts per class (6,520 total posts). Rows 1 and 2 of Table TABREF9 show examples of posts that are labeled sarcastic in our final generic sarcasm set. Using our filtering method, we are able to reduce the number of posts annotated from our original 30K to around 11K, achieving a percentage of 20% sarcastic posts, even though we choose to use a conservative threshold of at least 6 out of 9 sarcasm labels. Since the number of posts being annotated is only a third of the original set size, this method reduces annotation effort, time, and cost, and helps us shift the distribution of sarcasm to more efficiently expand our dataset than would otherwise be possible. Rhetorical Questions and Hyperbole The goal of collecting additional corpora for rhetorical questions and hyperbole is to increase the diversity of the corpus, and to allow us to explore the semantic differences between sarcastic and not-sarcastic utterances when particular lexico-syntactic cues are held constant. We hypothesize that identifying surface-level cues that are instantiated in both sarcastic and not sarcastic posts will force learning models to find deeper semantic cues to distinguish between the classes. Using a combination of findings in the theoretical literature, and observations of sarcasm patterns in our generic set, we developed a regex pattern matcher that runs against the 400K unannotated posts in the IAC 2.0 database and retrieves matching posts, only pulling posts that have parent posts and a maximum of 150 words. Table TABREF16 only shows a small subset of the “more successful” regex patterns we defined for each class. Cue annotation experiments. After running a large number of retrieval experiments with our regex pattern matcher, we select batches of the resulting posts that mix different cue classes to put out for annotation, in such a way as to not allow the annotators to determine what regex cues were used. We then successively put out various batches for annotation by 5 of our highly-qualified annotators, in order to determine what percentage of posts with these cues are sarcastic. Table TABREF16 summarizes the results for a sample set of cues, showing the number of posts found containing the cue, the subset that we put out for annotation, and the percentage of posts labeled sarcastic in the annotation experiments. For example, for the hyperbolic cue "wow", 977 utterances with the cue were found, 153 were annotated, and 44% of those were found to be sarcastic (i.e. 56% were found to be not-sarcastic). Posts with the cue "oh wait" had the highest sarcasm ratio, at 87%. It is the distinction between the sarcastic and not-sarcastic instances that we are specifically interested in. We describe the corpus collection process for each subclass below. It is important to note that using particular cues (regex) to retrieve sarcastic posts does not result in posts whose only cue is the regex pattern. We demonstrate this quantitatively in Sec. SECREF4 . Sarcasm is characterized by multiple lexical and morphosyntactic cues: these include the use of intensifiers, elongated words, quotations, false politeness, negative evaluations, emoticons, and tag questions inter alia. 
Table TABREF17 shows how sarcastic utterances often contain combinations of multiple indicators, each playing a role in the overall sarcastic tone of the post. Rhetorical Questions. There is no previous work on distinguishing sarcastic from non-sarcastic uses of rhetorical questions (RQs). RQs are syntactically formulated as a question, but function as an indirect assertion BIBREF16 . The polarity of the question implies an assertion of the opposite polarity, e.g. Can you read? implies You can't read. RQs are prevalent in persuasive discourse, and are frequently used ironically BIBREF17 , BIBREF18 , BIBREF0 . Previous work focuses on their formal semantic properties BIBREF19 , or distinguishing RQs from standard questions BIBREF20 . We hypothesized that we could find RQs in abundance by searching for questions in the middle of a post, that are followed by a statement, using the assumption that questions followed by a statement are unlikely to be standard information-seeking questions. We test this assumption by randomly extracting 100 potential RQs as per our definition and putting them out on Mechanical Turk to 3 annotators, asking them whether or not the questions (displayed with their following statement) were rhetorical. According to majority vote, 75% of the posts were rhetorical. We thus use this "middle of post" heuristic to obviate the need to gather manual annotations for RQs, and developed regex patterns to find RQs that were more likely to be sarcastic. A sample of the patterns, number of matches in the corpus, the numbers we had annotated, and the percent that are sarcastic after annotation are summarized in Table TABREF16 . We extract 357 posts following the intermediate question-answer pairs heuristic from our generic (Gen) corpus. We then supplement these with posts containing RQ cues from our cue-annotation experiments: posts that received 3 out of 5 sarcastic labels in the experiments were considered sarcastic, and posts that received 2 or fewer sarcastic labels were considered not-sarcastic. Our final rhetorical questions corpus consists of 851 posts per class (1,702 total posts). Table TABREF18 shows some examples of rhetorical questions and self-answering from our corpus. Hyperbole. Hyperbole (Hyp) has been studied as an independent form of figurative language, that can coincide with ironic intent BIBREF21 , BIBREF22 , and previous computational work on sarcasm typically includes features to capture hyperbole BIBREF23 . KreuzRoberts95 describe a standard frame for hyperbole in English where an adverb modifies an extreme, positive adjective, e.g. "That was absolutely amazing!" or "That was simply the most incredible dining experience in my entire life." ColstonObrien00b provide a theoretical framework that explains why hyperbole is so strongly associated with sarcasm. Hyperbole exaggerates the literal situation, introducing a discrepancy between the "truth" and what is said, as a matter of degree. A key observation is that this is a type of contrast BIBREF24 , BIBREF1 . In their framework: An event or situation evokes a scale; An event can be placed on that scale; The utterance about the event contrasts with actual scale placement. Fig. FIGREF22 illustrates that the scales that can be evoked range from negative to positive, undesirable to desirable, unexpected to expected and certain to uncertain. Hyperbole moves the strength of an assertion further up or down the scale from the literal meaning, the degree of movement corresponds to the degree of contrast. 
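Returning to the rhetorical questions, the "middle of post" heuristic for spotting candidate RQs is easy to prototype. In the Python sketch below, the punctuation-based sentence splitting is my own simplification; the authors do not spell out how question boundaries were detected.

```python
import re

def has_intermediate_rhetorical_question(text):
    """Flag a post containing a candidate rhetorical question.

    A question that ends somewhere in the middle of the post and is immediately
    followed by a declarative statement is treated as a candidate RQ. The
    punctuation-based sentence splitting is a simplification; the paper does
    not specify the exact mechanics.
    """
    sentences = [s.strip() for s in re.findall(r"[^.!?]+[.!?]", text)]
    for i, sentence in enumerate(sentences[:-1]):      # ignore a post-final question
        if sentence.endswith("?") and not sentences[i + 1].endswith("?"):
            return True
    return False

print(has_intermediate_rhetorical_question(
    "Can you read? Clearly not, or you would have checked the source."))  # True
```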
Depending on what they modify, adverbial intensifiers like totally, absolutely, incredibly shift the strength of the assertion to extreme negative or positive. Table TABREF23 shows examples of hyperbole from our corpus, showcasing the effect that intensifiers have in terms of strengthening the emotional evaluation of the response. To construct a balanced corpus of sarcastic and not-sarcastic utterances with hyperbole, we developed a number of patterns based on the literature and our observations of the generic corpus. The patterns, the number of matches on the whole corpus, the numbers we had annotated, and the percent that are sarcastic after annotation are summarized in Table TABREF16 . Again, we extract a small subset of examples from our Gen corpus (30 per class), and supplement them with posts that contain our hyperbole cues (considering them sarcastic if they received at least 3/5 sarcastic labels, not-sarcastic otherwise). The final hyperbole dataset consists of 582 posts per class (1,164 posts in total). To recap, Table TABREF10 summarizes the total number of posts for each subset of our final corpus. Learning Experiments Our primary goal is not to optimize classification results, but to explore how results vary across different subcorpora and corpus properties. We also aim to demonstrate that the quality of our corpus makes it more straightforward to achieve high classification performance. We apply both supervised learning using SVM (from Scikit-Learn BIBREF25 ) and weakly-supervised linguistic pattern learning using AutoSlog-TS BIBREF13 . These reveal different aspects of the corpus. Supervised Learning. We restrict our supervised experiments to a default linear SVM learner with Stochastic Gradient Descent (SGD) training and L2 regularization, available in the SciKit-Learn toolkit BIBREF25 . We use 10-fold cross-validation, and only two types of features: n-grams and Word2Vec word embeddings. We expect Word2Vec to be able to capture semantic generalizations that n-grams do not BIBREF26 , BIBREF27 . The n-gram features include unigrams, bigrams, and trigrams, including sequences of punctuation (for example, ellipses or "!!!"), and emoticons. We use GoogleNews Word2Vec features BIBREF28 . Table TABREF25 summarizes the results of our supervised learning experiments on our datasets using 10-fold cross-validation. The data is balanced evenly between the sarcastic and not-sarcastic classes, and the best F-Measures for each class are shown in bold. The default W2V model (trained on Google News) gives the best overall F-measure of 0.74 on the Gen corpus for the sarcastic class, while n-grams give the best not-sarcastic F-measure of 0.73. Both of these results are higher F-measures than previously reported for classifying sarcasm in dialogue, and we might expect that feature engineering could yield even greater performance. On the RQ corpus, n-grams provide the best F-measure for sarcastic at 0.70 and not-sarcastic at 0.71. Although W2V performs well, the n-gram model includes features involving repeated punctuation and emoticons, which the W2V model excludes. Punctuation and emoticons are often used as distinctive features of sarcasm (e.g., "Oh, really?!?!", [emoticon-rolleyes]). For the Hyp corpus, the best F-measure for both the sarcastic and not-sarcastic classes again comes from n-grams, with F-measures of 0.65 and 0.68 respectively. It is interesting to note that the overall results of the Hyp data are lower than those for Gen and RQs, likely due to the smaller size of the Hyp dataset. 
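To make the supervised setup concrete, here is a minimal Python sketch of the n-gram configuration described above, using scikit-learn's SGDClassifier with hinge loss and an L2 penalty as one way to realize a linear SVM trained with SGD. The toy posts, the token pattern that keeps punctuation runs, and the reduced fold count are illustrative assumptions; a Word2Vec variant would swap the vectorizer for averaged pre-trained GoogleNews embeddings.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Toy stand-in data; the real experiments use the balanced sarcasm subcorpora.
posts = [
    "Oh, really?!?! What a genius idea...",
    "Wow, I am SO impressed by your evidence.",
    "Great, another totally unbiased source. Brilliant.",
    "The study reports a 12% rate of sarcasm in debate forums.",
    "Carbon dating is explained in the linked article.",
    "I agree with the previous post about the fossil record.",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = sarcastic, 0 = not-sarcastic

# Unigrams, bigrams, and trigrams; the token pattern also keeps runs of
# punctuation (e.g. "!!!" or "?!?!") as tokens, mimicking the paper's features.
ngrams = CountVectorizer(ngram_range=(1, 3), token_pattern=r"[\w']+|[!?.,;:]+")

clf = Pipeline([
    ("feats", ngrams),
    # hinge loss + L2 penalty: a linear SVM trained with SGD
    ("svm", SGDClassifier(loss="hinge", penalty="l2", random_state=0)),
])

# The paper uses 10-fold cross-validation; 3 folds here because the toy set is tiny.
print(cross_val_score(clf, posts, labels, cv=3, scoring="f1").mean())
```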
To examine the effect of dataset size, we compare F-measure (using the same 10-fold cross-validation setup) for each dataset while holding the number of posts per class constant. Figure FIGREF26 shows the performance of each of the Gen, RQ, and Hyp datasets at intervals of 100 posts per class (up to the maximum size of 582 posts per class for Hyp, and 851 posts per class for RQ). From the graph, we can see that as a general trend, the datasets benefit from larger dataset sizes. Interestingly, the results for the RQ dataset are very comparable to those of Gen. The Gen dataset eventually gets the highest sarcastic F-measure (0.74) at its full dataset size of 3,260 posts per class. Weakly-Supervised Learning. AutoSlog-TS is a weakly supervised pattern learner that only requires training documents labeled broadly as sarcastic or not-sarcastic. AutoSlog-TS uses a set of syntactic templates to define different types of linguistic expressions. The left-hand side of Table TABREF28 lists each pattern template and the right-hand side illustrates a specific lexico-syntactic pattern (in bold) that represents an instantiation of each general pattern template for learning sarcastic patterns in our data. In addition to these 17 templates, we added patterns to AutoSlog for adjective-noun, adverb-adjective and adjective-adjective, because these patterns are frequent in hyperbolic sarcastic utterances. The examples in Table TABREF28 show that Colston's notion of contrast shows up in many learned patterns, and that the source of the contrast is highly variable. For example, Row 1 implies a contrast with a set of people who are not your mother. Row 5 contrasts what you were asked with what you've (just) done. Row 10 contrasts chapter 12 and chapter 13 BIBREF30 . Row 11 contrasts what I am allowed vs. what you have to do. AutoSlog-TS computes statistics on the strength of association of each pattern with each class, i.e. P(sarcastic | pattern) and P(not-sarcastic | pattern), along with the pattern's overall frequency. We define two tuning parameters for each class: θ_freq, the frequency with which a pattern occurs, and θ_prob, the probability with which a pattern is associated with the given class. We do a grid search, testing the performance of our pattern thresholds over θ_freq = {2-6} in intervals of 1 and θ_prob = {0.60-0.85} in intervals of 0.05. Once we extract the subset of patterns passing our thresholds, we search for these patterns in the posts in our development set, classifying a post as a given class if it contains at least N = {1, 2, 3} of the thresholded patterns. For more detail, see BIBREF13 , BIBREF31 . An advantage of AutoSlog-TS is that it supports systematic exploration of recall and precision tradeoffs, by selecting pattern sets using different parameters. The parameters have to be tuned on a training set, so we divide each dataset into 80% training and 20% test. Figure FIGREF30 shows the precision (x-axis) vs. recall (y-axis) tradeoffs on the test set, when optimizing our three parameters for precision. Interestingly, the subcorpora for RQ and Hyp can get higher precision than is possible for Gen. When precision is fixed at 0.75, the recall for RQ is 0.07 and the recall for Hyp is 0.08. This recall is low, but given that each retrieved post provides multiple cues, and that datasets on the web are huge, these P values make it possible to bootstrap these two classes in the future. 
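The threshold-and-classify scheme applied to the AutoSlog-TS output can be sketched independently of the pattern learner itself. In the Python sketch below, the pattern statistics and the development data are hypothetical inputs: stats maps each pattern to its frequency and its estimated P(class | pattern), the two quantities swept by the grid search.

```python
from itertools import product

def select_patterns(stats, min_freq, min_prob):
    """Keep patterns whose frequency and P(class | pattern) pass both thresholds.

    stats is a hypothetical input mapping pattern -> (frequency, probability).
    """
    return {p for p, (freq, prob) in stats.items() if freq >= min_freq and prob >= min_prob}

def classify(post_patterns, selected, min_hits):
    """Assign the class if the post contains at least min_hits selected patterns."""
    return len(post_patterns & selected) >= min_hits

def grid_search(stats, dev_posts, dev_labels):
    """Sweep the three tuning parameters and return the most precise setting.

    dev_posts is a list of pattern sets per post; dev_labels is 1 for the
    target class and 0 otherwise (both hypothetical development data).
    """
    freqs = range(2, 7)                            # {2-6} in steps of 1
    probs = [0.60, 0.65, 0.70, 0.75, 0.80, 0.85]   # {0.60-0.85} in steps of 0.05
    hit_counts = [1, 2, 3]                         # minimum number of matched patterns
    best = (0.0, None)
    for min_freq, min_prob, min_hits in product(freqs, probs, hit_counts):
        selected = select_patterns(stats, min_freq, min_prob)
        preds = [classify(p, selected, min_hits) for p in dev_posts]
        tp = sum(1 for pred, y in zip(preds, dev_labels) if pred and y == 1)
        fp = sum(1 for pred, y in zip(preds, dev_labels) if pred and y == 0)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        if precision > best[0]:
            best = (precision, (min_freq, min_prob, min_hits))
    return best

stats = {"<subj> be_sarcastic": (5, 0.90), "<subj> carbon_date": (4, 0.10)}
print(grid_search(stats, [{"<subj> be_sarcastic"}, {"<subj> carbon_date"}], [1, 0]))
```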
Linguistic Analysis Here we aim to provide a linguistic characterization of the differences between the sarcastic and the not-sarcastic classes. We use the AutoSlog-TS pattern learner to generate patterns automatically, and the Stanford dependency parser to examine relationships between arguments BIBREF13 , BIBREF32 . Table TABREF31 shows the number of sarcastic patterns we extract with AutoSlog-TS, with a frequency of at least 2 and a probability of at least 0.75 for each corpus. We learn many novel lexico-syntactic cue patterns that are not the regexes that we search for. We discuss specific novel learned patterns for each class below. Generic Sarcasm. We first examine the different patterns learned on the Gen dataset. Table TABREF29 shows examples of extracted patterns for each class. We observe that the not-sarcastic patterns appear to capture technical and scientific language, while the sarcastic patterns tend to capture subjective language that is not topic-specific. We observe an abundance of adjective and adverb patterns for the sarcastic class, although we do not use adjective and adverb patterns in our regex retrieval method. Instead, such cues co-occur with the cues we search for, expanding our pattern inventory as we show in Table TABREF31 . Rhetorical Questions. We notice that while the not-sarcastic patterns generated for RQs are similar to the topic-specific not-sarcastic patterns we find in the general dataset, there are some interesting features of the sarcastic patterns that are more specific to the RQs. Many of our sarcastic questions focus specifically on attacks on the mental abilities of the addressee. This generalization is made clear when we extract and analyze the verb, subject, and object arguments using the Stanford dependency parser BIBREF32 for the questions in the RQ dataset. Table TABREF32 shows a few examples of the relations we extract. Hyperbole. One common pattern for hyperbole involves adverbs and adjectives, as noted above. We did not use this pattern to retrieve hyperbole, but because each hyperbolic sarcastic utterance contains multiple cues, we learn an expanded class of patterns for hyperbole. Table TABREF33 illustrates some of the new adverb-adjective patterns that are frequent, high-precision indicators of sarcasm. We learn a number of verbal patterns that we had not previously associated with hyperbole, as shown in Table TABREF34 . Interestingly, many of these instantiate the observations of CanoMora2009 on hyperbole and its related semantic fields: creating contrast by exclusion, e.g. no limit and no way, or by expanding a predicated class, e.g. everyone knows. Many of them are also contrastive. Table TABREF33 shows just a few examples, such as though it in no way and so much knowledge. Conclusion and Future Work We have developed a large-scale, highly diverse corpus of sarcasm using a combination of linguistic analysis and crowd-sourced annotation. We use filtering methods to skew the distribution of sarcasm in posts to be annotated to 20-31%, much higher than the estimated 12% distribution of sarcasm in online debate forums. We note that when using Mechanical Turk for sarcasm annotation, it is possible that the level of agreement signals how lexically-signaled the sarcasm is, so we settle on a conservative threshold (at least 6 out of 9 annotators agreeing that a post is sarcastic) to ensure the quality of our annotations. 
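The verb, subject, and object analysis of the rhetorical questions described above can be reproduced with any dependency parser. The paper uses the Stanford parser; the sketch below substitutes spaCy purely for illustration (it assumes the en_core_web_sm model is installed), and the example question is invented rather than taken from the corpus.

```python
import spacy

# The paper uses the Stanford dependency parser; spaCy is only a stand-in here
# and requires the "en_core_web_sm" model to be installed.
nlp = spacy.load("en_core_web_sm")

def svo_triples(question):
    """Return rough (subject, verb, object) triples for a question string."""
    triples = []
    for token in nlp(question):
        if token.pos_ == "VERB":
            subjects = [c.text for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c.text for c in token.children if c.dep_ in ("dobj", "obj", "attr")]
            for subj in subjects:
                for obj in objects or [None]:
                    triples.append((subj, token.lemma_, obj))
    return triples

# Invented example question, not drawn from the corpus.
print(svo_triples("Do you even understand basic biology?"))  # [('you', 'understand', 'biology')]
```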
We operationalize lexico-syntactic cues prevalent in sarcasm, finding cues that are highly indicative of sarcasm, with ratios up to 87%. Our final corpus consists of data representing generic sarcasm, rhetorical questions, and hyperbole. We conduct supervised learning experiments to highlight the quality of our corpus, achieving a best F of 0.74 using very simple feature sets. We use weakly-supervised learning to show that we can also achieve high precision (albeit with a low recall) for our rhetorical questions and hyperbole datasets; much higher than the best precision that is possible for the Generic dataset. These high precision values may be used for bootstrapping these two classes in the future. We also present qualitative analysis of the different characteristics of rhetorical questions and hyperbole in sarcastic acts, and of the distinctions between sarcastic/not-sarcastic cues in generic sarcasm data. Our analysis shows that the forms of sarcasm and its underlying semantic contrast in dialogue are highly diverse. In future work, we will focus on feature engineering to improve results on the task of sarcasm classification for both our generic data and subclasses. We will also begin to explore evaluation on real-world data distributions, where the ratio of sarcastic/not-sarcastic posts is inherently unbalanced. As we continue our analysis of the generic and fine-grained categories of sarcasm, we aim to better characterize and model the great diversity of sarcasm in dialogue. Acknowledgments This work was funded by NSF CISE RI 1302668, under the Robust Intelligence Program.
Each class has different patterns in adjectives, adverbs and verbs for sarcastic and non-sarcastic classes
c5ac07528cf99d353413c9d9ea61a1a699dd783e
c5ac07528cf99d353413c9d9ea61a1a699dd783e_0
Q: What simple features are used? Text: Introduction Irony and sarcasm in dialogue constitute a highly creative use of language signaled by a large range of situational, semantic, pragmatic and lexical cues. Previous work draws attention to the use of both hyperbole and rhetorical questions in conversation as distinct types of lexico-syntactic cues defining diverse classes of sarcasm BIBREF0 . Theoretical models posit that a single semantic basis underlies sarcasm's diversity of form, namely "a contrast" between expected and experienced events, giving rise to a contrast between what is said and a literal description of the actual situation BIBREF1 , BIBREF2 . This semantic characterization has not been straightforward to operationalize computationally for sarcasm in dialogue. Riloffetal13 operationalize this notion for sarcasm in tweets, achieving good results. Joshietal15 develop several incongruity features to capture it, but although they improve performance on tweets, their features do not yield improvements for dialogue. Previous work on the Internet Argument Corpus (IAC) 1.0 dataset aimed to develop a high-precision classifier for sarcasm in order to bootstrap a much larger corpus BIBREF3 , but was only able to obtain a precision of just 0.62, with a best F of 0.57, not high enough for bootstrapping BIBREF4 , BIBREF5 . Justoetal14 experimented with the same corpus, using supervised learning, and achieved a best precision of 0.66 and a best F of 0.70. Joshietal15's explicit congruity features achieve precision around 0.70 and best F of 0.64 on a subset of IAC 1.0. We decided that we need a larger and more diverse corpus of sarcasm in dialogue. It is difficult to efficiently gather sarcastic data, because only about 12% of the utterances in written online debate forums dialogue are sarcastic BIBREF6 , and it is difficult to achieve high reliability for sarcasm annotation BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . Thus, our contributions are: Creating a Diverse Sarcasm Corpus There has been relatively little theoretical work on sarcasm in dialogue that has had access to a large corpus of naturally occurring examples. Gibbs00 analyzes a corpus of 62 conversations between friends and argues that a robust theory of verbal irony must account for the large diversity in form. He defines several subtypes, including rhetorical questions and hyperbole: Other categories of irony defined by Gibbs00 include understatements, jocularity, and sarcasm (which he defines as a critical/mocking form of irony). Other work has also tackled jocularity and humor, using different approaches for data aggregation, including filtering by Twitter hashtags, or analyzing laugh-tracks from recordings BIBREF11 , BIBREF12 . Previous work has not, however, attempted to operationalize these subtypes in any concrete way. Here we describe our methods for creating a corpus for generic sarcasm (Gen) (Sec. SECREF11 ), rhetorical questions (RQ), and hyperbole (Hyp) (Sec. SECREF15 ) using data from the Internet Argument Corpus (IAC 2.0). Table TABREF9 provides examples of sarcastic and not-sarcastic posts from the corpus we create. Table TABREF10 summarizes the final composition of our sarcasm corpus. Generic Dataset (Gen) We first replicated the pattern-extraction experiments of LukinWalker13 on their dataset using AutoSlog-TS BIBREF13 , a weakly-supervised pattern learner that extracts lexico-syntactic patterns associated with the input data. 
unigrams, bigrams, and trigrams, including sequences of punctuation, Word2Vec word embeddings
6608f171b3e0dcdcd51b3e0c697d6e5003ab5f02
6608f171b3e0dcdcd51b3e0c697d6e5003ab5f02_0
Q: What lexico-syntactic cues are used to retrieve sarcastic utterances? Text: Introduction Irony and sarcasm in dialogue constitute a highly creative use of language signaled by a large range of situational, semantic, pragmatic and lexical cues. Previous work draws attention to the use of both hyperbole and rhetorical questions in conversation as distinct types of lexico-syntactic cues defining diverse classes of sarcasm BIBREF0 . Theoretical models posit that a single semantic basis underlies sarcasm's diversity of form, namely "a contrast" between expected and experienced events, giving rise to a contrast between what is said and a literal description of the actual situation BIBREF1 , BIBREF2 . This semantic characterization has not been straightforward to operationalize computationally for sarcasm in dialogue. Riloffetal13 operationalize this notion for sarcasm in tweets, achieving good results. Joshietal15 develop several incongruity features to capture it, but although they improve performance on tweets, their features do not yield improvements for dialogue. Previous work on the Internet Argument Corpus (IAC) 1.0 dataset aimed to develop a high-precision classifier for sarcasm in order to bootstrap a much larger corpus BIBREF3 , but was only able to obtain a precision of just 0.62, with a best F of 0.57, not high enough for bootstrapping BIBREF4 , BIBREF5 . Justoetal14 experimented with the same corpus, using supervised learning, and achieved a best precision of 0.66 and a best F of 0.70. Joshietal15's explicit congruity features achieve precision around 0.70 and best F of 0.64 on a subset of IAC 1.0. We decided that we need a larger and more diverse corpus of sarcasm in dialogue. It is difficult to efficiently gather sarcastic data, because only about 12% of the utterances in written online debate forums dialogue are sarcastic BIBREF6 , and it is difficult to achieve high reliability for sarcasm annotation BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . Thus, our contributions are: Creating a Diverse Sarcasm Corpus There has been relatively little theoretical work on sarcasm in dialogue that has had access to a large corpus of naturally occurring examples. Gibbs00 analyzes a corpus of 62 conversations between friends and argues that a robust theory of verbal irony must account for the large diversity in form. He defines several subtypes, including rhetorical questions and hyperbole: Other categories of irony defined by Gibbs00 include understatements, jocularity, and sarcasm (which he defines as a critical/mocking form of irony). Other work has also tackled jocularity and humor, using different approaches for data aggregation, including filtering by Twitter hashtags, or analyzing laugh-tracks from recordings BIBREF11 , BIBREF12 . Previous work has not, however, attempted to operationalize these subtypes in any concrete way. Here we describe our methods for creating a corpus for generic sarcasm (Gen) (Sec. SECREF11 ), rhetorical questions (RQ), and hyperbole (Hyp) (Sec. SECREF15 ) using data from the Internet Argument Corpus (IAC 2.0). Table TABREF9 provides examples of sarcastic and not-sarcastic posts from the corpus we create. Table TABREF10 summarizes the final composition of our sarcasm corpus. Generic Dataset (Gen) We first replicated the pattern-extraction experiments of LukinWalker13 on their dataset using AutoSlog-TS BIBREF13 , a weakly-supervised pattern learner that extracts lexico-syntactic patterns associated with the input data. 
We set up the learner to extract patterns for both sarcastic and not-sarcastic utterances. Our first discovery is that we can classify not-sarcastic posts with very high precision, ranging between 80-90%. Because our main goal is to build a larger, more diverse corpus of sarcasm, we use the high-precision not-sarcastic patterns extracted by AutoSlog-TS to create a "not-sarcastic" filter. We did this by randomly selecting a new set of 30K posts (restricting to posts with between 10 and 150 words) from IAC 2.0 BIBREF14 , and applying the high-precision not-sarcastic patterns from AutoSlog-TS to filter out any posts that contain at least one not-sarcastic cue. We end up filtering out two-thirds of the pool, only keeping posts that did not contain any of our high-precision not-sarcastic cues. We acknowledge that this may also filter out sarcastic posts, but we expect it to increase the ratio of sarcastic posts in the remaining pool. We put out the remaining 11,040 posts on Mechanical Turk. As in LukinWalker13, we present the posts in "quote-response" pairs, where the response post to be annotated is presented in the context of its “dialogic parent”, another post earlier in the thread, or a quote from another post earlier in the thread BIBREF15 . In the task instructions, annotators are presented with a definition of sarcasm, followed by one example of a quote-response pair that clearly contains sarcasm, and one pair that clearly does not. Each task consists of 20 quote-response pairs that follow the instructions. Figure FIGREF13 shows the instructions and layout of a single quote-response pair presented to annotators. As in LukinWalker13 and Walkeretal12d, annotators are asked a binary question: Is any part of the response to this quote sarcastic?. To help filter out unreliable annotators, we create a qualifier consisting of a set of 20 manually-selected quote-response pairs (10 that should receive a sarcastic label and 10 that should receive a not-sarcastic label). A Turker must pass the qualifier with a score above 70% to participate in our sarcasm annotations tasks. Our baseline ratio of sarcasm in online debate forums dialogue is the estimated 12% sarcastic posts in the IAC, which was found previously by Walker et al. by gathering annotations for sarcasm, agreement, emotional language, attacks, and nastiness from a subset of around 20K posts from the IAC across various topics BIBREF6 . Similarly, in his study of recorded conversation among friends, Gibbs cites 8% sarcastic utterances among all conversational turns BIBREF0 . We choose a conservative threshold: a post is only added to the sarcastic set if at least 6 out of 9 annotators labeled it sarcastic. Of the 11,040 posts we put out for annotation, we thus obtain 2,220 new posts, giving us a ratio of about 20% sarcasm – significantly higher than our baseline of 12%. We choose this conservative threshold to ensure the quality of our annotations, and we leave aside posts that 5 out of 9 annotators label as sarcastic for future work – noting that we can get even higher ratios of sarcasm by including them (up to 31%). The percentage agreement between each annotator and the majority vote is 80%. We then expand this set, using only 3 highly-reliable Turkers (based on our first round of annotations), giving them an exclusive sarcasm qualification to do additional HITs. We gain an additional 1,040 posts for each class when using majority agreement (at least 2 out of 3 sarcasm labels) for the additional set (to add to the 2,220 original posts). 
The average percent agreement with the majority vote is 89% for these three annotators. We supplement our sarcastic data with 2,360 not-sarcastic posts from the original data by BIBREF3 that follow our 150-word length restriction, and complete the set with 900 posts that were filtered out by our not-sarcastic filter – resulting in a total of 3,260 posts per class (6,520 total posts). Rows 1 and 2 of Table TABREF9 show examples of posts that are labeled sarcastic in our final generic sarcasm set. Using our filtering method, we are able to reduce the number of posts annotated from our original 30K to around 11K, achieving a percentage of 20% sarcastic posts, even though we choose to use a conservative threshold of at least 6 out of 9 sarcasm labels. Since the number of posts being annotated is only a third of the original set size, this method reduces annotation effort, time, and cost, and helps us shift the distribution of sarcasm to more efficiently expand our dataset than would otherwise be possible. Rhetorical Questions and Hyperbole The goal of collecting additional corpora for rhetorical questions and hyperbole is to increase the diversity of the corpus, and to allow us to explore the semantic differences between sarcastic and not-sarcastic utterances when particular lexico-syntactic cues are held constant. We hypothesize that identifying surface-level cues that are instantiated in both sarcastic and not sarcastic posts will force learning models to find deeper semantic cues to distinguish between the classes. Using a combination of findings in the theoretical literature, and observations of sarcasm patterns in our generic set, we developed a regex pattern matcher that runs against the 400K unannotated posts in the IAC 2.0 database and retrieves matching posts, only pulling posts that have parent posts and a maximum of 150 words. Table TABREF16 only shows a small subset of the “more successful” regex patterns we defined for each class. Cue annotation experiments. After running a large number of retrieval experiments with our regex pattern matcher, we select batches of the resulting posts that mix different cue classes to put out for annotation, in such a way as to not allow the annotators to determine what regex cues were used. We then successively put out various batches for annotation by 5 of our highly-qualified annotators, in order to determine what percentage of posts with these cues are sarcastic. Table TABREF16 summarizes the results for a sample set of cues, showing the number of posts found containing the cue, the subset that we put out for annotation, and the percentage of posts labeled sarcastic in the annotation experiments. For example, for the hyperbolic cue "wow", 977 utterances with the cue were found, 153 were annotated, and 44% of those were found to be sarcastic (i.e. 56% were found to be not-sarcastic). Posts with the cue "oh wait" had the highest sarcasm ratio, at 87%. It is the distinction between the sarcastic and not-sarcastic instances that we are specifically interested in. We describe the corpus collection process for each subclass below. It is important to note that using particular cues (regex) to retrieve sarcastic posts does not result in posts whose only cue is the regex pattern. We demonstrate this quantitatively in Sec. SECREF4 . Sarcasm is characterized by multiple lexical and morphosyntactic cues: these include the use of intensifiers, elongated words, quotations, false politeness, negative evaluations, emoticons, and tag questions inter alia. 
Table TABREF17 shows how sarcastic utterances often contain combinations of multiple indicators, each playing a role in the overall sarcastic tone of the post. Rhetorical Questions. There is no previous work on distinguishing sarcastic from non-sarcastic uses of rhetorical questions (RQs). RQs are syntactically formulated as a question, but function as an indirect assertion BIBREF16 . The polarity of the question implies an assertion of the opposite polarity, e.g. Can you read? implies You can't read. RQs are prevalent in persuasive discourse, and are frequently used ironically BIBREF17 , BIBREF18 , BIBREF0 . Previous work focuses on their formal semantic properties BIBREF19 , or distinguishing RQs from standard questions BIBREF20 . We hypothesized that we could find RQs in abundance by searching for questions in the middle of a post, that are followed by a statement, using the assumption that questions followed by a statement are unlikely to be standard information-seeking questions. We test this assumption by randomly extracting 100 potential RQs as per our definition and putting them out on Mechanical Turk to 3 annotators, asking them whether or not the questions (displayed with their following statement) were rhetorical. According to majority vote, 75% of the posts were rhetorical. We thus use this "middle of post" heuristic to obviate the need to gather manual annotations for RQs, and developed regex patterns to find RQs that were more likely to be sarcastic. A sample of the patterns, number of matches in the corpus, the numbers we had annotated, and the percent that are sarcastic after annotation are summarized in Table TABREF16 . We extract 357 posts following the intermediate question-answer pairs heuristic from our generic (Gen) corpus. We then supplement these with posts containing RQ cues from our cue-annotation experiments: posts that received 3 out of 5 sarcastic labels in the experiments were considered sarcastic, and posts that received 2 or fewer sarcastic labels were considered not-sarcastic. Our final rhetorical questions corpus consists of 851 posts per class (1,702 total posts). Table TABREF18 shows some examples of rhetorical questions and self-answering from our corpus. Hyperbole. Hyperbole (Hyp) has been studied as an independent form of figurative language, that can coincide with ironic intent BIBREF21 , BIBREF22 , and previous computational work on sarcasm typically includes features to capture hyperbole BIBREF23 . KreuzRoberts95 describe a standard frame for hyperbole in English where an adverb modifies an extreme, positive adjective, e.g. "That was absolutely amazing!" or "That was simply the most incredible dining experience in my entire life." ColstonObrien00b provide a theoretical framework that explains why hyperbole is so strongly associated with sarcasm. Hyperbole exaggerates the literal situation, introducing a discrepancy between the "truth" and what is said, as a matter of degree. A key observation is that this is a type of contrast BIBREF24 , BIBREF1 . In their framework: An event or situation evokes a scale; An event can be placed on that scale; The utterance about the event contrasts with actual scale placement. Fig. FIGREF22 illustrates that the scales that can be evoked range from negative to positive, undesirable to desirable, unexpected to expected and certain to uncertain. Hyperbole moves the strength of an assertion further up or down the scale from the literal meaning, the degree of movement corresponds to the degree of contrast. 
Depending on what they modify, adverbial intensifiers like totally, absolutely, incredibly shift the strength of the assertion to extreme negative or positive. Table TABREF23 shows examples of hyperbole from our corpus, showcasing the effect that intensifiers have in terms of strengthening the emotional evaluation of the response. To construct a balanced corpus of sarcastic and not-sarcastic utterances with hyperbole, we developed a number of patterns based on the literature and our observations of the generic corpus. The patterns, number matches on the whole corpus, the numbers we had annotated and the percent that are sarcastic after annotation are summarized in Table TABREF16 . Again, we extract a small subset of examples from our Gen corpus (30 per class), and supplement them with posts that contain our hyperbole cues (considering them sarcastic if they received at least 3/5 sarcastic labels, not-sarcastic otherwise). The final hyperbole dataset consists of 582 posts per class (1,164 posts in total). To recap, Table TABREF10 summarizes the total number of posts for each subset of our final corpus. Learning Experiments Our primary goal is not to optimize classification results, but to explore how results vary across different subcorpora and corpus properties. We also aim to demonstrate that the quality of our corpus makes it more straightforward to achieve high classification performance. We apply both supervised learning using SVM (from Scikit-Learn BIBREF25 ) and weakly-supervised linguistic pattern learning using AutoSlog-TS BIBREF13 . These reveal different aspects of the corpus. Supervised Learning. We restrict our supervised experiments to a default linear SVM learner with Stochastic Gradient Descent (SGD) training and L2 regularization, available in the SciKit-Learn toolkit BIBREF25 . We use 10-fold cross-validation, and only two types of features: n-grams and Word2Vec word embeddings. We expect Word2Vec to be able to capture semantic generalizations that n-grams do not BIBREF26 , BIBREF27 . The n-gram features include unigrams, bigrams, and trigrams, including sequences of punctuation (for example, ellipses or "!!!"), and emoticons. We use GoogleNews Word2Vec features BIBREF28 . Table TABREF25 summarizes the results of our supervised learning experiments on our datasets using 10-fold cross validation. The data is balanced evenly between the sarcastic and not-sarcastic classes, and the best F-Measures for each class are shown in bold. The default W2V model, (trained on Google News), gives the best overall F-measure of 0.74 on the Gen corpus for the sarcastic class, while n-grams give the best not-sarcastic F-measure of 0.73. Both of these results are higher F than previously reported for classifying sarcasm in dialogue, and we might expect that feature engineering could yield even greater performance. On the RQ corpus, n-grams provide the best F-measure for sarcastic at 0.70 and not-sarcastic at 0.71. Although W2V performs well, the n-gram model includes features involving repeated punctuation and emoticons, which the W2V model excludes. Punctuation and emoticons are often used as distinctive feature of sarcasm (i.e. "Oh, really?!?!", [emoticon-rolleyes]). For the Hyp corpus, the best F-measure for both the sarcastic and not-sarcastic classes again comes from n-grams, with F-measures of 0.65 and 0.68 respectively. It is interesting to note that the overall results of the Hyp data are lower than those for Gen and RQs, likely due to the smaller size of the Hyp dataset. 
To examine the effect of dataset size, we compare F-measure (using the same 10-fold cross-validation setup) for each dataset while holding the number of posts per class constant. Figure FIGREF26 shows the performance of each of the Gen, RQ, and Hyp datasets at intervals of 100 posts per class (up to the maximum size of 582 posts per class for Hyp, and 851 posts per class for RQ). From the graph, we can see that as a general trend, the datasets benefit from larger dataset sizes. Interestingly, the results for the RQ dataset are very comparable to those of Gen. The Gen dataset eventually gets the highest sarcastic F-measure (0.74) at its full dataset size of 3,260 posts per class. Weakly-Supervised Learning. AutoSlog-TS is a weakly supervised pattern learner that only requires training documents labeled broadly as sarcastic or not-sarcastic. AutoSlog-TS uses a set of syntactic templates to define different types of linguistic expressions. The left-hand side of Table TABREF28 lists each pattern template and the right-hand side illustrates a specific lexico-syntactic pattern (in bold) that represents an instantiation of each general pattern template for learning sarcastic patterns in our data. In addition to these 17 templates, we added patterns to AutoSlog for adjective-noun, adverb-adjective and adjective-adjective, because these patterns are frequent in hyperbolic sarcastic utterances. The examples in Table TABREF28 show that Colston's notion of contrast shows up in many learned patterns, and that the source of the contrast is highly variable. For example, Row 1 implies a contrast with a set of people who are not your mother. Row 5 contrasts what you were asked with what you've (just) done. Row 10 contrasts chapter 12 and chapter 13 BIBREF30. Row 11 contrasts what I am allowed vs. what you have to do. AutoSlog-TS computes statistics on the strength of association of each pattern with each class, i.e. P(sarcastic $\mid p$) and P(not-sarcastic $\mid p$) for a pattern $p$, along with the pattern's overall frequency. We define two tuning parameters for each class: $\theta_{freq}$, the frequency with which a pattern occurs, and $\theta_{prob}$, the probability with which a pattern is associated with the given class. We do a grid search, testing the performance of our pattern thresholds over $\theta_{freq}$ = {2-6} in intervals of 1 and $\theta_{prob}$ = {0.60-0.85} in intervals of 0.05. Once we extract the subset of patterns passing our thresholds, we search for these patterns in the posts in our development set, classifying a post as a given class if it contains at least $N$ = {1, 2, 3} of the thresholded patterns. For more detail, see BIBREF13, BIBREF31. An advantage of AutoSlog-TS is that it supports systematic exploration of recall and precision tradeoffs, by selecting pattern sets using different parameters. The parameters have to be tuned on a training set, so we divide each dataset into 80% training and 20% test. Figure FIGREF30 shows the precision (x-axis) vs. recall (y-axis) tradeoffs on the test set, when optimizing our three parameters for precision. Interestingly, the subcorpora for RQ and Hyp can get higher precision than is possible for Gen. When precision is fixed at 0.75, the recall for RQ is 0.07 and the recall for Hyp is 0.08. This recall is low, but given that each retrieved post provides multiple cues, and that datasets on the web are huge, these P values make it possible to bootstrap these two classes in the future. 
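AutoSlog-TS itself is an external pattern learner, so the sketch below only illustrates the thresholding and post-classification step just described; the data structures and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class PatternStats:
    pattern: str           # a learned lexico-syntactic pattern
    freq: int              # overall frequency of the pattern
    prob_sarcastic: float  # P(sarcastic | pattern), as reported by the learner

def select_patterns(stats, theta_freq, theta_prob):
    """Keep patterns whose frequency and class probability pass the thresholds."""
    return {s.pattern for s in stats
            if s.freq >= theta_freq and s.prob_sarcastic >= theta_prob}

def classify_post(post_patterns, selected, n_min=1):
    """Label a post sarcastic if it contains at least n_min retained patterns."""
    hits = sum(1 for p in post_patterns if p in selected)
    return "sarcastic" if hits >= n_min else "not-sarcastic"

# Grid-search ranges mirroring the ones reported in the text.
theta_freq_grid = [2, 3, 4, 5, 6]
theta_prob_grid = [0.60, 0.65, 0.70, 0.75, 0.80, 0.85]
n_min_grid = [1, 2, 3]
```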
Linguistic Analysis Here we aim to provide a linguistic characterization of the differences between the sarcastic and the not-sarcastic classes. We use the AutoSlog-TS pattern learner to generate patterns automatically, and the Stanford dependency parser to examine relationships between arguments BIBREF13 , BIBREF32 . Table TABREF31 shows the number of sarcastic patterns we extract with AutoSlog-TS, with a frequency of at least 2 and a probability of at least 0.75 for each corpus. We learn many novel lexico-syntactic cue patterns that are not the regex that we search for. We discuss specific novel learned patterns for each class below. Generic Sarcasm. We first examine the different patterns learned on the Gen dataset. Table TABREF29 show examples of extracted patterns for each class. We observe that the not-sarcastic patterns appear to capture technical and scientific language, while the sarcastic patterns tend to capture subjective language that is not topic-specific. We observe an abundance of adjective and adverb patterns for the sarcastic class, although we do not use adjective and adverb patterns in our regex retrieval method. Instead, such cues co-occur with the cues we search for, expanding our pattern inventory as we show in Table TABREF31 . Rhetorical Questions. We notice that while the not-sarcastic patterns generated for RQs are similar to the topic-specific not-sarcastic patterns we find in the general dataset, there are some interesting features of the sarcastic patterns that are more unique to the RQs. Many of our sarcastic questions focus specifically on attacks on the mental abilities of the addressee. This generalization is made clear when we extract and analyze the verb, subject, and object arguments using the Stanford dependency parser BIBREF32 for the questions in the RQ dataset. Table TABREF32 shows a few examples of the relations we extract. Hyperbole. One common pattern for hyperbole involves adverbs and adjectives, as noted above. We did not use this pattern to retrieve hyperbole, but because each hyperbolic sarcastic utterance contains multiple cues, we learn an expanded class of patterns for hyperbole. Table TABREF33 illustrates some of the new adverb adjective patterns that are frequent, high-precision indicators of sarcasm. We learn a number of verbal patterns that we had not previously associated with hyperbole, as shown in Table TABREF34 . Interestingly, many of these instantiate the observations of CanoMora2009 on hyperbole and its related semantic fields: creating contrast by exclusion, e.g. no limit and no way, or by expanding a predicated class, e.g. everyone knows. Many of them are also contrastive. Table TABREF33 shows just a few examples, such as though it in no way and so much knowledge. Conclusion and Future Work We have developed a large scale, highly diverse corpus of sarcasm using a combination of linguistic analysis and crowd-sourced annotation. We use filtering methods to skew the distribution of sarcasm in posts to be annotated to 20-31%, much higher than the estimated 12% distribution of sarcasm in online debate forums. We note that when using Mechanical Turk for sarcasm annotation, it is possible that the level of agreement signals how lexically-signaled the sarcasm is, so we settle on a conservative threshold (at least 6 out of 9 annotators agreeing that a post is sarcastic) to ensure the quality of our annotations. 
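The argument analysis above relies on the Stanford dependency parser; as an illustrative stand-in, the sketch below extracts (subject, verb, object) triples with spaCy instead, using spaCy's dependency labels.

```python
import spacy

# Stand-in for the Stanford-parser-based argument extraction: pull
# (subject, verb, object) triples from a question using spaCy's dependency
# parse. Requires the "en_core_web_sm" model to be installed.
nlp = spacy.load("en_core_web_sm")

def extract_svo(question: str):
    triples = []
    for token in nlp(question):
        if token.pos_ in ("VERB", "AUX"):
            subj = [c.text for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            obj = [c.text for c in token.children if c.dep_ in ("dobj", "attr", "dative")]
            if subj:
                triples.append((subj[0], token.lemma_, obj[0] if obj else None))
    return triples

print(extract_svo("Do you really believe everything you read?"))
```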
We operationalize lexico-syntactic cues prevalent in sarcasm, finding cues that are highly indicative of sarcasm, with ratios up to 87%. Our final corpus consists of data representing generic sarcasm, rhetorical questions, and hyperbole. We conduct supervised learning experiments to highlight the quality of our corpus, achieving a best F of 0.74 using very simple feature sets. We use weakly-supervised learning to show that we can also achieve high precision (albeit with a low recall) for our rhetorical questions and hyperbole datasets; much higher than the best precision that is possible for the Generic dataset. These high precision values may be used for bootstrapping these two classes in the future. We also present qualitative analysis of the different characteristics of rhetorical questions and hyperbole in sarcastic acts, and of the distinctions between sarcastic/not-sarcastic cues in generic sarcasm data. Our analysis shows that the forms of sarcasm and its underlying semantic contrast in dialogue are highly diverse. In future work, we will focus on feature engineering to improve results on the task of sarcasm classification for both our generic data and subclasses. We will also begin to explore evaluation on real-world data distributions, where the ratio of sarcastic/not-sarcastic posts is inherently unbalanced. As we continue our analysis of the generic and fine-grained categories of sarcasm, we aim to better characterize and model the great diversity of sarcasm in dialogue. Acknowledgments This work was funded by NSF CISE RI 1302668, under the Robust Intelligence Program.
adjective and adverb patterns, verb, subject, and object arguments, verbal patterns
52b113e66fd691ae18b9bb8a8d17e1ee7054bb81
52b113e66fd691ae18b9bb8a8d17e1ee7054bb81_0
Q: what is the source of the song lyrics? Text: Introduction Music is part of the day-to-day life of a huge number of people, and many works try to understand the best way to classify, recommend, and identify similarities between songs. Among the tasks that involve music classification, genre classification has been studied widely in recent years BIBREF0 since musical genres are the main top-level descriptors used by music dealers and librarians to organize their music collections BIBREF1. Automatic music genre classification based only on the lyrics is considered a challenging task in the field of Natural Language Processing (NLP). Music genres remain a poorly defined concept, and boundaries between genres still remain fuzzy, which makes the automatic classification problem a nontrivial task BIBREF1. Traditional approaches in text classification have applied algorithms such as Support Vector Machine (SVM) and Naïve Bayes, combined with handcraft features (POS and chunk tags) and word count-based representations, like bag-of-words. More recently, the usage of Deep Learning methods such as Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) has produced great results in text classification tasks. Some works like BIBREF2, BIBREF3 BIBREF4 focus on classification of mood or sentiment of music based on its lyrics or audio content. Other works, like BIBREF1, and BIBREF5, on the other hand, try to automatically classify the music genre; and the work BIBREF6 tries to classify, besides the music genre, the best and the worst songs, and determine the approximate publication time of a song. In this work, we collected a set of about 130 thousand Brazilian songs distributed in 14 genres. We use a Bidirectional Long Short-Term Memory (BLSTM) network to make a lyrics-based music genre classification. We did not apply an elaborate set of handcraft textual features, instead, we represent the lyrics songs with a pre-trained word embeddings model, obtaining an F1 average score of $0.48$. Our experiments and results show some real aspects that exist among the Brazilian music genres and also show the usefulness of the dataset we have built for future works. This paper is organized as follows. In the next section, we cite and comment on some related works. Section SECREF3 describes our experiments from data collection to the proposed model, presenting some important concepts. Our experimental results are presented in Section SECREF4, and Section SECREF5 presents our concluding remarks and future work. Related Works Several works have been carried out to add textual information to genre and mood classification. Fell and Sporleder BIBREF6 used several handcraft features, such as vocabulary, style, semantics, orientation towards the world, and song structure to obtain performance gains on three different classification tasks: detecting genre, distinguishing the best and the worst songs, and determining the approximate publication time of a song. The experiments in genre classification focused on eight genres: Blues, Rap, Metal, Folk, R&B, Reggae, Country, and Religious. Only lyrics in English were included and they used an SVM with the default settings for the classification. Ying et al. BIBREF0 used Part-of-Speech (POS) features extracted from lyrics and combined them with three different machine learning techniques – k-Nearest-Neighbor, Naïve Bayes, and Support Vector Machines – to classify a collection of 600 English songs by the genre and mood. 
Zaanen and Kanters BIBREF7 used the term frequency and inverse document frequency statistical metrics as features to solve music mood classification, obtaining an accuracy of more than 70%. In recent years, deep learning techniques have also been applied to music genre classification. This kind of approach typically does not rely on handcraft features or external data. In BIBREF5, the authors used a hierarchical attention network to perform the task in a large dataset of nearly half a million song lyrics, obtaining an accuracy of more than 45%. Some papers such as BIBREF8 used word embedding techniques to represent words from the lyrics and then classify them by the genre using a 3-layer Deep Learning model. Methods In this chapter we present all the major steps we have taken, from obtaining the dataset to the proposed approach to address the automatic music genre classification problem. Methods ::: Data Acquisition In order to obtain a large number of Brazilian music lyrics, we created a crawler to navigate into the Vagalume website, extracting, for each musical genre, all the songs by all the listed authors. The implementation of a crawler was necessary because, although the Vagalume site provides an API, it is only for consultation and does not allow obtaining large amounts of data. The crawler was implemented using Scrapy, an open-source and collaborative Python library to extract data from websites. From the Vagalume's music web page, we collect the song title and lyrics, and the artist name. The genre was collected from the page of styles, which lists all the musical genres and, for each one, all the artists. We selected only 14 genres that we consider as representative Brazilian music, shown in Table TABREF8. Figure FIGREF6 presents an example of the Vagalume's music Web page with the song “Como é grande o meu amor por você”, of the Brazilian singer Roberto Carlos. Green boxes indicate information about music that can be extracted directly from the web page. From this information, the language in which the lyrics are available can be obtained by looking at the icon indicating the flag of Brazil preceded by the “Original” word. After extracting data, we obtained a set of $138,368$ songs distributed across 14 genres. Table TABREF8 presents the number of songs and artists by genre. In order to use the data to learn how to automatically classify genre, we split the dataset into tree partitions: training ($96,857$ samples), validation ($27,673$ samples), and test ($13,838$ samples). The total dataset and splits are available for download. Methods ::: Word Embeddings Word embeddings is a technique to represent words as real vectors, so that these vectors maintain some semantic aspects of the real words. Basically, vectors are computed by calculating probabilities of the context of words, with the intuition that semantically similar words have similar contexts, and must therefore have similar vectors. Word2Vec, by Mikolov et al. BIBREF9, is one of the first and most widely used algorithms to make word embeddings. It has two architectures to compute word vectors: Continuous Bag-Of-Words (CBOW) and Skip-gram. CBOW gets a context as input and predicts the current word, while Skip-gram gets the current word as input and predicts its context. In this work, we use the Python Word2Vec implementation provided by the Gensim library. The Portuguese pre-trained word embeddings created by BIBREF10 and available for download was used to represent words as vectors. 
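A small sketch of loading the pre-trained Portuguese vectors with Gensim, as described above; the file name and format flag are placeholder assumptions that depend on which distribution of the embeddings is downloaded.

```python
from gensim.models import KeyedVectors

# Load a pre-trained Portuguese embedding model in word2vec text format.
# "skip_s300.txt" is a placeholder file name; adjust it to the downloaded file.
embeddings = KeyedVectors.load_word2vec_format("skip_s300.txt", binary=False)

print(embeddings["saudade"].shape)               # a 300-dimensional vector
print(embeddings.most_similar("saudade", topn=3))
```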
We only used models of dimension 300 and, for Word2Vec, Wang2Vec, and FastText, models with the skip-gram architecture. Methods ::: Bidirectional Long Short-Term Memory Long Short-Term Memory (LSTM), proposed by Hochreiter and Schmidhuber BIBREF11, is a variant of the Recurrent Neural Network (RNN). This kind of network is widely used for the classification of sequential data and is designed to capture time dynamics through graph cycles. Figure FIGREF14 presents an LSTM unit, which receives an input from the previous unit, processes it, and passes it to the next unit. The following equations are used to update the $C_t$ and $h_t$ values: $f_t = \sigma(W_f h_{t-1} + U_f x_t + b_f)$, $i_t = \sigma(W_i h_{t-1} + U_i x_t + b_i)$, $\tilde{C}_t = \tanh(W_C h_{t-1} + U_C x_t + b_C)$, $C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$, $o_t = \sigma(W_o h_{t-1} + U_o x_t + b_o)$, and $h_t = o_t \odot \tanh(C_t)$, where $W_f$, $W_i$, $W_C$, $W_o$ are the weight matrices for the $h_{t-1}$ input; $U_f$, $U_i$, $U_C$, $U_o$ are the weight matrices for the $x_t$ input; and $b_f$, $b_i$, $b_C$, $b_o$ are the bias vectors. Basically, a Bidirectional LSTM network consists of two LSTM networks: a forward LSTM and a backward LSTM. The intuition behind it is that, in some types of problems, past and future information captured by the forward and backward LSTM layers is useful for predicting the current data. Methods ::: Proposed Approach Our proposed approach consists of three main steps. Firstly, we concatenate the title of the song with its lyrics, put all words in lower case, and then clean up the text by removing line breaks, multiple spaces, and some punctuation (,!.?). Secondly, we represent the text as a vector provided by a pre-trained word embeddings model. For classical learning algorithms like SVM and Random Forest, we generate, for each song, a vectorial representation by calculating the average of the vectors of each word in the song lyrics, which can be expressed by the equation $\bar{v}(L) = \frac{1}{n} \sum_{w \in L} v(w)$, where $L$ is the song lyrics, $w$ is a word in $L$, $v(w)$ is its embedding vector, and $n$ is the number of words in $L$. If a word does not have a vector representation in the word embeddings model, it is not considered in the equation. For the BLSTM algorithm, the representation was made in the format of a matrix, as shown in Figure FIGREF16, where each line is a vector representation of a word in the lyrics. In the third step, we use the generated representation as features for the genre classification tasks using SVM, Random Forests, and BLSTM. Experimental Results In this section, we describe our experiments. We used the Linear SVM and Random Forest Scikit-learn implementations and Keras on top of TensorFlow for the BLSTM implementation. In this study, we did not focus on finding the best combination of parameters for the algorithms, so for SVM we used the default parameters, and for Random Forest we used 100 trees. Our BLSTM model was trained for 4 epochs, with the Adam optimizer and a hidden layer size of 256. As we can see in Table TABREF20, our BLSTM approach outperforms the other models with an F1-score average of $0.48$. In addition, we can note that the use of Wang2Vec pre-trained word embeddings made it possible to obtain better F1-score results in BLSTM, which is not necessarily noticed in other cases, since for SVM and Random Forest, Glove and FastText, respectively, were the techniques that obtained better F1-scores. Table TABREF21 shows the BLSTM classification results for each genre. We can see that the genres gospel, funk-carioca and sertanejo have a greater distinction in relation to the other genres, since they were better classified by the model. 
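The averaging step and the BLSTM setup described above can be sketched as follows; the maximum sequence length, the padding of shorter songs, and the softmax output layer are assumptions not spelled out in the text.

```python
import numpy as np
from tensorflow.keras.layers import Bidirectional, Dense, LSTM
from tensorflow.keras.models import Sequential

EMBED_DIM = 300   # dimension of the pre-trained embeddings
NUM_GENRES = 14
MAX_LEN = 300     # assumed maximum number of tokens per song

def average_vector(tokens, embeddings):
    """Mean of the word vectors of a song's lyrics (used for SVM / Random
    Forest); out-of-vocabulary words are skipped, as in the text."""
    vectors = [embeddings[w] for w in tokens if w in embeddings]
    return np.mean(vectors, axis=0) if vectors else np.zeros(EMBED_DIM)

# Minimal BLSTM matching the reported setup: hidden size 256, Adam optimizer,
# 4 training epochs. Input is the (MAX_LEN x EMBED_DIM) matrix of word vectors.
model = Sequential([
    Bidirectional(LSTM(256), input_shape=(MAX_LEN, EMBED_DIM)),
    Dense(NUM_GENRES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, epochs=4, validation_data=(X_val, y_val))
```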
In particular, funk-carioca obtained a good classification result although it did not have a large number of collected song lyrics. In gospel song lyrics, we can identify some typical words, such as “Deus” (God) , “Senhor” (Lord), and “Jesus” (Jesus); in funk-carioca, songs have the words “bonde” (tram), “chão” (floor) and “baile” (dance ball), all used as slang; in sertanejo, some of the most common words are “amor” (love), “coração” (heart) and “saudade” (longing). The occurrence of these typical words could contribute to the higher performance of F1-scores in these genres. The bossa-nova and jovem-guarda genres, which have few instances in the dataset, are among the most difficult ones to classify using the model. The pop genre, by contrast, has a small distribution between the number of songs and the number of artists, and could not be well classified by our model. This may indicate that our model was unable to identify a pattern due to the low number of songs per artist, or that the song lyrics of this genre cover several subjects that are confused with other genres. Figure FIGREF22 shows the confusion matrix of the results produced by our BLSTM model. We can notice that many instances of class forró are often confused with class sertanejo. Indeed, these two genres are very close. Both Forró and sertanejo have as theme the cultural and daily aspects of the Northeast region of Brazil. Instances of class infantil are often confused with class gospel: in infantil we have music for children for both entertainment and education. In some of the songs, songwriters try to address religious education, which could explain the confusion between those genres. The MPB (Brazilian Popular Music) genre was the most confused of all, which may indicate that song lyrics of this genre cover a wide range of subjects that intersect with other genres. Conclusion and Future Works In this work we constructed a dataset of $138,368$ Brazilian song lyrics distributed in 14 genres. We applied SVM, Random Forest, and a Bidirectional Long Short-Term Memory (BLSTM) network combined with different word embeddings techniques to address the automatic genre classification task based only on the song lyrics. We compared the results between the different combinations of classifiers and word embedding techniques, concluding that our BLSTM combined with the Wang2Vec pre-trained model obtained the best F1-score classification result. Beside the dataset construction and the comparison of tools, this work also evidences the lack of an absolute superiority between the different techniques of word embeddings, since their use and efficiency in this specific task showed to be very closely related to the classification technique. As future work, it is possible to explore the dataset to identify genre or artist similarities, generating visualizations that may or may not confirm aspects pre-conceived by the consumers of Brazilian music. It is also possible to perform classification tasks by artists of a specific genre.
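Per-genre scores such as those in Table TABREF21 and the confusion matrix in Figure FIGREF22 can be produced with scikit-learn from the true and predicted genre labels of the test split; the toy labels below are only for illustration.

```python
from sklearn.metrics import classification_report, confusion_matrix

y_true = ["forró", "sertanejo", "gospel", "mpb", "sertanejo"]      # toy labels
y_pred = ["sertanejo", "sertanejo", "gospel", "rock", "sertanejo"]

print(classification_report(y_true, y_pred, zero_division=0))
print(confusion_matrix(y_true, y_pred,
                       labels=["forró", "gospel", "mpb", "rock", "sertanejo"]))
```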
Vagalume website
163a21c0701d5cda15be2d0eb4981a686e54a842
163a21c0701d5cda15be2d0eb4981a686e54a842_0
Q: what genre was the most difficult to classify?
bossa-nova and jovem-guarda genres
36b5f0f62ee9be1ab50d1bb6170e98328d45997d
36b5f0f62ee9be1ab50d1bb6170e98328d45997d_0
Q: what word embedding techniques did they experiment with?
Word2Vec, Wang2Vec, and FastText
6b91fe29175be8cd8f22abf27fb3460e43b9889a
6b91fe29175be8cd8f22abf27fb3460e43b9889a_0
Q: what genres do they songs fall under? Text: Introduction Music is part of the day-to-day life of a huge number of people, and many works try to understand the best way to classify, recommend, and identify similarities between songs. Among the tasks that involve music classification, genre classification has been studied widely in recent years BIBREF0 since musical genres are the main top-level descriptors used by music dealers and librarians to organize their music collections BIBREF1. Automatic music genre classification based only on the lyrics is considered a challenging task in the field of Natural Language Processing (NLP). Music genres remain a poorly defined concept, and boundaries between genres still remain fuzzy, which makes the automatic classification problem a nontrivial task BIBREF1. Traditional approaches in text classification have applied algorithms such as Support Vector Machine (SVM) and Naïve Bayes, combined with handcraft features (POS and chunk tags) and word count-based representations, like bag-of-words. More recently, the usage of Deep Learning methods such as Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) has produced great results in text classification tasks. Some works like BIBREF2, BIBREF3 BIBREF4 focus on classification of mood or sentiment of music based on its lyrics or audio content. Other works, like BIBREF1, and BIBREF5, on the other hand, try to automatically classify the music genre; and the work BIBREF6 tries to classify, besides the music genre, the best and the worst songs, and determine the approximate publication time of a song. In this work, we collected a set of about 130 thousand Brazilian songs distributed in 14 genres. We use a Bidirectional Long Short-Term Memory (BLSTM) network to make a lyrics-based music genre classification. We did not apply an elaborate set of handcraft textual features, instead, we represent the lyrics songs with a pre-trained word embeddings model, obtaining an F1 average score of $0.48$. Our experiments and results show some real aspects that exist among the Brazilian music genres and also show the usefulness of the dataset we have built for future works. This paper is organized as follows. In the next section, we cite and comment on some related works. Section SECREF3 describes our experiments from data collection to the proposed model, presenting some important concepts. Our experimental results are presented in Section SECREF4, and Section SECREF5 presents our concluding remarks and future work. Related Works Several works have been carried out to add textual information to genre and mood classification. Fell and Sporleder BIBREF6 used several handcraft features, such as vocabulary, style, semantics, orientation towards the world, and song structure to obtain performance gains on three different classification tasks: detecting genre, distinguishing the best and the worst songs, and determining the approximate publication time of a song. The experiments in genre classification focused on eight genres: Blues, Rap, Metal, Folk, R&B, Reggae, Country, and Religious. Only lyrics in English were included and they used an SVM with the default settings for the classification. Ying et al. BIBREF0 used Part-of-Speech (POS) features extracted from lyrics and combined them with three different machine learning techniques – k-Nearest-Neighbor, Naïve Bayes, and Support Vector Machines – to classify a collection of 600 English songs by the genre and mood. 
Zaanen and Kanters BIBREF7 used the term frequency and inverse document frequency statistical metrics as features to solve music mood classification, obtaining an accuracy of more than 70%. In recent years, deep learning techniques have also been applied to music genre classification. This kind of approach typically does not rely on handcraft features or external data. In BIBREF5, the authors used a hierarchical attention network to perform the task in a large dataset of nearly half a million song lyrics, obtaining an accuracy of more than 45%. Some papers such as BIBREF8 used word embedding techniques to represent words from the lyrics and then classify them by the genre using a 3-layer Deep Learning model. Methods In this chapter we present all the major steps we have taken, from obtaining the dataset to the proposed approach to address the automatic music genre classification problem. Methods ::: Data Acquisition In order to obtain a large number of Brazilian music lyrics, we created a crawler to navigate into the Vagalume website, extracting, for each musical genre, all the songs by all the listed authors. The implementation of a crawler was necessary because, although the Vagalume site provides an API, it is only for consultation and does not allow obtaining large amounts of data. The crawler was implemented using Scrapy, an open-source and collaborative Python library to extract data from websites. From the Vagalume's music web page, we collect the song title and lyrics, and the artist name. The genre was collected from the page of styles, which lists all the musical genres and, for each one, all the artists. We selected only 14 genres that we consider as representative Brazilian music, shown in Table TABREF8. Figure FIGREF6 presents an example of the Vagalume's music Web page with the song “Como é grande o meu amor por você”, of the Brazilian singer Roberto Carlos. Green boxes indicate information about music that can be extracted directly from the web page. From this information, the language in which the lyrics are available can be obtained by looking at the icon indicating the flag of Brazil preceded by the “Original” word. After extracting data, we obtained a set of $138,368$ songs distributed across 14 genres. Table TABREF8 presents the number of songs and artists by genre. In order to use the data to learn how to automatically classify genre, we split the dataset into tree partitions: training ($96,857$ samples), validation ($27,673$ samples), and test ($13,838$ samples). The total dataset and splits are available for download. Methods ::: Word Embeddings Word embeddings is a technique to represent words as real vectors, so that these vectors maintain some semantic aspects of the real words. Basically, vectors are computed by calculating probabilities of the context of words, with the intuition that semantically similar words have similar contexts, and must therefore have similar vectors. Word2Vec, by Mikolov et al. BIBREF9, is one of the first and most widely used algorithms to make word embeddings. It has two architectures to compute word vectors: Continuous Bag-Of-Words (CBOW) and Skip-gram. CBOW gets a context as input and predicts the current word, while Skip-gram gets the current word as input and predicts its context. In this work, we use the Python Word2Vec implementation provided by the Gensim library. The Portuguese pre-trained word embeddings created by BIBREF10 and available for download was used to represent words as vectors. 
We only used models of dimension 300 and, for Word2Vec, Wang2Vec, and FastText, skip-gram architectured models. Methods ::: Bidirectional Long Short-Term Memory Long Short-Term Memory (LSTM) is a specification of Recurrent Neural Network (RNN) that was proposed by Hochreiter and Schmidhuber BIBREF11. This kind of network is widely used to solve classification of sequential data and is designed to capture time dynamics through graph cycles. Figure FIGREF14 presents an LSTM unity, which receives an input from the previous unit, processes it, and passes it to the next unit. The following equations are used to update $C_t$ and $h_t$ values. where $W_f$, $W_i$, $W_C$, $W_o$ are the weight matrices for $h_{t-1}$ input; $U_f$, $U_i$, $U_C$, $U_o$ are the weight matrices for $x_t$ input; and $b_f$, $b_i$, $b_C$, $b_o$ are the bias vectors. Basically, a Bidirectional LSTM network consists of using two LSTM networks: a forward LSTM and a backward LSTM. The intuition behind it is that, in some types of problems, past and future information captured by forward and backward LSTM layers are useful to predict the current data. Methods ::: Proposed Approach Our proposed approach consists of three main steps. Firstly, we concatenate the title of the song with its lyrics, put all words in lower case and then we clean up the text by removing line breaks, multiple spaces, and some punctuation (,!.?). Secondly, we represent the text as a vector provided by a pre-trained word embeddings model. For classical learning algorithms like SVM and Random Forest, we generate, for each song, a vectorial representation by calculating the average of the vectors of each word in the song lyrics that can be can be expressed by the equation below: where $L$ is the song lyrics, $w$ is a word in $L$, and $n$ is the number of words in $L$. If a word does not have a vector representation in the word embeddings model, it is not considered in the equation. For the BLSTM algorithm, the representation was made in the format of a matrix, as shown in Figure FIGREF16, where each line is a vector representation of a word in the lyrics. In the third step, we use as features the generated representation for the genre classification tasks using SVM, Random Forests, and BLSTM. Experimental Results In this section, we describe our experiments. We used the Linear SVM and Random Forest Scikit-learn implementations and Keras on top of TensorFlow for the BLSTM implementation. In this study, we did not focus on finding the best combination of parameters for the algorithms, so that for SVM we used the default parameters, and for Random Forest we used a number of 100 trees. Our BLSTM model was trained using 4 epochs, with Adam optimizer, and 256 as the size of the hidden layer. As we can see in Table TABREF20, our BLSTM approach outperforms the other models with an F1-score average of $0.48$. In addition, we can note that the use of Wang2Vec pre-trained word embeddings made it possible to obtain better F1-score results in BLSTM, which is not necessarily noticed in other cases, since for SVM and Random Forest, Glove and FastText, respectively, were the techniques that obtained better F1-scores. Table TABREF21 shows the BLSTM classification results for each genre. We can see that the genres gospel, funk-carioca and sertanejo have a greater distinction in relation to the other genres, since they were better classified by the model. 
In particular, funk-carioca obtained a good classification result although it did not have a large number of collected song lyrics. In gospel song lyrics, we can identify some typical words, such as “Deus” (God) , “Senhor” (Lord), and “Jesus” (Jesus); in funk-carioca, songs have the words “bonde” (tram), “chão” (floor) and “baile” (dance ball), all used as slang; in sertanejo, some of the most common words are “amor” (love), “coração” (heart) and “saudade” (longing). The occurrence of these typical words could contribute to the higher performance of F1-scores in these genres. The bossa-nova and jovem-guarda genres, which have few instances in the dataset, are among the most difficult ones to classify using the model. The pop genre, by contrast, has a small distribution between the number of songs and the number of artists, and could not be well classified by our model. This may indicate that our model was unable to identify a pattern due to the low number of songs per artist, or that the song lyrics of this genre cover several subjects that are confused with other genres. Figure FIGREF22 shows the confusion matrix of the results produced by our BLSTM model. We can notice that many instances of class forró are often confused with class sertanejo. Indeed, these two genres are very close. Both Forró and sertanejo have as theme the cultural and daily aspects of the Northeast region of Brazil. Instances of class infantil are often confused with class gospel: in infantil we have music for children for both entertainment and education. In some of the songs, songwriters try to address religious education, which could explain the confusion between those genres. The MPB (Brazilian Popular Music) genre was the most confused of all, which may indicate that song lyrics of this genre cover a wide range of subjects that intersect with other genres. Conclusion and Future Works In this work we constructed a dataset of $138,368$ Brazilian song lyrics distributed in 14 genres. We applied SVM, Random Forest, and a Bidirectional Long Short-Term Memory (BLSTM) network combined with different word embeddings techniques to address the automatic genre classification task based only on the song lyrics. We compared the results between the different combinations of classifiers and word embedding techniques, concluding that our BLSTM combined with the Wang2Vec pre-trained model obtained the best F1-score classification result. Beside the dataset construction and the comparison of tools, this work also evidences the lack of an absolute superiority between the different techniques of word embeddings, since their use and efficiency in this specific task showed to be very closely related to the classification technique. As future work, it is possible to explore the dataset to identify genre or artist similarities, generating visualizations that may or may not confirm aspects pre-conceived by the consumers of Brazilian music. It is also possible to perform classification tasks by artists of a specific genre.
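For concreteness, the classical-baseline pipeline described in the Methods and Experimental Results sections above (average of word vectors per lyric, fed to a Linear SVM with default parameters) can be sketched as follows; the toy vocabulary and lyrics are assumptions used only to keep the example self-contained.

```python
import numpy as np
from sklearn.svm import LinearSVC

def lyric_vector(lyric, vectors, dim=300):
    """Average the embeddings of the words of a lyric; words without a
    vector in the embedding model are simply skipped."""
    vecs = [vectors[w] for w in lyric.lower().split() if w in vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# Toy stand-ins for the real embedding model and the crawled training split.
rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=300) for w in ["amor", "coração", "deus", "baile"]}
train_lyrics = ["amor e coração", "deus é amor", "baile no chão"]
train_genres = ["sertanejo", "gospel", "funk-carioca"]

X_train = np.stack([lyric_vector(l, vectors) for l in train_lyrics])
clf = LinearSVC()   # default parameters, as in the experiments
clf.fit(X_train, train_genres)
print(clf.predict(X_train))
```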
Gospel, Sertanejo, MPB, Forró, Pagode, Rock, Samba, Pop, Axé, Funk-carioca, Infantil, Velha-guarda, Bossa-nova and Jovem-guarda
aa7decee4e3006c2c99b1f331a5b32d44a565ef6
aa7decee4e3006c2c99b1f331a5b32d44a565ef6_0
Q: Is the filter based feature selection (FSE) a form of regularization? Text: Abstract Text document classification is an important task for diverse natural language processing based applications. Traditional machine learning approaches mainly focused on reducing dimensionality of textual data to perform classification. This although improved the overall classification accuracy, the classifiers still faced sparsity problem due to lack of better data representation techniques. Deep learning based text document classification, on the other hand, benefitted greatly from the invention of word embeddings that have solved the sparsity problem and researchers’ focus mainly remained on the development of deep architectures. Deeper architectures, however, learn some redundant features that limit the performance of deep learning based solutions. In this paper, we propose a two stage text document classification methodology which combines traditional feature engineering with automatic feature engineering (using deep learning). The proposed methodology comprises a filter based feature selection (FSE) algorithm followed by a deep convolutional neural network. This methodology is evaluated on the two most commonly used public datasets, i.e., 20 Newsgroups data and BBC news data. Evaluation results reveal that the proposed methodology outperforms the state-of-the-art of both the (traditional) machine learning and deep learning based text document classification methodologies with a significant margin of 7.7% on 20 Newsgroups and 6.6% on BBC news datasets. Text Document Classification, Filter based feature selection, 20 News Group, BBC News, Multi-channel CNN Introduction Text classification is extensively being used in several applications such as information filtering, recommendation systems, sentiment analysis, opinion mining, and web searching BIBREF0. Broadly, text classification methodologies are divided into two classes statistical, and rule-based BIBREF1. Statistical approaches utilize arithmetical knowledge, whereas rule-based approaches require extensive domain knowledge to develop rules on the basis of which samples could be classified into a predefined set of categories. Rule-based approaches are not extensively being used because it is a difficult job to develop robust rules which do not need to update periodically. Previously, researchers performed automatic document text classification by using machine learning classifiers such as Naive Bayes BIBREF2, SVM, NN, Decision Trees BIBREF3, BIBREF4. In recent years, a number of feature selection algorithms have been proposed which significantly improve the performance of text classification BIBREF5, BIBREF6, BIBREF7, BIBREF8. Although feature selection techniques reduce the dimensionality of data to a certain level, however still traditional machine learning based text classification methodologies face the problem of feature representation as trivial feature representation algorithms use a bag of words model which consider unigrams, n-grams or specific patterns as features BIBREF9. Thus, these algorithms do not capture the complete contextual information of data and face the problem of data sparsity. The problem of data sparsity is solved by word embeddings which do not only capture syntactic but semantic information of textual data as well BIBREF10. 
Deep learning based text classification methodologies are not only successfully capturing the contextual information of data, but also resolving the data sparsity problems, thus, they are outperforming state-of-the-art machine learning based classification approaches BIBREF11, BIBREF12. Primarily in computer vision and NLP, researchers have been trying to develop deeper neural network architectures which could extract a better set of features for classification BIBREF13, BIBREF14. However, deeper architectures are not only computationally more expensive but complicated relationships learned by deeper architectures will actually be the outcome of sampling noise in case of small scale datasets. Recent researches showed that deeper architectures extract redundant features which eventually reduce the classification performance BIBREF15, BIBREF16, BIBREF17. This paper proposes a two stage text classification(TSCNN) methodology, which is a hybrid approach. The first stage relies on the feature selection algorithm where the aim is to rank and remove all irrelevant and redundant features. While the second stage is based on deep learning, where from first stage discriminative features are fed to multi-channel CNN model. In this novel setting, the proposed approach reap the benefits of both traditional feature engineering and automated feature engineering (using deep learning). Extensive evaluation of two commonly used publicly available datasets reveals that the proposed approach outperforms state-of-the-art methods with a significant margin. Related Work This section provides a birds-eye view on state-of-the-art filter based feature selection algorithms used in statistical based text document classification approaches. Moreover, recent deep learning based text classification methodologies are also briefly described. Feature selection is considered an indispensable task in text classification as it removes redundant and irrelevant features of the corpus BIBREF18. Broadly, feature selection approaches can be divided into three classes namely wrapper, embedded, and filter BIBREF7, BIBREF8. In recent years, researchers have proposed various filter based feature selection methods to raise the performance of document text classification BIBREF19. Document frequency BIBREF20 is the simplest metric used to rank the features in training data by utilizing the presence of a certain feature in positive and negative class documents respectively. Another simplest feature selection algorithm namely Accuracy ($ACC$) is the difference between true positive and false positive of a feature BIBREF21. $ACC$ is biased towards true positive ($t_p$) because it assigns a higher score to those features which are more frequent in positive class. In order to tackle the biaseness, an advanced version of $ACC$ namely Balanced Accuracy Measure (ACC2) was introduced which is based on true positive rate ($t_{pr}$) and false positive rate ($f_{pr}$). Although $ACC2$ resolves the issue of class unbalance through normalizing true and false positives with the respective size of the classes, however, $ACC2$ assigns the same rank to those features which reveal same difference value ($|t_{pr}-fpr|$) despite having different $t_{pr}$ or $f_{pr}$. Furthermore, Information Gain ($IG$) is another commonly used feature selection algorithm in text classification BIBREF22. It determines whether the information required to predict the target class of a document is raised or declined through the addition or elimination of a feature. 
Likewise, Chi-squared (CHISQ) considers the existence or non-existence of a feature to be independent of class labels. CHISQ does not reveal promising performance when the dataset is enriched with infrequent features, however, its results can be raised through pruning BIBREF21, BIBREF23. Odds Ratio (OR) BIBREF24 is the likelihood ratio among the occurrence of a feature and the absence of a feature in the certain document. It gives the highest rank to the rare features. Thus, it performs well with fewer features, however, its performance starts getting deteriorated with the increasing number of features. Similarly,Distinguish Feature Selector (DFS) BIBREF25 considers those features to be more significant which occur more frequently in one class and less frequent in other classes. Furthermore, Gini Index is used to estimate the distribution of a feature over given classes. Although it was originally used to estimate the GDP_per_Capita, however, in text classification, it is used to rank the features BIBREF26. It is considered that deep learning models automate the process of feature engineering, contrarily, recent research in computer vision reveals that deep learning models extract some irrelevant and redundant features BIBREF17. In order to raise the performance of text document classification, several researchers have utilized diverse deep learning based methodologies. For instance, Lai et al. BIBREF27 proposed a Bi-directional recurrent structure in a convolutional neural network for text classification. This recurrent structure captures the contextual information while learning word representations and produced less noise as compared to the trivial window based convolutional network. Moreover, a max pooling layer was used in order to select highly significant words. Through combining recurrent structure and max-pooling layer, they utilized the benefits of both convolutional and recurrent neural networks. The approach was evaluated on sentiment analysis, topic classification, and writing style classification. Aziguli et al. BIBREF1 utilized hybrid deep learning methods and proposed denoising deep neural network (DDNN) based on restricted Boltzmann machine (RBM), and denoising autoencoder (DAE). DDNN alleviated noise and raised the performance of feature extraction. Likewise, in order to resolve the problem of computing high dimensional sparse matrix for the task of text classification, Jiang et al. BIBREF11 proposed hybrid text classification model which was utilizing deep belief network (DBN) for feature extraction, and softmax regression to classify given text. They claimed that the proposed hybrid methodology performed better than trivial classification methods on two benchmark datasets. Moreover, Huang et al. BIBREF28 utilized deep belief networks in order to acquire emotional features from speech signals. Extracted features were fed to non-linear support vector machine (SVM) classifier and in this way a hybrid system was established for the task of identifying emotions from speech. Zhou et al. BIBREF29 presented an algorithm namely active hybrid deep belief network (semi-supervised) for the task of sentiment classification. In their two fold network, first, they extracted features using restricted Boltzmann machines and then preceding hidden layers learned the comments using convolutional RBM (CRBM). Kahou et al. BIBREF30 revealed that dropout performance could be further enhanced by using Relu unites rather than max-out units. 
Srivastava et al BIBREF31 revealed that dropout technique raises the performance of all neural networks on several supervised tasks like document classification, speech recognition, and computational biology. Liu et al. BIBREF32 presented an attentional framework based on deep linguistics. This framework incorporated concept information of corpus words into neural network based classification models. MetaMap and WordNet were used to annotate biomedical and general text respectively. Shih et al. BIBREF31 purposed the novel use of Siamese long short-term memory (LSTM) based deep learning method to better learn document representation for the task of text classification. Methodology This section briefly describes the proposed methodology of two stage text classification shown in Figure FIGREF1. First stage, is dedicated for feature selection where irrelevant and redundant features are removed using Normalized Difference Measure ($NDM$). While second stage use multi-channel CNN model for the classification of textual documents into predefined categories based on discriminative patterns extracted by convolution layers. Methodology ::: Discriminative Feature Selection To develop the vocabulary of most discriminative features, we remove all punctuation symbols and non-significant words (stop words) as a part of the preprocessing step. Furthermore, in order to rank the terms based on their discriminative power among the classes, we use filter based feature selection method named as Normalized Difference Measure (NDM)BIBREF5. Considering the features contour plot, Rehman et al. BIBREF5 suggested that all those features which exist in top left, and bottom right corners of the contour are extremely significant as compared to those features which exist around diagonals. State-of-the-art filter based feature selection algorithms such as ACC2 treat all those features in the same fashion which exist around the diagonals BIBREF5. For instance, ACC2 assigns same rank to those features which has equal difference ($|t_{pr} -f_{pr}|$) value but different $t_{pr}$ and $f_{pr}$ values. Whereas NDM normalizes the difference ($|t_{pr} -f_{pr}|$) with the minimum of $t_{pr}$ and $f_{pr}$ (min($t_{pr}$, $f_{pr}$)) and assign different rank to those terms which have same difference value. Normalized Difference Measure (NDM) considers those features highly significant which have the following properties: High $|t_{pr} - f_{pr}|$ value. $t_{pr}$ or $f_{pr}$ must be close to zero. If two features got the same difference $|t_{pr} - t_{pr}|$ value, then a greater rank shall be assigned to that feature which reveal least min($t_{pr}$, $f_{pr}$) value. Mathematically NDM is represented as follows: where $t_{pr}$ refers to true positive rate and $f_{pr}$ refers to false positive rate. True positive rate is the ratio between the number of positive class documents having term t and the size of positive class. False positive rate is the ratio between the number of negative class documents having term t and the size of negative class. Methodology ::: Multi-Channel CNN Model In second stage, a convolutional neural network (CNN) based on three channel is used. Each channel has two wide convolutional layers with 16 filters of size 5 and 3 respectively. We use multi-Channel CNN model to extract a variety of features at each channel by feeding different representation of features at the embedding layer. The first channel contains features obtained from FastText embedding provided by Mikolov et al. BIBREF33. 
These pre-trained word vectors were developed after training the skip-gram model on Wikipedia 2017 documents, UMBC web base corpus, and statmt.org news dataset using Fasttext API. There are total one million words provided with pre-trained word vectors of dimension 300, whereas the other two channels are exploiting randomly initialized embedding layers. Finally, the features of all three channels are concatenated to form a single vector. All wide convolution layers are using $Tanh$ as activation function and allow every feature to equally take part while convolving. Each convolution layer is followed by a global max pooling layer which extracts the most discriminative feature from yielded feature maps. After global max pooling, all discriminative features are concatenated and normalized using L2 normalization technique. These normalized features are then passed to a fully connected layer which has 128 output units and using relu as the activation function. Finally, last fully connected layer use softmax as activation function and acts as a classifier. Experimental Setup This section describes the experimental setup used to evaluate the integrity of proposed text classification methodology on two benchmark datasets namely BBC News and 20 NewsGroup. In our experimentation, CNN is trained on two different versions of each dataset. In the first version named as Standard CNN (SCNN), the entire vocabulary of each dataset obtained after preprocessing is fed to the model. Whereas, in the second version named as Two Stage CNN (TSCNN), after preprocessing, the vocabulary of each class is ranked using filter based feature selection algorithm namely NDM and then only top k ranked features of each class are selected to feed the embedding layer of underlay model. Top 1000 features of BBC, and 10,000 features of 20 Newsgroup dataset are selected and only these selected features are fed to the embedding layer of each channel. Furthermore, as 20 Newsgroup dataset has more unique features as compared to BBC dataset, so in final vocabulary, for 2o Newsgroup dataset we select more features as compared to BBC news dataset. Keeping only top 1000, and 10,000 features, two vocabularies of size 4208 and 41701 are constructed for respective datasets. Since the features are ranked on class level, therefore, many features overlap in different classes. For experiments, we use 20 newsgroup dataset which has a standard split of 70% training samples, and 30% test samples. We use 10% of training samples for validation. Moreover, BBC news dataset has no standard split, therefore, we consider 60% of data for training, 10% for validation, and 30% for testing. Table TABREF9 summarizes the statistics of two datasets (20NewsGroup, BBC News) used in our experimentation. RMSprop is used as an optimizer with learning rate of 0.001 and categorical cross-entropy is used as a loss function. Batch size of 50 is used and we train the model for 20 epochs. Results This section provides detailed insight and analysis of several experiments performed to uncover pros and cons of the proposed approach in comparison standard CNN model (SCNN). To evaluate the effect of irrelevant and redundant features on the performance of convolutional neural network, we have also shown confusion matrices to reveal the performance of two stage classification and standard CNN methodologies. 
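For reference, the three-channel architecture and training configuration described in the Methodology and Experimental Setup sections can be sketched in Keras as follows. Layer sizes, activations, optimizer, and loss follow the text; the sequence length and the exact conv/pooling wiring inside each channel are assumptions, and in this self-contained sketch all three embedding layers are randomly initialized (in the real setup the first channel would be initialized from the FastText matrix).

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_LEN, VOCAB, EMB_DIM, N_CLASSES = 400, 41701, 300, 20   # MAX_LEN is an assumption

def channel(inputs):
    """One channel: embedding -> two wide (padding='same') convolutions with
    16 filters of sizes 5 and 3 and tanh activation, each followed by global
    max pooling; this parallel wiring is inferred from the description."""
    emb = layers.Embedding(VOCAB, EMB_DIM)(inputs)   # channel 1 would load FastText weights
    pooled = [layers.GlobalMaxPooling1D()(
                  layers.Conv1D(16, k, padding="same", activation="tanh")(emb))
              for k in (5, 3)]
    return layers.concatenate(pooled)

inp = layers.Input(shape=(MAX_LEN,), dtype="int32")
feats = layers.concatenate([channel(inp), channel(inp), channel(inp)])
feats = layers.Lambda(lambda t: tf.math.l2_normalize(t, axis=1))(feats)
hidden = layers.Dense(128, activation="relu")(feats)
out = layers.Dense(N_CLASSES, activation="softmax")(hidden)

model = Model(inp, out)
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(X_train, y_train, batch_size=50, epochs=20, ...) as in the experimental setup
```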
Moreover, we also compare the performance of proposed two stage classification methodology with the state-of-the-art machine and deep learning based text classification methodologies. Figure FIGREF10 shows the accuracy of proposed two stage classification, and standard CNN classification methodologies on the validation set of 20 newsgroup, and BBC news datasets respectively. For 20 newsgroup dataset, the accuracy of standard CNN classification methodology begins at low of 73% as compared to the accuracy of two stage (TSCNN) classification methodology which reveals a promising figure of 90%. This performance gap occurs due to lack of discriminative features in standard CNN. TSCNN is fed with highly discriminative features, whereas the standard CNN model extracts significant features from given vocabulary on its own. This is why standard CNN performance gets improve until 4 epochs as compared to TSCNN whose performance increases slightly. However, the standard CNN model still does not manage to surpass the promising performance of TSCNN at any epoch. Likewise, for BBC news dataset, both models depict similar performance trend as discussed for 20 newsgroup dataset. Figure FIGREF11 compares the loss values produce by TSCNN and SCNN at different epochs of two datasets. For 20 newsgroup dataset, at first epoch, there is a difference of 0.55 between the loss values of TSCNN and SCNN, due to the fact that the vocabulary of unique words fed to TSCNN is noise free and it has to learn more discriminative features from a vocabulary of irrelevant and redundant features. On the other hand, complete vocabulary was fed to SCNN which contains both relevant and irrelevant features. The assumption was that SCNN will automatically select relevant features and discard which are unimportant. Furthermore, SCNN was unable to remove noise effectively since there is a gap of almost 0.2 between the losses of SCNN and TSCNN after 8th epochs. Similarly, for BBC news dataset, both models have revealed a similar trend as discussed for 20 newsgroup dataset. To evaluate the effect of noise on the performance of TSCNN and SCNN we have shown confusion matrices for both datasets. Figure FIGREF12 illustrates that in the case of standard CNN classes like talk.politics.misc and talk.religion.misc are slightly confused with classes talk.politics.guns and alt.atheism respectively. However classes like sci.electronics and comp.graphics are confused with many other classes. On the other hand, confusion matrix of TSCNN confirms that the confusion between the classes is resolved by using two stage classification methodology which develops a vocabulary of discriminative words. It can also be confirmed by observing the accuracy of the classes such as talk.religion.misc, sci.electronics and comp.os.ms-windows.misc were increased from 61%, 69% and 74% to 83%, 91% and 92% respectively. Similarly, the confusion matrices for BBC News Datasets are also shown in figure FIGREF13, which also demonstrate the same phenomenon mentioned before. As it can be clearly seen that business and entertainment classes are confused with other classes when classified using standard CNN. Whereas, two stage classification discard the inter-class dependencies almost completely as the accuracy of business and entertainment classes increases from 92% to 99% and 100% respectively. 
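As a concrete reference for the first-stage ranking, the NDM score described in the Methodology section (the difference $|t_{pr} - f_{pr}|$ normalized by the smaller of the two rates) can be sketched as follows; the small epsilon guarding against division by zero is an assumption.

```python
import numpy as np

def ndm_scores(term_doc, labels, positive, eps=1e-6):
    """Normalized Difference Measure for each term of a binary
    document-term matrix, ranking terms of the `positive` class."""
    pos = labels == positive
    tpr = term_doc[pos].mean(axis=0)    # fraction of positive-class docs containing the term
    fpr = term_doc[~pos].mean(axis=0)   # fraction of negative-class docs containing the term
    return np.abs(tpr - fpr) / (np.minimum(tpr, fpr) + eps)

# Toy example: 4 documents, 3 terms, two classes.
X = np.array([[1, 0, 1],
              [1, 0, 0],
              [0, 1, 1],
              [0, 1, 1]])
y = np.array(["pos", "pos", "neg", "neg"])
print(ndm_scores(X, y, positive="pos"))   # higher score = more discriminative term
```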
Results ::: Comparison with state-of-the-art This section provides insight into the presented hybrid approach in comparison to the state-of-the-art machine and deep learning based methodologies. Table TABREF15 illustrates the results of the proposed methodology, and 12 well known methods from literature including state-of-the-art results produced by machine and deep learning methodologies on 20 newsgroup and BBC news datasets. In order to improve the performance of machine learning based text classification, Rehman et al. BIBREF5 proposed a filter based feature selection algorithm namely Normalized Difference Measure (NDM). They compared its performance with seven state-of-the-art feature selection algorithms (ODDS, CHI, IG, DFS, GINI, ACC2, POISON) using SVM, and Naive Bayes classifiers. Their experimentation proved that the removal of irrelevant and redundant features improves the performance of text classification. They reported the highest macro $F_1$ score of 75% on 20 newsgroup dataset. Lately, Rehman et al. BIBREF19 proposed a new version of NDM and named it as MMR. MMR outperformed NDM with the figure of 9%. Moreover, Shirsat et al. BIBREF34 performed sentiment identification on sentence level using positive, and negative words list provided by Bing Liu dictionary. Their proposed methodology marked the performance of 96% with SVM classifier on BBC news dataset. Recently, Pradhan et al. BIBREF36 compared the performance of several classification algorithms (SVM, Naive Bayes, Decision Tree, KNN, Rocchio) on number of news datasets. They extrapolated that SVM outperformed other four classifiers on all datasets. SVM produced the performance figure of 86%, and 97% on 20 newsgroup and BBC news datasets. Elghannam BIBREF37 used the bi-gram frequency for the representation of the document in a typical machine learning based methodology. The proposed approach did not require any NLP tools and alleviated data sparsity up to great extent. They reported the $f_1$ score of 92% on BBC news dataset. Wang et al. BIBREF38 presented transfer learning method in order to perform text classification for cross-domain text. They performed experimentation on six classes of 20 newsgroup dataset and managed to produce the performance of 95%. On the other hand, researchers have utilized deep learning based diverse methodologies to raise the performance of text classification. For instance, the convolutional neural network based on Bi-directional recurrent structure BIBREF27 successfully extracted the semantics of underlay data. It produced the performance of 96.49% on four classes (politics,comp,religion,rec) of 20 newsgroup dataset. Likewise, Aziguli et al. BIBREF1 proposed denoising deep neural networks exploited restricted boltzmann machine and denoising autoencoder to produce the performance of 75%, and 97% on 20 newsgroup, and BBC datasets respectively. Whereas, deep belief network and softmax regression were combinely used BIBREF11 to select discriminative features for text classification. Combination of both managed to mark the accuracy of 85% on 20 newsgroup dataset. Moreover, a deep linguistics based framework BIBREF32 utilized WordNet, and MetaMap to extend concept information of underlay text. This approach produced the accuracy of just 69% on 20 newsgroup dataset. Similarly, in order to improve the learning of document representation, siamese long short-term memory (LSTM) based deep learning methodology was proposed BIBREF12 which revealed the performance of 86% on 20 newsgroup dataset. 
Camacho-Collados and PilehvarBIBREF35 revealed effective preprocessing practices in order to train word embeddings for the task of topic categorization. Their experimentation utilized two versions of CNN namely standard CNN with ReLU, and standard CNN with the addition of recurrent layer (LSTM) to produce the accuracy of 97% on BBC, and 90% on 20 newsgroup datasets using 6 classes only. The proposed two stage classification methodology has outperformed state-of-the-art machine and deep learning based methodologies. Moreover, in order to reveal the impact of feeding discriminative feature based vocabulary, we compare the proposed two stage classification methodology with the standard convolutional neural network. Table TABREF15 clearly depicts that simple CNN produces the $F_1$ score of 94%, and 82% on BBC, and 20 newsgroup datasets respectively using SCNN. Whereas, TSCNN reveals the f1-score of 99% and 91% on BBC, and 20 newsgroup datasets using the set of features ranked by NDM. Conclusion This paper proposes a two stage classification methodology for text classification. Firstly, we employ filter based feature selection algorithm(NDM) to develop noiseless vocabulary. Secondly, this vocabulary is fed to a multi-channel convolutional neural network, where each channel has two filters of size 5, and 3 respectively and 2 dense layers. Trivial convolutional layers, do not convolve all the features equally, this is why wide convolutional layers are used. Experimental results reveal that instead of feeding the whole vocabulary to CNN model, vocabulary of most discriminative features produces better performance. In future, we will assess the performance of the proposed two stage classification methodology using RNN, and Hybrid deep learning methodologies. Moreover, other renowned feature selection algorithms will be applied in first stage of proposed methodology.
No
4b8a0e99bf3f2f6c80c57c0e474c47a5ee842b2c
4b8a0e99bf3f2f6c80c57c0e474c47a5ee842b2c_0
Q: To what other competitive baselines is this approach compared? Text: Introduction Many modern dialogue generation models use a sequence-to-sequence architecture as their backbone BIBREF0, following its success when applied to Machine Translation (MT) BIBREF1. However, dialogue tasks also have a requirement different from that of MT: the response not only has to be "correct" (coherent and relevant), but also needs to be diverse and informative. However, seq2seq has been reported by many previous works to have low corpus-level diversity BIBREF2, BIBREF3, BIBREF0, BIBREF4, as it tends to generate safe, terse, and uninformative responses, such as "I don't know.". These responses unnecessarily make a dialogue system much less interactive than it should be. To increase the diversity of dialogue responses, the first step is to faithfully evaluate how diverse a response is. There are metrics used by previous work that are correlated to diversity, but not strongly, such as ratio of distinct tokens BIBREF2 and response length BIBREF5. However, a response can be long but extremely boring in meaning, such as "I am sure that I don't know about it.", or short but interesting (i.e., contains a lot of information), such as "Dad was mean.". Only investigating discrete token output by the model is also not ideal, because these tokens are only a single realization of the model's output probability distribution at each time step, which unavoidably loses valuable information indicated by the whole distribution. BIBREF6 (BIBREF6) manually collect a shortlist of dull responses, and during training discourage the model from producing such utterances. However, an important drawback of hand-crafted rules is that the set of dull tokens or utterances is static, while in fact it usually evolves during training: when the current dull tokens are eliminated, another set of them might reveal themselves. In our work, we begin with a simple yet effective approach to measure how diverse a response is. This metric, which we name "Average Output Probability Distribution", or AvgOut, draws information directly from the training-in-session model itself. We calculate it by keeping track of the exponential average of all output probability distributions on the decoder side during training. This metric dynamically measures which tokens the model is biased toward without any hand-crafted rules, thus making it a faithful evaluation of the model diversity (i.e., for diverse models, the token probabilities should be more evenly distributed rather than peaked at a few dull tokens). In addition, since AvgOut is a one-dimensional categorical distribution rather than a dimensionless numerical value like entropy, it naturally carries and conveys more information about model diversity. We then propose three models that leverage our novel metric to promote diversity in dialogue generation. The first MinAvgOut model minimizes the dot product of current batch AvgOut and the exponential average AvgOut across batches, which encourages low-frequency tokens to be generated. The second LFT model uses a labeled transduction method and scales a "diversity label" by the diversity score of the ground-truth target sequence during training, while during testing can generate responses of different levels of diversity by tweaking the intended diversity score. The third RL model leverages reinforcement learning, where our novel metric is applied to discrete tokens and serve as a reward signal. 
In addition, since MinAvgOut regularizes directly on the continuous distribution while RL calculates its reward based on discrete sampled tokens, we simply add up the loss terms of the two models, creating an even stronger hybrid model. We first employ diverse automatic metrics, including Distinct-1 and -2 from previous work BIBREF2 and our novel metric Diveristy-iAUC (which calculates one minus the sum of normalized frequencies of the most frequent tokens produced by the model), plus activity/entity F1s, to evaluate the diversity and relevance of the generated responses. We then conduct human evaluations to verify that these models not only outperform their base model LSTM by a large margin, but are also comparable to or better than an advanced decoding algorithm MMI BIBREF2 and a very competitive model VHRED BIBREF7 on the Ubuntu dataset. AvgOut as an Effective Diversity Metric By only keeping a static shortlist of boring responses or tokens, one basically assumes that we humans should decide which tokens are dull. However, we argue that we should instead look from the model's perspective to identify dull tokens, because even if the model outputs a word that we consider rare, including it in too many responses is still considered a dull behavior. Motivated by this thought experiment, we propose a novel metric, Average Output Probability Distribution (AvgOut), that dynamically keeps track of which tokens the model is biased toward. To calculate this, during training, we average out all the output probability distributions for each time step of the decoder for the whole mini-batch. The resulting vector $D^{\prime }$ will reflect each token's probability of being generated from the model's perspective. Note that we do not use discrete ground-truth tokens to evaluate the model's bias, because there is a fine distinction between the two: a statistics of frequency on ground-truth tokens is an evaluation of the corpus's bias, while AvgOut is an evaluation of what bias the model has learned because by generating dull responses more frequently than the training corpus has, it is the model itself that we should adjust. Also note that the reason we take the average is that a single output distribution will largely depend on the context and the previous target tokens (which are fed as inputs to the decoder during training), but on average the distribution should be a faithful evaluation on which words are more likely to be generated from model's perspective. To avoid batches that have AvgOut significantly different from those of other batches, which would lead the model astray, we keep the exponential average of this metric across batches to make it less biased toward any specific batch. Let it be $D$. After training on a mini-batch and obtain $D^{\prime }$, we update $D$ like the following: where $\gamma $ is $0.01$ in our experiments. Another consideration of AvgOut is that theoretically we can have two choices. The first is to use the output distributions when we are teacher-forcing (i.e., only feeding ground-truth tokens); the other is to let the model use its own predictions during greedy/beam-search decoding or sampling. We reason that the former is a much better estimation of the model's bias, because the latter will result in a cascading enlargement of the model bias due to the auto-regressive nature of LSTM-RNN models (i.e., the tokens fed to the decoder are themselves also polluted by the model's bias). Our early experimental results also agreed with the above reasoning. 
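A small sketch of how the exponential average described above can be maintained during training is given below; it assumes the decoder's per-step softmax outputs under teacher forcing are available as a tensor, and the update rule (current batch weighted by $\gamma$) follows the usual exponential-moving-average convention, since the exact formula is not reproduced here.

```python
import torch

GAMMA = 0.01              # weight of the current batch, as given in the text
VOCAB = 5000              # assumed vocabulary size

D = torch.full((VOCAB,), 1.0 / VOCAB)   # running AvgOut, initialized uniform

def update_avgout(D, decoder_probs, gamma=GAMMA):
    """decoder_probs: (batch, time, vocab) softmax outputs under teacher forcing.
    Returns the exponential average of the output distribution across batches."""
    D_batch = decoder_probs.mean(dim=(0, 1))             # average over batch and time steps
    return (1.0 - gamma) * D + gamma * D_batch.detach()  # keep the running average out of the graph

# Toy update with random "softmax" outputs for a batch of 4 responses of length 10.
probs = torch.softmax(torch.randn(4, 10, VOCAB), dim=-1)
D = update_avgout(D, probs)
print(D.sum())   # remains (approximately) a probability distribution
```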
Although we try to come up with the most faithful evaluation of how diverse a response is, our approach certainly has its drawbacks too. For example, using very frequent words but less frequent combinations of them may result in a good response which will be penalized by our metric. A natural solution to this is to also use bigram and trigram diversities and take a linear combination of them, which on a high-level is similar to BLEU BIBREF8. However, considering even bigram distribution takes up $O(|V|^2)$ space and calculation time, hence we did not try it due to limited resources. However, as will be presented in Section SECREF5, regularizing unigram distributions can already greatly help on higher-gram diversities, while also improving relevance. Three Models to Leverage AvgOut AvgOut can play at least three roles. First, it can be used to directly supervise output distribution during training; second, it can be used as a prior in labeled sequence transduction methods to control diversity of the generated response; and third, it can be used as a reward signal for Reinforcement Learning to encourage diverse sampled responses. In this section, we begin with a base vanilla seq2seq model, and next present our three models to diversify responses based on AvgOut. Our base model LSTM is identical to that proposed by BIBREF1 (BIBREF1), which consists of a single-layer bi-directional LSTM-RNN BIBREF9 encoder and a single-layer LSTM-RNN decoder with additive attention. Three Models to Leverage AvgOut ::: Regularization by Minimizing Continuous-AvgOut Our MinAvgOut model (Figure FIGREF3) directly integrates AvgOut into the loss function by summarizing it into a single numerical value named Continuous-AvgOut. We do this by taking the dot-product of $D$ and $D^{\prime }$ (Figure FIGREF6). The intuition behind this simple calculation is that $D$ can also be viewed as a set of weights which add up to $1.0$, since it is a probability vector. By taking the dot product, we are actually calculating a weighted average of each probability in $D^{\prime }$. To evaluate how diverse the model currently is, the duller tokens should obviously carry higher weights since they contribute more to the "dullness" of the whole utterance. Assuming that $D$ is a column vector, the continuous diversity score is $B_c$, and the resulting extra loss term is $L_B$, the total loss $L$ is given by: where $\alpha $ is a coefficient to balance the regularization loss with the maximum likelihood loss (a.k.a. teacher forcing loss) $L_{ML}$. This is important because the regularization term continues to discourage the model from generating the ground-truth token, which we need to balance by ML loss to reduce the impact (otherwise the model will be led astray). Note that since $D$ is a moving average which does not depend on the model parameters of the current mini-batch, only $D^{\prime }$ will result in gradient flow during back-propagation, which is what we intend. Three Models to Leverage AvgOut ::: Label-Fine-Tuning Model We also borrow the continuous version of the Label-Fine-Tuning (LFT) model from BIBREF10 (BIBREF10), which is an extension of the discrete labeled sequence transduction methods BIBREF11. The LFT model leverages a continuous label to serve as a prior for generating the target sequence. This label corresponds to an embedding just like a normal token does, but can be scaled by a continuous value. 
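The MinAvgOut loss described above can be sketched as follows: the continuous score $B_c$ is the dot product of the running average $D$ (treated as fixed weights) and the current-batch average $D^{\prime }$, and it is added to the teacher-forcing loss with the balancing coefficient $\alpha $; tensor shapes and the toy inputs are assumptions. Note that only the current-batch average receives gradient, since the running average is detached.

```python
import torch
import torch.nn.functional as F

ALPHA = 100.0   # balancing coefficient, as in the experimental setup

def minavgout_loss(logits, targets, D, alpha=ALPHA, pad_id=0):
    """logits: (batch, time, vocab) decoder outputs under teacher forcing;
    targets: (batch, time) ground-truth token ids; D: running AvgOut vector."""
    vocab = logits.size(-1)
    ml_loss = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1),
                              ignore_index=pad_id)
    probs = torch.softmax(logits, dim=-1)
    D_batch = probs.mean(dim=(0, 1))           # current-batch AvgOut (keeps gradients)
    B_c = torch.dot(D.detach(), D_batch)       # weighted-average "dullness" score
    return ml_loss + alpha * B_c               # L = L_ML + alpha * L_B

# Toy check with random tensors.
logits = torch.randn(4, 10, 5000, requires_grad=True)
targets = torch.randint(1, 5000, (4, 10))
D = torch.full((5000,), 1.0 / 5000)
print(minavgout_loss(logits, targets, D))
```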
This model is applicable to our case because the diversity score of a response can also be viewed as a style, ranging from $0.0$ to $1.0$. Specifically, we add to the vocabulary a diversity label and scale its embedding vector with the intended diversity score of the target sequence. During training, this score is obtained by evaluating the diversity of the ground-truth target sequence (see Figure FIGREF8); during test time, we instead feed the model a diversity label scaled by a score of our choice (i.e., when we want the model to generate a more diverse response, we scale the label's embedding by a higher score, while to generate a duller response, we scale the embedding by a lower one). Three Models to Leverage AvgOut ::: Reward-Based Reinforcement Learning We also explore a model (see Figure FIGREF11) which regularizes on the discrete token level, because merely monitoring output probability distribution may ignore certain bad styles such as repetition (e.g. "I don't don't know."). We use Discrete-AvgOut to calculate the continuous diversity score of a discrete sequence. Let $\lbrace G_1, G_2, ..., G_{N_G}\rbrace $ be a sequence of $N_G$ tokens sampled by the model during training. Then from $D$, we extract the probabilities $\lbrace P_1, P_2, ..., P_{N_G}\rbrace $ corresponding to each generated token. The diversity score $B_{d}$ on these discrete tokens will be: where $N_{unique}$ is the number of unique tokens in the sampled sequence (see Figure FIGREF12). Note that this division explicitly discourages the model from outputting repeated tokens, because when that happens, the nominator will stay the same, while the denominator will decrease, resulting in a lower diversity score. Also note that MinAvgOut can be complementary to RL since calculating diversity scores based on discrete tokens unavoidably loses valuable information from the output distribution before argmax is taken. In Section SECREF5, we show with both automatic and human evaluations that this combination indeed achieves the best results among our models. Following BIBREF12 (BIBREF12), our loss function consists of two terms. The first term is the Maximum Likelihood loss ($L_{\textsc {ml}}$); the other is the Reinforcement Learning loss ($L_{\textsc {rl}}$). The total loss $L$ is then: where $\beta $ is a hyperparameter indicating how much weight we want to assign to the rl part of the loss, $x$ is the source sequence, $\lbrace y_t^*\rbrace $ are the ground truth tokens and $\lbrace y_t^s\rbrace $ are the sampled tokens. We use a policy gradient method BIBREF13 to calculate the RL loss. Specifically, we sample a response for each context $x$, and assign to it a reward $R$, which is equal to $B_d$ because we want to encourage the model to be diverse. We also use a baseline $R_b$ that helps reduce variance during training BIBREF14. In our case this baseline is again the exponential average of all $B_d$ in previous mini-batches. Experimental Setup ::: Dataset and Task We use the task-oriented Ubuntu Dialogue dataset BIBREF15, because it not only has F1 metrics to evaluate the relevance of responses, but the dialogues in them are also open-ended to allow enough space for diversity. We also chose this dataset because previous work, e.g., HRED BIBREF3 and VHRED BIBREF7 both used Ubuntu to showcase their diversity-promotion models. Due to the popularity of this dataset, we were able to reproduce almost all models on this same dataset and have a meaningful comparison on their effectiveness of eliminating dullness. 
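For concreteness, the reward and loss of the RL model described in the previous subsection can be sketched as follows; the Discrete-AvgOut computation is one reading of the textual description (one minus the average AvgOut probability over the unique sampled tokens), and the sampling procedure, tensor names, and baseline handling are assumptions.

```python
import torch

BETA = 100.0      # weight of the RL term, from the experimental setup

def discrete_avgout(sampled_ids, D):
    """Diversity score of a sampled response: one minus the average (over the
    unique sampled tokens) of their AvgOut probabilities, so that frequent-token
    and repetitive outputs score low."""
    unique_ids = torch.unique(sampled_ids)
    return 1.0 - D[unique_ids].sum() / unique_ids.numel()

def rl_loss(sample_logprobs, sampled_ids, D, baseline):
    """REINFORCE-style term: negative log-likelihood of the sampled tokens,
    weighted by (reward - baseline); the baseline is a running average of
    past rewards."""
    reward = discrete_avgout(sampled_ids, D)
    return -(reward - baseline) * sample_logprobs.sum()

# Toy usage; the total loss would be L = L_ML + BETA * L_RL.
vocab = 5000
D = torch.full((vocab,), 1.0 / vocab)             # running AvgOut (see earlier sketch)
sampled_ids = torch.randint(0, vocab, (12,))      # one sampled response
sample_logprobs = torch.log_softmax(torch.randn(12, vocab), dim=-1)[
    torch.arange(12), sampled_ids]
print(rl_loss(sample_logprobs, sampled_ids, D, baseline=0.5))
```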
As future work, we plan to apply our models to other datasets where diversity is desired. Experimental Setup ::: Automatic Evaluation To measure the relevance of the model responses, we follow BIBREF16 (BIBREF16) and evaluate on F1's for both activities (technical verbs, e.g., "upload", "install") and entities (technical nouns, e.g., "root", "internet"). The F1's are computed by mapping the ground-truth and model responses to their corresponding activity-entity representations BIBREF16, who considered F1 to be "particularly suited for the goal-oriented Ubuntu Dialogue Corpus". We did not evaluate on BLEU score BIBREF8 because BIBREF17 showed that BLEU does not correlate well with dialogue quality. BIBREF18 (BIBREF18) also made similar observations on BLEU. To evaluate diversity, we employ two evaluation metrics from previous work, namely Distinct-1 and Distinct-2 BIBREF2. These are the ratios between the number of unique tokens and all tokens for unigrams and bigrams, respectively. In addition, we propose a novel diversity graph and its corresponding metric, which we name Diversity-32 and Diversity-AUC, respectively. We gather statistics of sentence, unigram, bigram and trigram, and sort their normalized frequencies from highest to lowest. Observing that all four graphs follow long-tail distributions, we only keep the highest 32 frequencies and plot them. We then calculate one minus the Area under Curve (Diversity-AUC) for each graph, which draws a high-level picture of how diverse a model is. Experimental Setup ::: Human Evaluation Although we proposed the effective AvgOut metric, we did find that the model sometimes still cheats to gain higher automatic diversity score. For example, as can be seen in the selected output examples (Section SECREF5), the model tends to generate words with typo since these are rarer tokens as compared to their correct counterparts. This is unavoidable for noisy datasets like Ubuntu. Thus, without human evaluation, we can never be sure if our models are good or they only look good because our metrics are exploited. We thus also conducted human studies on Amazon MTurk to evaluate the generated responses with pairwise comparison for dialogue quality. We compare our models with an advanced decoding algorithm MMI BIBREF2 and two models, namely LSTM BIBREF0 and VHRED BIBREF7, both with additive attention. To our best knowledge, LSTM and VHRED were the primary models with which F1's were reported on the Ubuntu dataset. Following BIBREF5 (BIBREF5), we employ two criteria: Plausibility and Content Richness. The first criterion measures whether the response is plausible given the context, while the second gauges whether the response is diverse and informative. The utterances were randomly shuffled to anonymize model identity. We only allowed annotators located in the US-located with at least an approval rate of $98\%$ and $10,000$ approved HITs. We collected 100 annotations in total after rejecting those completed by people who assign exactly the same score to all model responses. Since we evaluated 7 models, we collected 700 annotations in total, which came from a diverse pool of annotators. Experimental Setup ::: Training Details For each of the three models, the hidden size of the encoder is 256, while the decoder hidden size is 512. For MinAvgOut, the coefficient of the regularization loss term $\alpha $ is $100.0$; For LFT, during inference we feed a score of $0.015$ since it achieves a good balance between response coherence and diversity. 
For RL, the coefficient of the RL term $\beta $ is $100.0$. For the hybrid model MinAvgOut + RL, $\alpha $ and $\beta $ share a coefficient of $50.0$. Results and Analysis ::: Automatic Evaluation Results We employ several complementary metrics to capture different aspects of the model. The F1 results are shown in Table TABREF24. Among all single models, LFT performs the best, followed by MinAvgOut. RL is also comparable with previous state-of-the-art models VHRED (attn) and Reranking-RL. We think that this is because LFT exerts no force in pulling the model predictions away from the ground-truth tokens, but rather just makes itself aware of how dull each response is. Consequently, its responses appear more relevant than the other two approaches. Moreover, the hybrid model (last row) outperforms all other models by a large margin. One might expect that minimizing AVGOUT causes the models to move further away from the ground-truth tokens, so that it will hurt relevance. However, our F1 results show that as the responses become more diverse, they are more likely to include information more related and specific to the input contexts, which actually makes the model gain on both diversity and relevance. This will be further confirmed by the output examples in Table TABREF29. We also present Diversity-32 graphs (Figure FIGREF16) and report Diversity-AUC as well as Distinct-1 and -2 for each model (Table TABREF25). We can see that all our models have significantly better sentence-level diversity than VHRED, let alone LSTM. For unigram diversity, they are also better than LSTM, though hard to distinguish from VHRED. Both bigram and trigram graphs reveal that all models are more diverse than LSTM, except that RL shows lower diversity than the other models, which agree with our F1 results. Note that since our models are only trained based on unigram output distributions, the bigram and trigram diversities are still far away from that of the ground-truth, which points to future direction. That said, the table does show that encouraging unigram diversity can already have positive influence on higher grams as well. Also note that the hybrid model (last row) does not achieve the best result in terms of diversity. We hypothesize that this is because RL, which is usually harder to optimize than ML losses, faces exacerbated issues when combined with a strong MinAvgOut loss, which tries to pull the model output distribution away from the token distribution in the training corpus. Neither Distinct-1 nor -2 correlates well with our observation and evaluation of diversity and relevance. We reason that this is because these metrics only capture how many distinct tokens are used rather than each token's frequency, which is easier to be exploited because whether each token is used unnecessarily often (a strong sign of dullness) is not reflected in this measure. Results and Analysis ::: Human Evaluation Results As mentioned in experimental setup, we conducted human evaluations on our models for both Plausibility and Content Richness, as well as calculating their average (to show overall score) and their difference (to show balance between the two criteria) (Table TABREF26). We can see from the table that all our models are statistically significantly better than the baseline models on both Plausibility and Content Richness, except that RL is slightly weaker on Content Richness, which agrees with the trend in automatic evaluations. 
Although MinAvgOut+RL model only ranks the second on average score (statistically equivalent to MinAvgOut) in human evaluation, it achieves a good balance, and it also ranks the second in automatic diversity and the first in F1 values. We thus consider it to be our best model. Results and Analysis ::: Selected Output Examples We present two selected examples of generated responses from the investigated models (Table TABREF29). We can see that all our models learn to attend well to the context, generating coherent and informative responses. Related Work ::: Measurements of Response Diversity Multiple metrics and approaches have been proposed to measure dialogue diversity. Some focus more on how similar the responses are to the ground-truth sequences, such as Word Error Rate BIBREF3 and BLEU BIBREF20, while the others explicitly have diversity in mind when being created, such as Distinct-1 and -2 BIBREF2. The key difference between AvgOut and the previous work is that first, our metric is dynamic with no feature-engineering; second, ours is versatile enough to be applied to both continuous distributions and discrete sequences, while theirs are only for discrete tokens; third, ours can be used for both sentence-level and corpus-level evaluation, while theirs are only meaningful as corpus-level metrics because they measure the extent of repetition across responses rather than for a single response. Related Work ::: Diversity-Promoting Dialogue Models Researchers have different opinions on why dull responses are generated, which lead to various solutions. They can be roughly divided into four categories. The first category considers using conditional likelihood as a decoding objective the culprit BIBREF5, BIBREF2, BIBREF21, BIBREF22. They thus focus on improving the decoding algorithm during training. The second category traces the cause of the low-diversity problem back to the lack of model variability. They then adopt Variational Autoencoders and rely on sampling from a latent random variable as an additional prior to the decoder BIBREF7, BIBREF23, BIBREF24. The third category thinks that the issue is a lack of universal background knowledge and common sense beyond the input context. They consequently aim to integrate prior knowledge into the generation process BIBREF25, BIBREF26, BIBREF27, BIBREF28. The fourth category believes that the underlying model itself needs improvement. Some use hierarchical LSTM-RNN to encourage the model to capture high-level context BIBREF3; some use more advanced attention mechanism such as multi-head attention BIBREF29; and some use either more complicated architectures or models prone to degeneracies, such as Generative Adversarial Networks BIBREF30, Deep Reinforcement Learning BIBREF6 and Mixture Models BIBREF31. Our RL model has the same architecture as the Reinforcement Learning model, except with different rewards. BIBREF32 (BIBREF32) consider the reason for dull responses as the model's over-confidence. They then propose to add to the loss function a regularization term to maximize the entropy of the output probability distribution. Interestingly, they only proposed this simple approach rather than actually implementing it. Our MinAvgOut approach is related to their idea. Our approach is also related to posterior regularization BIBREF33, BIBREF34, BIBREF35, but our work is neural-based. 
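For reference, the Distinct-n ratios and the Diversity-AUC statistic used in the automatic evaluation above can be sketched as follows; whitespace tokenization and the way the top-$k$ frequencies are summed are assumptions based on the description (one minus the sum of the normalized frequencies of the most frequent n-grams).

```python
from collections import Counter
import numpy as np

def ngrams(text, n):
    toks = text.split()
    return [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]

def distinct_n(responses, n):
    """Ratio of unique n-grams to all generated n-grams (Distinct-1/-2)."""
    all_ngrams = [g for r in responses for g in ngrams(r, n)]
    return len(set(all_ngrams)) / max(len(all_ngrams), 1)

def diversity_auc(responses, n=1, top_k=32):
    """One minus the sum of the normalized frequencies of the top_k most
    frequent n-grams, following the description of Diversity-AUC."""
    counts = Counter(g for r in responses for g in ngrams(r, n))
    freqs = np.array(sorted(counts.values(), reverse=True), dtype=float)
    freqs /= freqs.sum()
    return 1.0 - freqs[:top_k].sum()

responses = ["i do not know", "try sudo apt-get update", "i do not know"]
print(distinct_n(responses, 1), distinct_n(responses, 2))
print(diversity_auc(responses, n=1, top_k=3))   # small top_k for the toy corpus
```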
Conclusion We proposed a novel measure AvgOut to dynamically evaluate how diverse a model or a response is based on the models' own parameters, which themselves evolve during training. We then leveraged this effective measure to train three models, plus a hybrid model, to eliminate dull responses for dialogue generation tasks. In addition, we designed novel automatic metrics to evaluate the trained models on diversity, in addition to the ones from previous work. Both automatic and human evaluations consolidated that our models are able to generate more diverse and relevant responses, even when compared with state-of-the-art approaches. For future work, we plan to apply these models to different generative tasks where diversity is desired. Acknowledgments We thank the reviewers for their helpful comments. This work was supported by NSF-CAREER Award #1846185, ONR #N00014-18-1-2871, and awards from Google, Facebook, Salesforce (views are not of the funding agency).
LSTMs with and without attention, HRED, VHRED with and without attention, MMI and Reranking-RL
a09633584df1e4b9577876f35e38b37fdd83fa63
a09633584df1e4b9577876f35e38b37fdd83fa63_0
Q: How is human evaluation performed, what was the criteria? Text: Introduction Many modern dialogue generation models use a sequence-to-sequence architecture as their backbone BIBREF0, following its success when applied to Machine Translation (MT) BIBREF1. However, dialogue tasks also have a requirement different from that of MT: the response not only has to be "correct" (coherent and relevant), but also needs to be diverse and informative. However, seq2seq has been reported by many previous works to have low corpus-level diversity BIBREF2, BIBREF3, BIBREF0, BIBREF4, as it tends to generate safe, terse, and uninformative responses, such as "I don't know.". These responses unnecessarily make a dialogue system much less interactive than it should be. To increase the diversity of dialogue responses, the first step is to faithfully evaluate how diverse a response is. There are metrics used by previous work that are correlated to diversity, but not strongly, such as ratio of distinct tokens BIBREF2 and response length BIBREF5. However, a response can be long but extremely boring in meaning, such as "I am sure that I don't know about it.", or short but interesting (i.e., contains a lot of information), such as "Dad was mean.". Only investigating discrete token output by the model is also not ideal, because these tokens are only a single realization of the model's output probability distribution at each time step, which unavoidably loses valuable information indicated by the whole distribution. BIBREF6 (BIBREF6) manually collect a shortlist of dull responses, and during training discourage the model from producing such utterances. However, an important drawback of hand-crafted rules is that the set of dull tokens or utterances is static, while in fact it usually evolves during training: when the current dull tokens are eliminated, another set of them might reveal themselves. In our work, we begin with a simple yet effective approach to measure how diverse a response is. This metric, which we name "Average Output Probability Distribution", or AvgOut, draws information directly from the training-in-session model itself. We calculate it by keeping track of the exponential average of all output probability distributions on the decoder side during training. This metric dynamically measures which tokens the model is biased toward without any hand-crafted rules, thus making it a faithful evaluation of the model diversity (i.e., for diverse models, the token probabilities should be more evenly distributed rather than peaked at a few dull tokens). In addition, since AvgOut is a one-dimensional categorical distribution rather than a dimensionless numerical value like entropy, it naturally carries and conveys more information about model diversity. We then propose three models that leverage our novel metric to promote diversity in dialogue generation. The first MinAvgOut model minimizes the dot product of current batch AvgOut and the exponential average AvgOut across batches, which encourages low-frequency tokens to be generated. The second LFT model uses a labeled transduction method and scales a "diversity label" by the diversity score of the ground-truth target sequence during training, while during testing can generate responses of different levels of diversity by tweaking the intended diversity score. The third RL model leverages reinforcement learning, where our novel metric is applied to discrete tokens and serve as a reward signal. 
In addition, since MinAvgOut regularizes directly on the continuous distribution while RL calculates its reward based on discrete sampled tokens, we simply add up the loss terms of the two models, creating an even stronger hybrid model. We first employ diverse automatic metrics, including Distinct-1 and -2 from previous work BIBREF2 and our novel metric Diversity-iAUC (which calculates one minus the sum of normalized frequencies of the most frequent tokens produced by the model), plus activity/entity F1s, to evaluate the diversity and relevance of the generated responses. We then conduct human evaluations to verify that these models not only outperform their base model LSTM by a large margin, but are also comparable to or better than an advanced decoding algorithm MMI BIBREF2 and a very competitive model VHRED BIBREF7 on the Ubuntu dataset. AvgOut as an Effective Diversity Metric By only keeping a static shortlist of boring responses or tokens, one basically assumes that we humans should decide which tokens are dull. However, we argue that we should instead look from the model's perspective to identify dull tokens, because even if the model outputs a word that we consider rare, including it in too many responses is still considered a dull behavior. Motivated by this thought experiment, we propose a novel metric, Average Output Probability Distribution (AvgOut), that dynamically keeps track of which tokens the model is biased toward. To calculate this, during training, we average all the output probability distributions for each time step of the decoder over the whole mini-batch. The resulting vector $D^{\prime }$ will reflect each token's probability of being generated from the model's perspective. Note that we do not use discrete ground-truth tokens to evaluate the model's bias, because there is a fine distinction between the two: a frequency statistic over ground-truth tokens is an evaluation of the corpus's bias, while AvgOut is an evaluation of the bias the model has learned; if the model generates dull responses more frequently than the training corpus does, it is the model itself that we should adjust. Also note that the reason we take the average is that a single output distribution will largely depend on the context and the previous target tokens (which are fed as inputs to the decoder during training), but on average the distribution should be a faithful evaluation of which words are more likely to be generated from the model's perspective. To avoid batches that have AvgOut significantly different from those of other batches, which would lead the model astray, we keep the exponential average of this metric across batches to make it less biased toward any specific batch. Let it be $D$. After training on a mini-batch and obtaining $D^{\prime }$, we update $D$ as follows: where $\gamma $ is $0.01$ in our experiments. Another consideration of AvgOut is that theoretically we can have two choices. The first is to use the output distributions when we are teacher-forcing (i.e., only feeding ground-truth tokens); the other is to let the model use its own predictions during greedy/beam-search decoding or sampling. We reason that the former is a much better estimation of the model's bias, because the latter will result in a cascading enlargement of the model bias due to the auto-regressive nature of LSTM-RNN models (i.e., the tokens fed to the decoder are themselves also polluted by the model's bias). Our early experimental results also agreed with the above reasoning.
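To make the AvgOut bookkeeping concrete, the following minimal sketch implements the batch-level averaging and the cross-batch exponential average described above. The update equation itself is elided in the text, so the exponential-moving-average form below (with $\gamma = 0.01$) is an assumption based on the "exponential average" description; the function name and the decoder_probs argument are illustrative, not from the paper.

import numpy as np

def update_avgout(D, decoder_probs, gamma=0.01):
    """Sketch of the AvgOut update (assumed exponential-moving-average form).

    D             -- running AvgOut vector of size |V| from previous batches
    decoder_probs -- array of shape (batch, time, |V|) holding the decoder's
                     softmax outputs for the current mini-batch (teacher-forced)
    """
    # D': average of all output distributions in the current mini-batch
    D_prime = decoder_probs.reshape(-1, decoder_probs.shape[-1]).mean(axis=0)
    # Exponential average across batches so no single batch dominates D
    D = (1.0 - gamma) * D + gamma * D_prime
    return D, D_prime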
Although we try to come up with the most faithful evaluation of how diverse a response is, our approach certainly has its drawbacks too. For example, using very frequent words but less frequent combinations of them may result in a good response which will be penalized by our metric. A natural solution to this is to also use bigram and trigram diversities and take a linear combination of them, which at a high level is similar to BLEU BIBREF8. However, even the bigram distribution takes up $O(|V|^2)$ space and computation time, so we did not try it due to limited resources. That said, as will be presented in Section SECREF5, regularizing unigram distributions can already greatly help on higher-gram diversities, while also improving relevance. Three Models to Leverage AvgOut AvgOut can play at least three roles. First, it can be used to directly supervise the output distribution during training; second, it can be used as a prior in labeled sequence transduction methods to control the diversity of the generated response; and third, it can be used as a reward signal for Reinforcement Learning to encourage diverse sampled responses. In this section, we begin with a base vanilla seq2seq model, and next present our three models to diversify responses based on AvgOut. Our base model LSTM is identical to that proposed by BIBREF1 (BIBREF1), which consists of a single-layer bi-directional LSTM-RNN BIBREF9 encoder and a single-layer LSTM-RNN decoder with additive attention. Three Models to Leverage AvgOut ::: Regularization by Minimizing Continuous-AvgOut Our MinAvgOut model (Figure FIGREF3) directly integrates AvgOut into the loss function by summarizing it into a single numerical value named Continuous-AvgOut. We do this by taking the dot-product of $D$ and $D^{\prime }$ (Figure FIGREF6). The intuition behind this simple calculation is that $D$ can also be viewed as a set of weights which add up to $1.0$, since it is a probability vector. By taking the dot product, we are actually calculating a weighted average of each probability in $D^{\prime }$. To evaluate how diverse the model currently is, the duller tokens should obviously carry higher weights since they contribute more to the "dullness" of the whole utterance. Assuming that $D$ is a column vector, the continuous diversity score is $B_c$, and the resulting extra loss term is $L_B$, the total loss $L$ is given by: where $\alpha $ is a coefficient to balance the regularization loss with the maximum likelihood loss (a.k.a. teacher forcing loss) $L_{ML}$. This is important because the regularization term continues to discourage the model from generating the ground-truth token, which we need to balance with the ML loss to reduce the impact (otherwise the model will be led astray). Note that since $D$ is a moving average which does not depend on the model parameters of the current mini-batch, only $D^{\prime }$ will result in gradient flow during back-propagation, which is what we intend. Three Models to Leverage AvgOut ::: Label-Fine-Tuning Model We also borrow the continuous version of the Label-Fine-Tuning (LFT) model from BIBREF10 (BIBREF10), which is an extension of the discrete labeled sequence transduction methods BIBREF11. The LFT model leverages a continuous label to serve as a prior for generating the target sequence. This label corresponds to an embedding just like a normal token does, but can be scaled by a continuous value.
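The exact equation for the MinAvgOut objective is elided above, but the description in the Regularization by Minimizing Continuous-AvgOut subsection (the Continuous-AvgOut score is the dot product of $D$ and $D^{\prime }$, balanced against the ML loss by $\alpha $, with gradients flowing only through $D^{\prime }$) suggests a form along the lines of the sketch below. The function name and the ml_loss argument are illustrative, and the additive combination is an assumption rather than the paper's verbatim formula.

import torch

def minavgout_loss(ml_loss, decoder_probs, D, alpha=100.0):
    """Assumed form of the MinAvgOut objective: ML loss plus a penalty on the
    Continuous-AvgOut score B_c = D . D'.

    ml_loss       -- teacher-forcing (maximum likelihood) loss for the batch
    decoder_probs -- (batch, time, |V|) differentiable softmax outputs
    D             -- running AvgOut from previous batches (treated as constant)
    """
    D_prime = decoder_probs.reshape(-1, decoder_probs.size(-1)).mean(dim=0)
    # Weighted average of the current probabilities; historically dull tokens
    # (large entries of D) carry larger weights.
    B_c = torch.dot(D.detach(), D_prime)
    # Only D' carries gradients, as intended in the description above.
    return ml_loss + alpha * B_c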
This model is applicable to our case because the diversity score of a response can also be viewed as a style, ranging from $0.0$ to $1.0$. Specifically, we add to the vocabulary a diversity label and scale its embedding vector with the intended diversity score of the target sequence. During training, this score is obtained by evaluating the diversity of the ground-truth target sequence (see Figure FIGREF8); during test time, we instead feed the model a diversity label scaled by a score of our choice (i.e., when we want the model to generate a more diverse response, we scale the label's embedding by a higher score, while to generate a duller response, we scale the embedding by a lower one). Three Models to Leverage AvgOut ::: Reward-Based Reinforcement Learning We also explore a model (see Figure FIGREF11) which regularizes on the discrete token level, because merely monitoring the output probability distribution may ignore certain bad styles such as repetition (e.g. "I don't don't know."). We use Discrete-AvgOut to calculate the continuous diversity score of a discrete sequence. Let $\lbrace G_1, G_2, ..., G_{N_G}\rbrace $ be a sequence of $N_G$ tokens sampled by the model during training. Then from $D$, we extract the probabilities $\lbrace P_1, P_2, ..., P_{N_G}\rbrace $ corresponding to each generated token. The diversity score $B_{d}$ on these discrete tokens will be: where $N_{unique}$ is the number of unique tokens in the sampled sequence (see Figure FIGREF12). Note that this division explicitly discourages the model from outputting repeated tokens, because when that happens, the numerator will stay the same, while the denominator will decrease, resulting in a lower diversity score. Also note that MinAvgOut can be complementary to RL since calculating diversity scores based on discrete tokens unavoidably loses valuable information from the output distribution before argmax is taken. In Section SECREF5, we show with both automatic and human evaluations that this combination indeed achieves the best results among our models. Following BIBREF12 (BIBREF12), our loss function consists of two terms. The first term is the Maximum Likelihood loss ($L_{\textsc {ml}}$); the other is the Reinforcement Learning loss ($L_{\textsc {rl}}$). The total loss $L$ is then: where $\beta $ is a hyperparameter indicating how much weight we want to assign to the RL part of the loss, $x$ is the source sequence, $\lbrace y_t^*\rbrace $ are the ground truth tokens and $\lbrace y_t^s\rbrace $ are the sampled tokens. We use a policy gradient method BIBREF13 to calculate the RL loss. Specifically, we sample a response for each context $x$, and assign to it a reward $R$, which is equal to $B_d$ because we want to encourage the model to be diverse. We also use a baseline $R_b$ that helps reduce variance during training BIBREF14. In our case this baseline is again the exponential average of all $B_d$ in previous mini-batches. Experimental Setup ::: Dataset and Task We use the task-oriented Ubuntu Dialogue dataset BIBREF15, because it not only has F1 metrics to evaluate the relevance of responses, but the dialogues in it are also open-ended to allow enough space for diversity. We also chose this dataset because previous work, e.g., HRED BIBREF3 and VHRED BIBREF7, both used Ubuntu to showcase their diversity-promotion models. Due to the popularity of this dataset, we were able to reproduce almost all models on this same dataset and have a meaningful comparison of their effectiveness at eliminating dullness.
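Because both the $B_{d}$ equation and the combined RL loss are elided above, the sketch below spells out one reading consistent with the surrounding Reward-Based Reinforcement Learning description: $B_{d} = 1 - (\sum _{i} P_{i}) / N_{unique}$, which indeed drops when tokens repeat, together with a REINFORCE-style term that uses the running average of $B_{d}$ as the baseline $R_b$. The function names are illustrative, and the exact way the paper weights and combines the terms via $\beta $ is not reproduced here.

import torch

def discrete_avgout(sampled_ids, D):
    """Assumed form of Discrete-AvgOut: B_d = 1 - sum_i P_i / N_unique, where
    P_i = D[G_i] for each sampled token G_i; repetition lowers N_unique and
    therefore lowers the score."""
    probs = D[sampled_ids]                      # P_1 ... P_{N_G}
    n_unique = torch.unique(sampled_ids).numel()
    return 1.0 - probs.sum() / n_unique

def rl_loss(sampled_log_probs, sampled_ids, D, baseline):
    """REINFORCE-style loss with reward R = B_d and a running-average baseline
    R_b, following the policy-gradient description above (sketch only)."""
    reward = discrete_avgout(sampled_ids, D)
    advantage = (reward - baseline).detach()
    # Maximizing the expected reward is equivalent to minimizing the negative
    # advantage-weighted log-likelihood of the sampled tokens.
    return -advantage * sampled_log_probs.sum()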
As future work, we plan to apply our models to other datasets where diversity is desired. Experimental Setup ::: Automatic Evaluation To measure the relevance of the model responses, we follow BIBREF16 (BIBREF16) and evaluate on F1's for both activities (technical verbs, e.g., "upload", "install") and entities (technical nouns, e.g., "root", "internet"). The F1's are computed by mapping the ground-truth and model responses to their corresponding activity-entity representations BIBREF16, who considered F1 to be "particularly suited for the goal-oriented Ubuntu Dialogue Corpus". We did not evaluate on BLEU score BIBREF8 because BIBREF17 showed that BLEU does not correlate well with dialogue quality. BIBREF18 (BIBREF18) also made similar observations on BLEU. To evaluate diversity, we employ two evaluation metrics from previous work, namely Distinct-1 and Distinct-2 BIBREF2. These are the ratios between the number of unique tokens and all tokens for unigrams and bigrams, respectively. In addition, we propose a novel diversity graph and its corresponding metric, which we name Diversity-32 and Diversity-AUC, respectively. We gather sentence-, unigram-, bigram- and trigram-level statistics, and sort their normalized frequencies from highest to lowest. Observing that all four graphs follow long-tail distributions, we only keep the highest 32 frequencies and plot them. We then calculate one minus the Area under Curve (Diversity-AUC) for each graph, which draws a high-level picture of how diverse a model is. Experimental Setup ::: Human Evaluation Although we proposed the effective AvgOut metric, we did find that the model sometimes still cheats to gain a higher automatic diversity score. For example, as can be seen in the selected output examples (Section SECREF5), the model tends to generate words with typos, since these are rarer tokens compared to their correct counterparts. This is unavoidable for noisy datasets like Ubuntu. Thus, without human evaluation, we can never be sure whether our models are good or whether they only look good because our metrics are exploited. We thus also conducted human studies on Amazon MTurk to evaluate the generated responses with pairwise comparison for dialogue quality. We compare our models with an advanced decoding algorithm MMI BIBREF2 and two models, namely LSTM BIBREF0 and VHRED BIBREF7, both with additive attention. To the best of our knowledge, LSTM and VHRED were the primary models with which F1's were reported on the Ubuntu dataset. Following BIBREF5 (BIBREF5), we employ two criteria: Plausibility and Content Richness. The first criterion measures whether the response is plausible given the context, while the second gauges whether the response is diverse and informative. The utterances were randomly shuffled to anonymize model identity. We only allowed annotators located in the US with an approval rate of at least $98\%$ and at least $10,000$ approved HITs. We collected 100 annotations for each model after rejecting those completed by people who assigned exactly the same score to all model responses. Since we evaluated 7 models, we collected 700 annotations in total, which came from a diverse pool of annotators. Experimental Setup ::: Training Details For each of the three models, the hidden size of the encoder is 256, while the decoder hidden size is 512. For MinAvgOut, the coefficient of the regularization loss term $\alpha $ is $100.0$; for LFT, during inference we feed a score of $0.015$ since it achieves a good balance between response coherence and diversity.
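As a worked illustration of the Diversity-32 / Diversity-AUC metric defined in the Automatic Evaluation paragraph above, here is a small sketch for one granularity. The text does not spell out the exact area computation, so treating the area as the plain sum of the 32 highest normalized frequencies is an assumption, in line with the introduction's phrasing of "one minus the sum of normalized frequencies of the most frequent tokens"; the function name and the toy responses are illustrative.

from collections import Counter

def diversity_auc(items, k=32):
    """Sketch of Diversity-AUC for one granularity (sentences, unigrams,
    bigrams or trigrams); the area is approximated by the sum of the k highest
    normalized frequencies."""
    counts = Counter(items)
    total = sum(counts.values())
    top_freqs = sorted((c / total for c in counts.values()), reverse=True)[:k]
    return 1.0 - sum(top_freqs)

# Example: unigram Diversity-AUC over a toy set of generated responses
responses = ["i do not know", "try sudo apt-get install", "i do not know"]
unigrams = [token for r in responses for token in r.split()]
print(diversity_auc(unigrams))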
For RL, the coefficient of the RL term $\beta $ is $100.0$. For the hybrid model MinAvgOut + RL, $\alpha $ and $\beta $ share a coefficient of $50.0$. Results and Analysis ::: Automatic Evaluation Results We employ several complementary metrics to capture different aspects of the model. The F1 results are shown in Table TABREF24. Among all single models, LFT performs the best, followed by MinAvgOut. RL is also comparable with previous state-of-the-art models VHRED (attn) and Reranking-RL. We think that this is because LFT exerts no force in pulling the model predictions away from the ground-truth tokens, but rather just makes itself aware of how dull each response is. Consequently, its responses appear more relevant than the other two approaches. Moreover, the hybrid model (last row) outperforms all other models by a large margin. One might expect that minimizing AVGOUT causes the models to move further away from the ground-truth tokens, so that it will hurt relevance. However, our F1 results show that as the responses become more diverse, they are more likely to include information more related and specific to the input contexts, which actually makes the model gain on both diversity and relevance. This will be further confirmed by the output examples in Table TABREF29. We also present Diversity-32 graphs (Figure FIGREF16) and report Diversity-AUC as well as Distinct-1 and -2 for each model (Table TABREF25). We can see that all our models have significantly better sentence-level diversity than VHRED, let alone LSTM. For unigram diversity, they are also better than LSTM, though hard to distinguish from VHRED. Both bigram and trigram graphs reveal that all models are more diverse than LSTM, except that RL shows lower diversity than the other models, which agree with our F1 results. Note that since our models are only trained based on unigram output distributions, the bigram and trigram diversities are still far away from that of the ground-truth, which points to future direction. That said, the table does show that encouraging unigram diversity can already have positive influence on higher grams as well. Also note that the hybrid model (last row) does not achieve the best result in terms of diversity. We hypothesize that this is because RL, which is usually harder to optimize than ML losses, faces exacerbated issues when combined with a strong MinAvgOut loss, which tries to pull the model output distribution away from the token distribution in the training corpus. Neither Distinct-1 nor -2 correlates well with our observation and evaluation of diversity and relevance. We reason that this is because these metrics only capture how many distinct tokens are used rather than each token's frequency, which is easier to be exploited because whether each token is used unnecessarily often (a strong sign of dullness) is not reflected in this measure. Results and Analysis ::: Human Evaluation Results As mentioned in experimental setup, we conducted human evaluations on our models for both Plausibility and Content Richness, as well as calculating their average (to show overall score) and their difference (to show balance between the two criteria) (Table TABREF26). We can see from the table that all our models are statistically significantly better than the baseline models on both Plausibility and Content Richness, except that RL is slightly weaker on Content Richness, which agrees with the trend in automatic evaluations. 
Although MinAvgOut+RL model only ranks the second on average score (statistically equivalent to MinAvgOut) in human evaluation, it achieves a good balance, and it also ranks the second in automatic diversity and the first in F1 values. We thus consider it to be our best model. Results and Analysis ::: Selected Output Examples We present two selected examples of generated responses from the investigated models (Table TABREF29). We can see that all our models learn to attend well to the context, generating coherent and informative responses. Related Work ::: Measurements of Response Diversity Multiple metrics and approaches have been proposed to measure dialogue diversity. Some focus more on how similar the responses are to the ground-truth sequences, such as Word Error Rate BIBREF3 and BLEU BIBREF20, while the others explicitly have diversity in mind when being created, such as Distinct-1 and -2 BIBREF2. The key difference between AvgOut and the previous work is that first, our metric is dynamic with no feature-engineering; second, ours is versatile enough to be applied to both continuous distributions and discrete sequences, while theirs are only for discrete tokens; third, ours can be used for both sentence-level and corpus-level evaluation, while theirs are only meaningful as corpus-level metrics because they measure the extent of repetition across responses rather than for a single response. Related Work ::: Diversity-Promoting Dialogue Models Researchers have different opinions on why dull responses are generated, which lead to various solutions. They can be roughly divided into four categories. The first category considers using conditional likelihood as a decoding objective the culprit BIBREF5, BIBREF2, BIBREF21, BIBREF22. They thus focus on improving the decoding algorithm during training. The second category traces the cause of the low-diversity problem back to the lack of model variability. They then adopt Variational Autoencoders and rely on sampling from a latent random variable as an additional prior to the decoder BIBREF7, BIBREF23, BIBREF24. The third category thinks that the issue is a lack of universal background knowledge and common sense beyond the input context. They consequently aim to integrate prior knowledge into the generation process BIBREF25, BIBREF26, BIBREF27, BIBREF28. The fourth category believes that the underlying model itself needs improvement. Some use hierarchical LSTM-RNN to encourage the model to capture high-level context BIBREF3; some use more advanced attention mechanism such as multi-head attention BIBREF29; and some use either more complicated architectures or models prone to degeneracies, such as Generative Adversarial Networks BIBREF30, Deep Reinforcement Learning BIBREF6 and Mixture Models BIBREF31. Our RL model has the same architecture as the Reinforcement Learning model, except with different rewards. BIBREF32 (BIBREF32) consider the reason for dull responses as the model's over-confidence. They then propose to add to the loss function a regularization term to maximize the entropy of the output probability distribution. Interestingly, they only proposed this simple approach rather than actually implementing it. Our MinAvgOut approach is related to their idea. Our approach is also related to posterior regularization BIBREF33, BIBREF34, BIBREF35, but our work is neural-based. 
Conclusion We proposed a novel measure AvgOut to dynamically evaluate how diverse a model or a response is based on the models' own parameters, which themselves evolve during training. We then leveraged this effective measure to train three models, plus a hybrid model, to eliminate dull responses for dialogue generation tasks. In addition, we designed novel automatic metrics to evaluate the trained models on diversity, in addition to the ones from previous work. Both automatic and human evaluations consolidated that our models are able to generate more diverse and relevant responses, even when compared with state-of-the-art approaches. For future work, we plan to apply these models to different generative tasks where diversity is desired. Acknowledgments We thank the reviewers for their helpful comments. This work was supported by NSF-CAREER Award #1846185, ONR #N00014-18-1-2871, and awards from Google, Facebook, Salesforce (views are not of the funding agency).
Through Amazon MTurk annotators who rated the plausibility and content richness of the responses
5e9732ff8595b31f81740082333b241d0a5f7c9a
5e9732ff8595b31f81740082333b241d0a5f7c9a_0
Q: How much better were results of the proposed models than base LSTM-RNN model? Text: Introduction Many modern dialogue generation models use a sequence-to-sequence architecture as their backbone BIBREF0, following its success when applied to Machine Translation (MT) BIBREF1. However, dialogue tasks also have a requirement different from that of MT: the response not only has to be "correct" (coherent and relevant), but also needs to be diverse and informative. However, seq2seq has been reported by many previous works to have low corpus-level diversity BIBREF2, BIBREF3, BIBREF0, BIBREF4, as it tends to generate safe, terse, and uninformative responses, such as "I don't know.". These responses unnecessarily make a dialogue system much less interactive than it should be. To increase the diversity of dialogue responses, the first step is to faithfully evaluate how diverse a response is. There are metrics used by previous work that are correlated to diversity, but not strongly, such as ratio of distinct tokens BIBREF2 and response length BIBREF5. However, a response can be long but extremely boring in meaning, such as "I am sure that I don't know about it.", or short but interesting (i.e., contains a lot of information), such as "Dad was mean.". Only investigating discrete token output by the model is also not ideal, because these tokens are only a single realization of the model's output probability distribution at each time step, which unavoidably loses valuable information indicated by the whole distribution. BIBREF6 (BIBREF6) manually collect a shortlist of dull responses, and during training discourage the model from producing such utterances. However, an important drawback of hand-crafted rules is that the set of dull tokens or utterances is static, while in fact it usually evolves during training: when the current dull tokens are eliminated, another set of them might reveal themselves. In our work, we begin with a simple yet effective approach to measure how diverse a response is. This metric, which we name "Average Output Probability Distribution", or AvgOut, draws information directly from the training-in-session model itself. We calculate it by keeping track of the exponential average of all output probability distributions on the decoder side during training. This metric dynamically measures which tokens the model is biased toward without any hand-crafted rules, thus making it a faithful evaluation of the model diversity (i.e., for diverse models, the token probabilities should be more evenly distributed rather than peaked at a few dull tokens). In addition, since AvgOut is a one-dimensional categorical distribution rather than a dimensionless numerical value like entropy, it naturally carries and conveys more information about model diversity. We then propose three models that leverage our novel metric to promote diversity in dialogue generation. The first MinAvgOut model minimizes the dot product of current batch AvgOut and the exponential average AvgOut across batches, which encourages low-frequency tokens to be generated. The second LFT model uses a labeled transduction method and scales a "diversity label" by the diversity score of the ground-truth target sequence during training, while during testing can generate responses of different levels of diversity by tweaking the intended diversity score. The third RL model leverages reinforcement learning, where our novel metric is applied to discrete tokens and serve as a reward signal. 
In addition, since MinAvgOut regularizes directly on the continuous distribution while RL calculates its reward based on discrete sampled tokens, we simply add up the loss terms of the two models, creating an even stronger hybrid model. We first employ diverse automatic metrics, including Distinct-1 and -2 from previous work BIBREF2 and our novel metric Diveristy-iAUC (which calculates one minus the sum of normalized frequencies of the most frequent tokens produced by the model), plus activity/entity F1s, to evaluate the diversity and relevance of the generated responses. We then conduct human evaluations to verify that these models not only outperform their base model LSTM by a large margin, but are also comparable to or better than an advanced decoding algorithm MMI BIBREF2 and a very competitive model VHRED BIBREF7 on the Ubuntu dataset. AvgOut as an Effective Diversity Metric By only keeping a static shortlist of boring responses or tokens, one basically assumes that we humans should decide which tokens are dull. However, we argue that we should instead look from the model's perspective to identify dull tokens, because even if the model outputs a word that we consider rare, including it in too many responses is still considered a dull behavior. Motivated by this thought experiment, we propose a novel metric, Average Output Probability Distribution (AvgOut), that dynamically keeps track of which tokens the model is biased toward. To calculate this, during training, we average out all the output probability distributions for each time step of the decoder for the whole mini-batch. The resulting vector $D^{\prime }$ will reflect each token's probability of being generated from the model's perspective. Note that we do not use discrete ground-truth tokens to evaluate the model's bias, because there is a fine distinction between the two: a statistics of frequency on ground-truth tokens is an evaluation of the corpus's bias, while AvgOut is an evaluation of what bias the model has learned because by generating dull responses more frequently than the training corpus has, it is the model itself that we should adjust. Also note that the reason we take the average is that a single output distribution will largely depend on the context and the previous target tokens (which are fed as inputs to the decoder during training), but on average the distribution should be a faithful evaluation on which words are more likely to be generated from model's perspective. To avoid batches that have AvgOut significantly different from those of other batches, which would lead the model astray, we keep the exponential average of this metric across batches to make it less biased toward any specific batch. Let it be $D$. After training on a mini-batch and obtain $D^{\prime }$, we update $D$ like the following: where $\gamma $ is $0.01$ in our experiments. Another consideration of AvgOut is that theoretically we can have two choices. The first is to use the output distributions when we are teacher-forcing (i.e., only feeding ground-truth tokens); the other is to let the model use its own predictions during greedy/beam-search decoding or sampling. We reason that the former is a much better estimation of the model's bias, because the latter will result in a cascading enlargement of the model bias due to the auto-regressive nature of LSTM-RNN models (i.e., the tokens fed to the decoder are themselves also polluted by the model's bias). Our early experimental results also agreed with the above reasoning. 
Although we try to come up with the most faithful evaluation of how diverse a response is, our approach certainly has its drawbacks too. For example, using very frequent words but less frequent combinations of them may result in a good response which will be penalized by our metric. A natural solution to this is to also use bigram and trigram diversities and take a linear combination of them, which on a high-level is similar to BLEU BIBREF8. However, considering even bigram distribution takes up $O(|V|^2)$ space and calculation time, hence we did not try it due to limited resources. However, as will be presented in Section SECREF5, regularizing unigram distributions can already greatly help on higher-gram diversities, while also improving relevance. Three Models to Leverage AvgOut AvgOut can play at least three roles. First, it can be used to directly supervise output distribution during training; second, it can be used as a prior in labeled sequence transduction methods to control diversity of the generated response; and third, it can be used as a reward signal for Reinforcement Learning to encourage diverse sampled responses. In this section, we begin with a base vanilla seq2seq model, and next present our three models to diversify responses based on AvgOut. Our base model LSTM is identical to that proposed by BIBREF1 (BIBREF1), which consists of a single-layer bi-directional LSTM-RNN BIBREF9 encoder and a single-layer LSTM-RNN decoder with additive attention. Three Models to Leverage AvgOut ::: Regularization by Minimizing Continuous-AvgOut Our MinAvgOut model (Figure FIGREF3) directly integrates AvgOut into the loss function by summarizing it into a single numerical value named Continuous-AvgOut. We do this by taking the dot-product of $D$ and $D^{\prime }$ (Figure FIGREF6). The intuition behind this simple calculation is that $D$ can also be viewed as a set of weights which add up to $1.0$, since it is a probability vector. By taking the dot product, we are actually calculating a weighted average of each probability in $D^{\prime }$. To evaluate how diverse the model currently is, the duller tokens should obviously carry higher weights since they contribute more to the "dullness" of the whole utterance. Assuming that $D$ is a column vector, the continuous diversity score is $B_c$, and the resulting extra loss term is $L_B$, the total loss $L$ is given by: where $\alpha $ is a coefficient to balance the regularization loss with the maximum likelihood loss (a.k.a. teacher forcing loss) $L_{ML}$. This is important because the regularization term continues to discourage the model from generating the ground-truth token, which we need to balance by ML loss to reduce the impact (otherwise the model will be led astray). Note that since $D$ is a moving average which does not depend on the model parameters of the current mini-batch, only $D^{\prime }$ will result in gradient flow during back-propagation, which is what we intend. Three Models to Leverage AvgOut ::: Label-Fine-Tuning Model We also borrow the continuous version of the Label-Fine-Tuning (LFT) model from BIBREF10 (BIBREF10), which is an extension of the discrete labeled sequence transduction methods BIBREF11. The LFT model leverages a continuous label to serve as a prior for generating the target sequence. This label corresponds to an embedding just like a normal token does, but can be scaled by a continuous value. 
This model is applicable to our case because the diversity score of a response can also be viewed as a style, ranging from $0.0$ to $1.0$. Specifically, we add to the vocabulary a diversity label and scale its embedding vector with the intended diversity score of the target sequence. During training, this score is obtained by evaluating the diversity of the ground-truth target sequence (see Figure FIGREF8); during test time, we instead feed the model a diversity label scaled by a score of our choice (i.e., when we want the model to generate a more diverse response, we scale the label's embedding by a higher score, while to generate a duller response, we scale the embedding by a lower one). Three Models to Leverage AvgOut ::: Reward-Based Reinforcement Learning We also explore a model (see Figure FIGREF11) which regularizes on the discrete token level, because merely monitoring output probability distribution may ignore certain bad styles such as repetition (e.g. "I don't don't know."). We use Discrete-AvgOut to calculate the continuous diversity score of a discrete sequence. Let $\lbrace G_1, G_2, ..., G_{N_G}\rbrace $ be a sequence of $N_G$ tokens sampled by the model during training. Then from $D$, we extract the probabilities $\lbrace P_1, P_2, ..., P_{N_G}\rbrace $ corresponding to each generated token. The diversity score $B_{d}$ on these discrete tokens will be: where $N_{unique}$ is the number of unique tokens in the sampled sequence (see Figure FIGREF12). Note that this division explicitly discourages the model from outputting repeated tokens, because when that happens, the nominator will stay the same, while the denominator will decrease, resulting in a lower diversity score. Also note that MinAvgOut can be complementary to RL since calculating diversity scores based on discrete tokens unavoidably loses valuable information from the output distribution before argmax is taken. In Section SECREF5, we show with both automatic and human evaluations that this combination indeed achieves the best results among our models. Following BIBREF12 (BIBREF12), our loss function consists of two terms. The first term is the Maximum Likelihood loss ($L_{\textsc {ml}}$); the other is the Reinforcement Learning loss ($L_{\textsc {rl}}$). The total loss $L$ is then: where $\beta $ is a hyperparameter indicating how much weight we want to assign to the rl part of the loss, $x$ is the source sequence, $\lbrace y_t^*\rbrace $ are the ground truth tokens and $\lbrace y_t^s\rbrace $ are the sampled tokens. We use a policy gradient method BIBREF13 to calculate the RL loss. Specifically, we sample a response for each context $x$, and assign to it a reward $R$, which is equal to $B_d$ because we want to encourage the model to be diverse. We also use a baseline $R_b$ that helps reduce variance during training BIBREF14. In our case this baseline is again the exponential average of all $B_d$ in previous mini-batches. Experimental Setup ::: Dataset and Task We use the task-oriented Ubuntu Dialogue dataset BIBREF15, because it not only has F1 metrics to evaluate the relevance of responses, but the dialogues in them are also open-ended to allow enough space for diversity. We also chose this dataset because previous work, e.g., HRED BIBREF3 and VHRED BIBREF7 both used Ubuntu to showcase their diversity-promotion models. Due to the popularity of this dataset, we were able to reproduce almost all models on this same dataset and have a meaningful comparison on their effectiveness of eliminating dullness. 
As future work, we plan to apply our models to other datasets where diversity is desired. Experimental Setup ::: Automatic Evaluation To measure the relevance of the model responses, we follow BIBREF16 (BIBREF16) and evaluate on F1's for both activities (technical verbs, e.g., "upload", "install") and entities (technical nouns, e.g., "root", "internet"). The F1's are computed by mapping the ground-truth and model responses to their corresponding activity-entity representations BIBREF16, who considered F1 to be "particularly suited for the goal-oriented Ubuntu Dialogue Corpus". We did not evaluate on BLEU score BIBREF8 because BIBREF17 showed that BLEU does not correlate well with dialogue quality. BIBREF18 (BIBREF18) also made similar observations on BLEU. To evaluate diversity, we employ two evaluation metrics from previous work, namely Distinct-1 and Distinct-2 BIBREF2. These are the ratios between the number of unique tokens and all tokens for unigrams and bigrams, respectively. In addition, we propose a novel diversity graph and its corresponding metric, which we name Diversity-32 and Diversity-AUC, respectively. We gather statistics of sentence, unigram, bigram and trigram, and sort their normalized frequencies from highest to lowest. Observing that all four graphs follow long-tail distributions, we only keep the highest 32 frequencies and plot them. We then calculate one minus the Area under Curve (Diversity-AUC) for each graph, which draws a high-level picture of how diverse a model is. Experimental Setup ::: Human Evaluation Although we proposed the effective AvgOut metric, we did find that the model sometimes still cheats to gain higher automatic diversity score. For example, as can be seen in the selected output examples (Section SECREF5), the model tends to generate words with typo since these are rarer tokens as compared to their correct counterparts. This is unavoidable for noisy datasets like Ubuntu. Thus, without human evaluation, we can never be sure if our models are good or they only look good because our metrics are exploited. We thus also conducted human studies on Amazon MTurk to evaluate the generated responses with pairwise comparison for dialogue quality. We compare our models with an advanced decoding algorithm MMI BIBREF2 and two models, namely LSTM BIBREF0 and VHRED BIBREF7, both with additive attention. To our best knowledge, LSTM and VHRED were the primary models with which F1's were reported on the Ubuntu dataset. Following BIBREF5 (BIBREF5), we employ two criteria: Plausibility and Content Richness. The first criterion measures whether the response is plausible given the context, while the second gauges whether the response is diverse and informative. The utterances were randomly shuffled to anonymize model identity. We only allowed annotators located in the US-located with at least an approval rate of $98\%$ and $10,000$ approved HITs. We collected 100 annotations in total after rejecting those completed by people who assign exactly the same score to all model responses. Since we evaluated 7 models, we collected 700 annotations in total, which came from a diverse pool of annotators. Experimental Setup ::: Training Details For each of the three models, the hidden size of the encoder is 256, while the decoder hidden size is 512. For MinAvgOut, the coefficient of the regularization loss term $\alpha $ is $100.0$; For LFT, during inference we feed a score of $0.015$ since it achieves a good balance between response coherence and diversity. 
For RL, the coefficient of the RL term $\beta $ is $100.0$. For the hybrid model MinAvgOut + RL, $\alpha $ and $\beta $ share a coefficient of $50.0$. Results and Analysis ::: Automatic Evaluation Results We employ several complementary metrics to capture different aspects of the model. The F1 results are shown in Table TABREF24. Among all single models, LFT performs the best, followed by MinAvgOut. RL is also comparable with previous state-of-the-art models VHRED (attn) and Reranking-RL. We think that this is because LFT exerts no force in pulling the model predictions away from the ground-truth tokens, but rather just makes itself aware of how dull each response is. Consequently, its responses appear more relevant than the other two approaches. Moreover, the hybrid model (last row) outperforms all other models by a large margin. One might expect that minimizing AVGOUT causes the models to move further away from the ground-truth tokens, so that it will hurt relevance. However, our F1 results show that as the responses become more diverse, they are more likely to include information more related and specific to the input contexts, which actually makes the model gain on both diversity and relevance. This will be further confirmed by the output examples in Table TABREF29. We also present Diversity-32 graphs (Figure FIGREF16) and report Diversity-AUC as well as Distinct-1 and -2 for each model (Table TABREF25). We can see that all our models have significantly better sentence-level diversity than VHRED, let alone LSTM. For unigram diversity, they are also better than LSTM, though hard to distinguish from VHRED. Both bigram and trigram graphs reveal that all models are more diverse than LSTM, except that RL shows lower diversity than the other models, which agree with our F1 results. Note that since our models are only trained based on unigram output distributions, the bigram and trigram diversities are still far away from that of the ground-truth, which points to future direction. That said, the table does show that encouraging unigram diversity can already have positive influence on higher grams as well. Also note that the hybrid model (last row) does not achieve the best result in terms of diversity. We hypothesize that this is because RL, which is usually harder to optimize than ML losses, faces exacerbated issues when combined with a strong MinAvgOut loss, which tries to pull the model output distribution away from the token distribution in the training corpus. Neither Distinct-1 nor -2 correlates well with our observation and evaluation of diversity and relevance. We reason that this is because these metrics only capture how many distinct tokens are used rather than each token's frequency, which is easier to be exploited because whether each token is used unnecessarily often (a strong sign of dullness) is not reflected in this measure. Results and Analysis ::: Human Evaluation Results As mentioned in experimental setup, we conducted human evaluations on our models for both Plausibility and Content Richness, as well as calculating their average (to show overall score) and their difference (to show balance between the two criteria) (Table TABREF26). We can see from the table that all our models are statistically significantly better than the baseline models on both Plausibility and Content Richness, except that RL is slightly weaker on Content Richness, which agrees with the trend in automatic evaluations. 
Although MinAvgOut+RL model only ranks the second on average score (statistically equivalent to MinAvgOut) in human evaluation, it achieves a good balance, and it also ranks the second in automatic diversity and the first in F1 values. We thus consider it to be our best model. Results and Analysis ::: Selected Output Examples We present two selected examples of generated responses from the investigated models (Table TABREF29). We can see that all our models learn to attend well to the context, generating coherent and informative responses. Related Work ::: Measurements of Response Diversity Multiple metrics and approaches have been proposed to measure dialogue diversity. Some focus more on how similar the responses are to the ground-truth sequences, such as Word Error Rate BIBREF3 and BLEU BIBREF20, while the others explicitly have diversity in mind when being created, such as Distinct-1 and -2 BIBREF2. The key difference between AvgOut and the previous work is that first, our metric is dynamic with no feature-engineering; second, ours is versatile enough to be applied to both continuous distributions and discrete sequences, while theirs are only for discrete tokens; third, ours can be used for both sentence-level and corpus-level evaluation, while theirs are only meaningful as corpus-level metrics because they measure the extent of repetition across responses rather than for a single response. Related Work ::: Diversity-Promoting Dialogue Models Researchers have different opinions on why dull responses are generated, which lead to various solutions. They can be roughly divided into four categories. The first category considers using conditional likelihood as a decoding objective the culprit BIBREF5, BIBREF2, BIBREF21, BIBREF22. They thus focus on improving the decoding algorithm during training. The second category traces the cause of the low-diversity problem back to the lack of model variability. They then adopt Variational Autoencoders and rely on sampling from a latent random variable as an additional prior to the decoder BIBREF7, BIBREF23, BIBREF24. The third category thinks that the issue is a lack of universal background knowledge and common sense beyond the input context. They consequently aim to integrate prior knowledge into the generation process BIBREF25, BIBREF26, BIBREF27, BIBREF28. The fourth category believes that the underlying model itself needs improvement. Some use hierarchical LSTM-RNN to encourage the model to capture high-level context BIBREF3; some use more advanced attention mechanism such as multi-head attention BIBREF29; and some use either more complicated architectures or models prone to degeneracies, such as Generative Adversarial Networks BIBREF30, Deep Reinforcement Learning BIBREF6 and Mixture Models BIBREF31. Our RL model has the same architecture as the Reinforcement Learning model, except with different rewards. BIBREF32 (BIBREF32) consider the reason for dull responses as the model's over-confidence. They then propose to add to the loss function a regularization term to maximize the entropy of the output probability distribution. Interestingly, they only proposed this simple approach rather than actually implementing it. Our MinAvgOut approach is related to their idea. Our approach is also related to posterior regularization BIBREF33, BIBREF34, BIBREF35, but our work is neural-based. 
Conclusion We proposed a novel measure AvgOut to dynamically evaluate how diverse a model or a response is based on the models' own parameters, which themselves evolve during training. We then leveraged this effective measure to train three models, plus a hybrid model, to eliminate dull responses for dialogue generation tasks. In addition, we designed novel automatic metrics to evaluate the trained models on diversity, in addition to the ones from previous work. Both automatic and human evaluations consolidated that our models are able to generate more diverse and relevant responses, even when compared with state-of-the-art approaches. For future work, we plan to apply these models to different generative tasks where diversity is desired. Acknowledgments We thank the reviewers for their helpful comments. This work was supported by NSF-CAREER Award #1846185, ONR #N00014-18-1-2871, and awards from Google, Facebook, Salesforce (views are not of the funding agency).
6.87 points higher on diversity and 4.6 points higher on relevance
58edc6ed7d6966715022179ab63137c782105eaf
58edc6ed7d6966715022179ab63137c782105eaf_0
Q: Which one of the four proposed models performed best? Text: Introduction Many modern dialogue generation models use a sequence-to-sequence architecture as their backbone BIBREF0, following its success when applied to Machine Translation (MT) BIBREF1. However, dialogue tasks also have a requirement different from that of MT: the response not only has to be "correct" (coherent and relevant), but also needs to be diverse and informative. However, seq2seq has been reported by many previous works to have low corpus-level diversity BIBREF2, BIBREF3, BIBREF0, BIBREF4, as it tends to generate safe, terse, and uninformative responses, such as "I don't know.". These responses unnecessarily make a dialogue system much less interactive than it should be. To increase the diversity of dialogue responses, the first step is to faithfully evaluate how diverse a response is. There are metrics used by previous work that are correlated to diversity, but not strongly, such as ratio of distinct tokens BIBREF2 and response length BIBREF5. However, a response can be long but extremely boring in meaning, such as "I am sure that I don't know about it.", or short but interesting (i.e., contains a lot of information), such as "Dad was mean.". Only investigating discrete token output by the model is also not ideal, because these tokens are only a single realization of the model's output probability distribution at each time step, which unavoidably loses valuable information indicated by the whole distribution. BIBREF6 (BIBREF6) manually collect a shortlist of dull responses, and during training discourage the model from producing such utterances. However, an important drawback of hand-crafted rules is that the set of dull tokens or utterances is static, while in fact it usually evolves during training: when the current dull tokens are eliminated, another set of them might reveal themselves. In our work, we begin with a simple yet effective approach to measure how diverse a response is. This metric, which we name "Average Output Probability Distribution", or AvgOut, draws information directly from the training-in-session model itself. We calculate it by keeping track of the exponential average of all output probability distributions on the decoder side during training. This metric dynamically measures which tokens the model is biased toward without any hand-crafted rules, thus making it a faithful evaluation of the model diversity (i.e., for diverse models, the token probabilities should be more evenly distributed rather than peaked at a few dull tokens). In addition, since AvgOut is a one-dimensional categorical distribution rather than a dimensionless numerical value like entropy, it naturally carries and conveys more information about model diversity. We then propose three models that leverage our novel metric to promote diversity in dialogue generation. The first MinAvgOut model minimizes the dot product of current batch AvgOut and the exponential average AvgOut across batches, which encourages low-frequency tokens to be generated. The second LFT model uses a labeled transduction method and scales a "diversity label" by the diversity score of the ground-truth target sequence during training, while during testing can generate responses of different levels of diversity by tweaking the intended diversity score. The third RL model leverages reinforcement learning, where our novel metric is applied to discrete tokens and serve as a reward signal. 
In addition, since MinAvgOut regularizes directly on the continuous distribution while RL calculates its reward based on discrete sampled tokens, we simply add up the loss terms of the two models, creating an even stronger hybrid model. We first employ diverse automatic metrics, including Distinct-1 and -2 from previous work BIBREF2 and our novel metric Diveristy-iAUC (which calculates one minus the sum of normalized frequencies of the most frequent tokens produced by the model), plus activity/entity F1s, to evaluate the diversity and relevance of the generated responses. We then conduct human evaluations to verify that these models not only outperform their base model LSTM by a large margin, but are also comparable to or better than an advanced decoding algorithm MMI BIBREF2 and a very competitive model VHRED BIBREF7 on the Ubuntu dataset. AvgOut as an Effective Diversity Metric By only keeping a static shortlist of boring responses or tokens, one basically assumes that we humans should decide which tokens are dull. However, we argue that we should instead look from the model's perspective to identify dull tokens, because even if the model outputs a word that we consider rare, including it in too many responses is still considered a dull behavior. Motivated by this thought experiment, we propose a novel metric, Average Output Probability Distribution (AvgOut), that dynamically keeps track of which tokens the model is biased toward. To calculate this, during training, we average out all the output probability distributions for each time step of the decoder for the whole mini-batch. The resulting vector $D^{\prime }$ will reflect each token's probability of being generated from the model's perspective. Note that we do not use discrete ground-truth tokens to evaluate the model's bias, because there is a fine distinction between the two: a statistics of frequency on ground-truth tokens is an evaluation of the corpus's bias, while AvgOut is an evaluation of what bias the model has learned because by generating dull responses more frequently than the training corpus has, it is the model itself that we should adjust. Also note that the reason we take the average is that a single output distribution will largely depend on the context and the previous target tokens (which are fed as inputs to the decoder during training), but on average the distribution should be a faithful evaluation on which words are more likely to be generated from model's perspective. To avoid batches that have AvgOut significantly different from those of other batches, which would lead the model astray, we keep the exponential average of this metric across batches to make it less biased toward any specific batch. Let it be $D$. After training on a mini-batch and obtain $D^{\prime }$, we update $D$ like the following: where $\gamma $ is $0.01$ in our experiments. Another consideration of AvgOut is that theoretically we can have two choices. The first is to use the output distributions when we are teacher-forcing (i.e., only feeding ground-truth tokens); the other is to let the model use its own predictions during greedy/beam-search decoding or sampling. We reason that the former is a much better estimation of the model's bias, because the latter will result in a cascading enlargement of the model bias due to the auto-regressive nature of LSTM-RNN models (i.e., the tokens fed to the decoder are themselves also polluted by the model's bias). Our early experimental results also agreed with the above reasoning. 
Although we try to come up with the most faithful evaluation of how diverse a response is, our approach certainly has its drawbacks too. For example, using very frequent words but less frequent combinations of them may result in a good response which will be penalized by our metric. A natural solution to this is to also use bigram and trigram diversities and take a linear combination of them, which on a high-level is similar to BLEU BIBREF8. However, considering even bigram distribution takes up $O(|V|^2)$ space and calculation time, hence we did not try it due to limited resources. However, as will be presented in Section SECREF5, regularizing unigram distributions can already greatly help on higher-gram diversities, while also improving relevance. Three Models to Leverage AvgOut AvgOut can play at least three roles. First, it can be used to directly supervise output distribution during training; second, it can be used as a prior in labeled sequence transduction methods to control diversity of the generated response; and third, it can be used as a reward signal for Reinforcement Learning to encourage diverse sampled responses. In this section, we begin with a base vanilla seq2seq model, and next present our three models to diversify responses based on AvgOut. Our base model LSTM is identical to that proposed by BIBREF1 (BIBREF1), which consists of a single-layer bi-directional LSTM-RNN BIBREF9 encoder and a single-layer LSTM-RNN decoder with additive attention. Three Models to Leverage AvgOut ::: Regularization by Minimizing Continuous-AvgOut Our MinAvgOut model (Figure FIGREF3) directly integrates AvgOut into the loss function by summarizing it into a single numerical value named Continuous-AvgOut. We do this by taking the dot-product of $D$ and $D^{\prime }$ (Figure FIGREF6). The intuition behind this simple calculation is that $D$ can also be viewed as a set of weights which add up to $1.0$, since it is a probability vector. By taking the dot product, we are actually calculating a weighted average of each probability in $D^{\prime }$. To evaluate how diverse the model currently is, the duller tokens should obviously carry higher weights since they contribute more to the "dullness" of the whole utterance. Assuming that $D$ is a column vector, the continuous diversity score is $B_c$, and the resulting extra loss term is $L_B$, the total loss $L$ is given by: where $\alpha $ is a coefficient to balance the regularization loss with the maximum likelihood loss (a.k.a. teacher forcing loss) $L_{ML}$. This is important because the regularization term continues to discourage the model from generating the ground-truth token, which we need to balance by ML loss to reduce the impact (otherwise the model will be led astray). Note that since $D$ is a moving average which does not depend on the model parameters of the current mini-batch, only $D^{\prime }$ will result in gradient flow during back-propagation, which is what we intend. Three Models to Leverage AvgOut ::: Label-Fine-Tuning Model We also borrow the continuous version of the Label-Fine-Tuning (LFT) model from BIBREF10 (BIBREF10), which is an extension of the discrete labeled sequence transduction methods BIBREF11. The LFT model leverages a continuous label to serve as a prior for generating the target sequence. This label corresponds to an embedding just like a normal token does, but can be scaled by a continuous value. 
This model is applicable to our case because the diversity score of a response can also be viewed as a style, ranging from $0.0$ to $1.0$. Specifically, we add to the vocabulary a diversity label and scale its embedding vector with the intended diversity score of the target sequence. During training, this score is obtained by evaluating the diversity of the ground-truth target sequence (see Figure FIGREF8); during test time, we instead feed the model a diversity label scaled by a score of our choice (i.e., when we want the model to generate a more diverse response, we scale the label's embedding by a higher score, while to generate a duller response, we scale the embedding by a lower one). Three Models to Leverage AvgOut ::: Reward-Based Reinforcement Learning We also explore a model (see Figure FIGREF11) which regularizes on the discrete token level, because merely monitoring output probability distribution may ignore certain bad styles such as repetition (e.g. "I don't don't know."). We use Discrete-AvgOut to calculate the continuous diversity score of a discrete sequence. Let $\lbrace G_1, G_2, ..., G_{N_G}\rbrace $ be a sequence of $N_G$ tokens sampled by the model during training. Then from $D$, we extract the probabilities $\lbrace P_1, P_2, ..., P_{N_G}\rbrace $ corresponding to each generated token. The diversity score $B_{d}$ on these discrete tokens will be: where $N_{unique}$ is the number of unique tokens in the sampled sequence (see Figure FIGREF12). Note that this division explicitly discourages the model from outputting repeated tokens, because when that happens, the nominator will stay the same, while the denominator will decrease, resulting in a lower diversity score. Also note that MinAvgOut can be complementary to RL since calculating diversity scores based on discrete tokens unavoidably loses valuable information from the output distribution before argmax is taken. In Section SECREF5, we show with both automatic and human evaluations that this combination indeed achieves the best results among our models. Following BIBREF12 (BIBREF12), our loss function consists of two terms. The first term is the Maximum Likelihood loss ($L_{\textsc {ml}}$); the other is the Reinforcement Learning loss ($L_{\textsc {rl}}$). The total loss $L$ is then: where $\beta $ is a hyperparameter indicating how much weight we want to assign to the rl part of the loss, $x$ is the source sequence, $\lbrace y_t^*\rbrace $ are the ground truth tokens and $\lbrace y_t^s\rbrace $ are the sampled tokens. We use a policy gradient method BIBREF13 to calculate the RL loss. Specifically, we sample a response for each context $x$, and assign to it a reward $R$, which is equal to $B_d$ because we want to encourage the model to be diverse. We also use a baseline $R_b$ that helps reduce variance during training BIBREF14. In our case this baseline is again the exponential average of all $B_d$ in previous mini-batches. Experimental Setup ::: Dataset and Task We use the task-oriented Ubuntu Dialogue dataset BIBREF15, because it not only has F1 metrics to evaluate the relevance of responses, but the dialogues in them are also open-ended to allow enough space for diversity. We also chose this dataset because previous work, e.g., HRED BIBREF3 and VHRED BIBREF7 both used Ubuntu to showcase their diversity-promotion models. Due to the popularity of this dataset, we were able to reproduce almost all models on this same dataset and have a meaningful comparison on their effectiveness of eliminating dullness. 
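Returning to the RL model, the Discrete-AvgOut score and the policy-gradient term can be sketched as follows. The exact equation for $B_d$ is elided in the text, so the form below (one minus the summed sampled-token probabilities divided by $N_{unique}$) is only our reading of the stated repetition argument; the moving-average baseline $R_b$ and the $\beta $-weighted combination with the ML loss follow the description above.

```python
import torch

def discrete_avgout(sampled_ids, D):
    """Discrete-AvgOut score B_d for one sampled response (higher = more diverse).

    Each P_i is looked up in the running average D for the corresponding sampled
    token; dividing by the number of unique tokens penalizes repetition.
    """
    p = D[sampled_ids]                               # P_1 .. P_{N_G}
    n_unique = len(set(sampled_ids.tolist()))
    return (1.0 - p.sum() / n_unique).item()         # plain float; the reward carries no gradient

def reinforce_loss(sample_log_probs, reward, baseline):
    """Policy-gradient term with a moving-average baseline; the reward R equals B_d.
    The total loss is then L = L_ML + beta * L_RL, as described in the text."""
    return -(reward - baseline) * sample_log_probs.sum()
```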
As future work, we plan to apply our models to other datasets where diversity is desired. Experimental Setup ::: Automatic Evaluation To measure the relevance of the model responses, we follow BIBREF16 (BIBREF16) and evaluate on F1's for both activities (technical verbs, e.g., "upload", "install") and entities (technical nouns, e.g., "root", "internet"). The F1's are computed by mapping the ground-truth and model responses to their corresponding activity-entity representations BIBREF16, who considered F1 to be "particularly suited for the goal-oriented Ubuntu Dialogue Corpus". We did not evaluate on BLEU score BIBREF8 because BIBREF17 showed that BLEU does not correlate well with dialogue quality. BIBREF18 (BIBREF18) also made similar observations on BLEU. To evaluate diversity, we employ two evaluation metrics from previous work, namely Distinct-1 and Distinct-2 BIBREF2. These are the ratios between the number of unique tokens and all tokens for unigrams and bigrams, respectively. In addition, we propose a novel diversity graph and its corresponding metric, which we name Diversity-32 and Diversity-AUC, respectively. We gather statistics of sentence, unigram, bigram and trigram, and sort their normalized frequencies from highest to lowest. Observing that all four graphs follow long-tail distributions, we only keep the highest 32 frequencies and plot them. We then calculate one minus the Area under Curve (Diversity-AUC) for each graph, which draws a high-level picture of how diverse a model is. Experimental Setup ::: Human Evaluation Although we proposed the effective AvgOut metric, we did find that the model sometimes still cheats to gain higher automatic diversity score. For example, as can be seen in the selected output examples (Section SECREF5), the model tends to generate words with typo since these are rarer tokens as compared to their correct counterparts. This is unavoidable for noisy datasets like Ubuntu. Thus, without human evaluation, we can never be sure if our models are good or they only look good because our metrics are exploited. We thus also conducted human studies on Amazon MTurk to evaluate the generated responses with pairwise comparison for dialogue quality. We compare our models with an advanced decoding algorithm MMI BIBREF2 and two models, namely LSTM BIBREF0 and VHRED BIBREF7, both with additive attention. To our best knowledge, LSTM and VHRED were the primary models with which F1's were reported on the Ubuntu dataset. Following BIBREF5 (BIBREF5), we employ two criteria: Plausibility and Content Richness. The first criterion measures whether the response is plausible given the context, while the second gauges whether the response is diverse and informative. The utterances were randomly shuffled to anonymize model identity. We only allowed annotators located in the US-located with at least an approval rate of $98\%$ and $10,000$ approved HITs. We collected 100 annotations in total after rejecting those completed by people who assign exactly the same score to all model responses. Since we evaluated 7 models, we collected 700 annotations in total, which came from a diverse pool of annotators. Experimental Setup ::: Training Details For each of the three models, the hidden size of the encoder is 256, while the decoder hidden size is 512. For MinAvgOut, the coefficient of the regularization loss term $\alpha $ is $100.0$; For LFT, during inference we feed a score of $0.015$ since it achieves a good balance between response coherence and diversity. 
For RL, the coefficient of the RL term $\beta $ is $100.0$. For the hybrid model MinAvgOut + RL, $\alpha $ and $\beta $ share a coefficient of $50.0$. Results and Analysis ::: Automatic Evaluation Results We employ several complementary metrics to capture different aspects of the model. The F1 results are shown in Table TABREF24. Among all single models, LFT performs the best, followed by MinAvgOut. RL is also comparable with previous state-of-the-art models VHRED (attn) and Reranking-RL. We think that this is because LFT exerts no force in pulling the model predictions away from the ground-truth tokens, but rather just makes itself aware of how dull each response is. Consequently, its responses appear more relevant than the other two approaches. Moreover, the hybrid model (last row) outperforms all other models by a large margin. One might expect that minimizing AVGOUT causes the models to move further away from the ground-truth tokens, so that it will hurt relevance. However, our F1 results show that as the responses become more diverse, they are more likely to include information more related and specific to the input contexts, which actually makes the model gain on both diversity and relevance. This will be further confirmed by the output examples in Table TABREF29. We also present Diversity-32 graphs (Figure FIGREF16) and report Diversity-AUC as well as Distinct-1 and -2 for each model (Table TABREF25). We can see that all our models have significantly better sentence-level diversity than VHRED, let alone LSTM. For unigram diversity, they are also better than LSTM, though hard to distinguish from VHRED. Both bigram and trigram graphs reveal that all models are more diverse than LSTM, except that RL shows lower diversity than the other models, which agree with our F1 results. Note that since our models are only trained based on unigram output distributions, the bigram and trigram diversities are still far away from that of the ground-truth, which points to future direction. That said, the table does show that encouraging unigram diversity can already have positive influence on higher grams as well. Also note that the hybrid model (last row) does not achieve the best result in terms of diversity. We hypothesize that this is because RL, which is usually harder to optimize than ML losses, faces exacerbated issues when combined with a strong MinAvgOut loss, which tries to pull the model output distribution away from the token distribution in the training corpus. Neither Distinct-1 nor -2 correlates well with our observation and evaluation of diversity and relevance. We reason that this is because these metrics only capture how many distinct tokens are used rather than each token's frequency, which is easier to be exploited because whether each token is used unnecessarily often (a strong sign of dullness) is not reflected in this measure. Results and Analysis ::: Human Evaluation Results As mentioned in experimental setup, we conducted human evaluations on our models for both Plausibility and Content Richness, as well as calculating their average (to show overall score) and their difference (to show balance between the two criteria) (Table TABREF26). We can see from the table that all our models are statistically significantly better than the baseline models on both Plausibility and Content Richness, except that RL is slightly weaker on Content Richness, which agrees with the trend in automatic evaluations. 
Although MinAvgOut+RL model only ranks the second on average score (statistically equivalent to MinAvgOut) in human evaluation, it achieves a good balance, and it also ranks the second in automatic diversity and the first in F1 values. We thus consider it to be our best model. Results and Analysis ::: Selected Output Examples We present two selected examples of generated responses from the investigated models (Table TABREF29). We can see that all our models learn to attend well to the context, generating coherent and informative responses. Related Work ::: Measurements of Response Diversity Multiple metrics and approaches have been proposed to measure dialogue diversity. Some focus more on how similar the responses are to the ground-truth sequences, such as Word Error Rate BIBREF3 and BLEU BIBREF20, while the others explicitly have diversity in mind when being created, such as Distinct-1 and -2 BIBREF2. The key difference between AvgOut and the previous work is that first, our metric is dynamic with no feature-engineering; second, ours is versatile enough to be applied to both continuous distributions and discrete sequences, while theirs are only for discrete tokens; third, ours can be used for both sentence-level and corpus-level evaluation, while theirs are only meaningful as corpus-level metrics because they measure the extent of repetition across responses rather than for a single response. Related Work ::: Diversity-Promoting Dialogue Models Researchers have different opinions on why dull responses are generated, which lead to various solutions. They can be roughly divided into four categories. The first category considers using conditional likelihood as a decoding objective the culprit BIBREF5, BIBREF2, BIBREF21, BIBREF22. They thus focus on improving the decoding algorithm during training. The second category traces the cause of the low-diversity problem back to the lack of model variability. They then adopt Variational Autoencoders and rely on sampling from a latent random variable as an additional prior to the decoder BIBREF7, BIBREF23, BIBREF24. The third category thinks that the issue is a lack of universal background knowledge and common sense beyond the input context. They consequently aim to integrate prior knowledge into the generation process BIBREF25, BIBREF26, BIBREF27, BIBREF28. The fourth category believes that the underlying model itself needs improvement. Some use hierarchical LSTM-RNN to encourage the model to capture high-level context BIBREF3; some use more advanced attention mechanism such as multi-head attention BIBREF29; and some use either more complicated architectures or models prone to degeneracies, such as Generative Adversarial Networks BIBREF30, Deep Reinforcement Learning BIBREF6 and Mixture Models BIBREF31. Our RL model has the same architecture as the Reinforcement Learning model, except with different rewards. BIBREF32 (BIBREF32) consider the reason for dull responses as the model's over-confidence. They then propose to add to the loss function a regularization term to maximize the entropy of the output probability distribution. Interestingly, they only proposed this simple approach rather than actually implementing it. Our MinAvgOut approach is related to their idea. Our approach is also related to posterior regularization BIBREF33, BIBREF34, BIBREF35, but our work is neural-based. 
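For reference, the two kinds of diversity metrics discussed above (Distinct-n from previous work and our Diversity-AUC) can be sketched as follows, assuming responses are already tokenized; keeping only the 32 highest frequencies follows the Diversity-32 definition, and sentence-level diversity is obtained by treating whole responses as the counted units.

```python
from collections import Counter

def distinct_n(responses, n):
    """Distinct-n: number of unique n-grams divided by the total number of n-grams."""
    ngrams = [tuple(toks[i:i + n]) for toks in responses
              for i in range(len(toks) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

def diversity_auc(responses, n=1, top_k=32):
    """Diversity-AUC: one minus the sum of the normalized frequencies of the
    top_k most frequent n-grams produced by the model."""
    counts = Counter(tuple(toks[i:i + n]) for toks in responses
                     for i in range(len(toks) - n + 1))
    total = sum(counts.values())
    top = sorted(counts.values(), reverse=True)[:top_k]
    return 1.0 - sum(c / total for c in top)
```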
Conclusion We proposed a novel measure AvgOut to dynamically evaluate how diverse a model or a response is based on the model's own parameters, which themselves evolve during training. We then leveraged this effective measure to train three models, plus a hybrid model, to eliminate dull responses for dialogue generation tasks. We also designed novel automatic metrics to evaluate the trained models on diversity, beyond the ones from previous work. Both automatic and human evaluations confirmed that our models are able to generate more diverse and relevant responses, even when compared with state-of-the-art approaches. For future work, we plan to apply these models to different generative tasks where diversity is desired. Acknowledgments We thank the reviewers for their helpful comments. This work was supported by NSF-CAREER Award #1846185, ONR #N00014-18-1-2871, and awards from Google, Facebook, Salesforce (views are not of the funding agency).
the hybrid model MinAvgOut + RL
b366706e2fff6dd8edc89cc0c6b9d5b0790f43aa
b366706e2fff6dd8edc89cc0c6b9d5b0790f43aa_0
Q: What metrics are used to measure performance of models? Text: Introduction Task-oriented dialogue system is an important tool to build personal virtual assistants, which can help users to complete most of the daily tasks by interacting with devices via natural language. It's attracting increasing attention of researchers, and lots of works have been proposed in this area BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. The existing task-oriented dialogue systems usually consist of four components: (1) natural language understanding (NLU), it tries to identify the intent of a user; (2) dialogue state tracker (DST), it keeps the track of user goals and constraints in every turn; (3) dialogue policy maker (DP), it aims to generate the next available dialogue action; and (4) natural language generator (NLG), it generates a natural language response based on the dialogue action. Among the four components, dialogue policy maker plays a key role in order to complete dialogues effectively, because it decides the next dialogue action to be executed. As far as we know, the dialogue policy makers in most existing task-oriented dialogue systems just use the classifiers of the predefined acts to obtain dialogue policy BIBREF0, BIBREF2, BIBREF4, BIBREF8, BIBREF9. The classification-based dialogue policy learning methods can assign either only a dialogue act and its corresponding parameters BIBREF10, BIBREF2, BIBREF0 or multiple dialogue acts without their corresponding parameters for a dialogue action BIBREF11. However, all these existing methods cannot obtain multiple dialogue acts and their corresponding parameters for a dialogue action at the same time. Intuitively, it will be more reasonable to construct multiple dialogue acts and their corresponding parameters for a dialogue action at the same time. For example, it can be shown that there are 49.4% of turns in the DSTC2 dataset and 61.5% of turns in the Maluuba dataset have multiple dialogue acts and their corresponding parameters as the dialogue action. If multiple dialogue acts and their corresponding parameters can be obtained at the same time, the final response of task-oriented dialogue systems will become more accurate and effective. For example, as shown in Figure FIGREF3, a user wants to get the name of a cheap french restaurant. The correct dialogue policy should generate three acts in current dialogue turn: offer(name=name_slot), inform(food=french) and inform(food=cheap). Thus, the user's real thought may be: “name_slot is a cheap french restaurant". If losing the act offer, the system may generate a response like “There are some french restaurants", which will be far from the user's goal. To address this challenge, we propose a Generative Dialogue Policy model (GDP) by casting the dialogue policy learning problem as a sequence optimization problem. The proposed model generates a series of acts and their corresponding parameters by the learned dialogue policy. Specifically, our proposed model uses a recurrent neural network (RNN) as action decoder to construct dialogue policy maker instead of traditional classifiers. Attention mechanism is used to help the decoder decode dialogue acts and their corresponding parameters, and then the template-based natural language generator uses the results of the dialogue policy maker to choose an appropriate sentence template as the final response to the user. Extensive experiments conducted on two benchmark datasets verify the effectiveness of our proposed method. 
Our contributions in this work are three-fold. The existing methods cannot construct multiple dialogue acts and their corresponding parameters at the same time. In this paper, We propose a novel generative dialogue policy model to solve the problem. The extensive experiments demonstrate that the proposed model significantly outperforms the state-of-the-art baselines on two benchmarks. We publicly release the source code. Related Work Usually, the existing task-oriented dialogue systems use a pipeline of four separate modules: natural language understanding, dialogue belief tracker, dialogue policy and natural language generator. Among these four modules, dialogue policy maker plays a key role in task-oriented dialogue systems, which generates the next dialogue action. As far as we know, nearly all the existing approaches obtain the dialogue policy by using the classifiers of all predefined dialogue acts BIBREF12, BIBREF13. There are usually two kinds of dialogue policy learning methods. One constructs a dialogue act and its corresponding parameters for a dialogue action. For example, BIBREF0 constructs a simple classifier for all the predefined dialogue acts. BIBREF2 build a complex classifier for some predefined dialogue acts, addtionally BIBREF2 adds two acts for each parameter: one to inform its value and the other to request it. The other obtains the dialogue policy by using multi-label classification to consider multiple dialogue acts without their parameters. BIBREF11 performs multi-label multi-class classification for dialogue policy learning and then the multiple acts can be decided based on a threshold. Based on these classifiers, the reinforcement learning can be used to further update the dialogue policy of task-oriented dialogue systems BIBREF3, BIBREF14, BIBREF9. In the real scene, an correct dialogue action usually consists of multiple dialogue acts and their corresponding parameters. However, it is very hard for existing classification-based dialogue policy maker to achieve this goal. Thus, in this paper we propose a novel generative dialogue policy maker to address this issue by casting the dialogue policy learning problem as a sequence optimization problem. Technical Background ::: Encoder-Decoder Seq2Seq Models Seq2Seq model was first introduced by BIBREF15 for statistical machine translation. It uses two recurrent neural networks (RNN) to solve the sequence-to-sequence mapping problem. One called encoder encodes the user utterance into a dense vector representing its semantics, the other called decoder decodes this vector to the target sentence. Now Seq2Seq framework has already been used in task-oriented dialog systems such as BIBREF4 and BIBREF1, and shows the challenging performance. In the Seq2Seq model, given the user utterance $Q=(x_1, x_2, ..., x_n)$, the encoder squeezes it into a context vector $C$ and then used by decoder to generate the response $R=(y_1, y_2, ..., y_m)$ word by word by maximizing the generation probability of $R$ conditioned on $Q$. The objective function of Seq2Seq can be written as: In particular, the encoder RNN produces the context vector $C$ by doing calculation below: The $h_t$ is the hidden state of the encoder RNN at time step $t$ and $f$ is the non-linear transformation which can be a long-short term memory unit LSTM BIBREF16 or a gated recurrent unit GRU BIBREF15. In this paper, we implement $f$ by using GRU. The decoder RNN generates each word in reply conditioned on the context vector $C$. 
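As a minimal sketch of the encoder just described (the decoder and attention mechanism follow below), here is a PyTorch GRU that consumes an utterance and returns its hidden states together with a context vector. The embedding size of 300 and hidden size of 350 mirror the parameter settings reported later in the experiments; everything else is a simplifying assumption.

```python
import torch
import torch.nn as nn

class GRUEncoder(nn.Module):
    """Seq2Seq encoder: h_t = f(h_{t-1}, x_t) with f a GRU; the last hidden
    state serves as the context vector C."""
    def __init__(self, vocab_size, emb_size=300, hidden_size=350):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_size)
        self.gru = nn.GRU(emb_size, hidden_size, batch_first=True)

    def forward(self, x):                      # x: (batch, seq_len) token ids
        outputs, h_n = self.gru(self.emb(x))   # outputs: (batch, seq_len, hidden)
        return outputs, h_n[-1]                # all hidden states and the context C
```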
The probability distribution of candidate words at every time step $t$ is calculated as: The $s_t$ is the hidden state of decoder RNN at time step $t$ and $y_{t-1}$ is the generated word in the reply at time $t-1$ calculated by softmax operations. Technical Background ::: Attention Mechanism Attention mechanisms BIBREF17 have been proved to improved effectively the generation quality for the Seq2Seq framework. In Seq2Seq with attention, each $y_i$ corresponds to a context vector $C_i$ which is calculated dynamically. It is a weighted average of all hidden states of the encoder RNN. Formally, $C_i$ is defined as $C_i=\sum _{j=1}^{n} \alpha _{ij}h_j$, where $\alpha _{ij}$ is given by: where $s_{i-1}$ is the last hidden state of the decoder, the $\eta $ is often implemented as a multi-layer-perceptron (MLP) with tanh as the activation function. Generative Dialogue Policy Figure FIGREF13 shows the overall system architecture of the proposed GDP model. Our model contains five main components: (1) utterance encoder; (2) dialogue belief tracker; (3) dialogue policy maker; (4) knowledge base; (5) template-based natural language generator. Next, we will describe each component of our proposed GDP model in detail. Generative Dialogue Policy ::: Notations and Task Formulation Given the user utterance $U_t$ at turn $t$ and the dialogue context $C_{t-1}$ which contains the result of the dialogue belief tracker at turn $t-1$, the task-oriented dialog system needs to generate user's intents $C_t$ by dialogue belief tracker and then uses this information to get the knowledge base query result $k_t \in \mathbb {R}^k$. Then the model needs to generate the next dialogue action $A_t$ based on $k_t$, $U_t$ and $C_t$. The natural language generator provides the template-based response $R_t$ as the final reply by using $A_t$. The $U_t$ and $C_t$ are the sequences, $k_t$ is a one-hot vector representing the number of the query results. For baselines, in this paper, the $A_t$ is the classification result of the next dialogue action, but in our proposed model it's a sequence which contains multiple acts and their corresponding parameters. Generative Dialogue Policy ::: Utterance Encoder A bidirectional GRU is used to encode the user utterance $U_t$, the last turn response $R_{t-1}$ made by the system and the dialogue context $C_{t-1}$ into a continuous representation. The vector is generated by concatenating the last forward and backward GRU states. $U_t = (w_1, w_2, ..., w_{T_m})$ is the user utterance at turn $t$. $C_{t-1}=(c_1, c_2, ..., c_{T_n})$ is the dialogue context made by dialogue belief tracker at $t-1$ turn. $R_{t-1}$ is the response made by our task-oriented dialogue system at last turn. Then the words of $[C_{t-1}, R_{t-1}, U_t]$ are firstly mapped into an embedding space and further serve as the inputs of each step to the bidirectional GRU. Let $n$ denotes the number of words in the sequence $[C_{t-1}, R_{t-1}, U_t]$. The $\overrightarrow{h_{t^{\prime }}^u}$ and $\overleftarrow{h_{t^{\prime }}^u}$ represent the forward and backward GRU state outputs at time step $t^{\prime }$. The encoder output of timestep $i$ denote as $\overline{h_i^u}$. where $e([C_{t-1}, R_{t-1}, U_t])$ is the embedding of the input sequence, $d_h$ is the hidden size of the GRU. $H_u$ contains the encoder hidden state of each timestep, which will be used by attention mechanism in dialogue policy maker. 
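A minimal sketch of the attention mechanism above: $\eta $ is realized as an MLP with tanh, the scores are normalized with a softmax over encoder positions, and $C_i$ is the weighted average of the encoder states. The attention width of 128 is our choice, not a value from the text.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """C_i = sum_j alpha_ij * h_j, with alpha_ij = softmax_j(eta(s_{i-1}, h_j))."""
    def __init__(self, dec_size, enc_size, attn_size=128):
        super().__init__()
        self.w_s = nn.Linear(dec_size, attn_size)
        self.w_h = nn.Linear(enc_size, attn_size)
        self.v = nn.Linear(attn_size, 1)

    def forward(self, s_prev, H):   # s_prev: (batch, dec_size), H: (batch, T, enc_size)
        scores = self.v(torch.tanh(self.w_s(s_prev).unsqueeze(1) + self.w_h(H)))  # (batch, T, 1)
        alpha = torch.softmax(scores, dim=1)   # normalize over encoder positions
        context = (alpha * H).sum(dim=1)       # weighted average of encoder states
        return context, alpha.squeeze(-1)
```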
Generative Dialogue Policy ::: Dialogue State Tracker Dialogue state tracker maintains the state of a conversation and collects the user's goals during the dialogue. Recent work successfully represents this component as discriminative classifiers. BIBREF5 verified that the generation is a better way to model the dialogue state tracker. Specifically, we use a GRU as the generator to decode the $C_t$ of current turn. In order to capture user intent information accurately, the basic attention mechanism is calculated when the decoder decodes the $C_t$ at each step, which is the same as the Eq. (DISPLAY_FORM12). where $m$ is the length of $C_t$, $e(y_i)$ is the embedding of the token, $d_h$ is the hidden size of the GRU and the hidden state at $i$ timestep of the RNN in dialogue state tracker denote as $h_i^d$. The decoded token at step $i$ denotes as $y_i^d$. Generative Dialogue Policy ::: Knowledge Base Knowledge base is a database that stores information about the related task. For example, in the restaurant reservation, a knowledge base stores the information of all the restaurants, such as location and price. After dialogue belief tracker, the $C_t$ will be used as the constraints to search the results in knowledge base. Then the one-hot vector $k_t$ will be produced when the system gets the number of the results. The search result $k_t$ has a great influence on dialogue policy. For example, if the result has multiple matches, the system should request more constraints of the user. In practice, let $k_t$ be an one-hot vector of 20 dimensions to represent the number of query results. Then $k_t$ will be used as the cue for dialogue policy maker. Generative Dialogue Policy ::: Dialogue Policy Maker In task-oriented dialogue systems, supervised classification is a straightforward solution for dialogue policy modeling. However, we observe that classification cannot hold enough information for dialogue policy modeling. The generative approach is another way to model the dialogue policy maker for task-oriented dialogue systems, which generates the next dialogue acts and their corresponding parameters based on the dialogue context word by word. Thus the generative approach converts the dialogue policy learning problem into a sequence optimization problem. The dialogue policy maker generates the next dialogue action $A_t$ based on $k_t$ and $[H_u, H_d]$. Our proposed model uses the GRU as the action decoder to decode the acts and their parameters for the response. Particularly, at step $i$, for decoding $y_i^p$ of $A_t$, the decoder GRU takes the embedding of $y_{i-1}^p$ to generate a hidden vector $h_i^p$. Basic attention mechanism is calculated. where $e$ is the embedding of the token, $c_u$ is the context vector of the input utterance and $c_d$ is the context vector of the dialogue state tracker. $h_i^p$ is the hidden state of the GRU in dialogue policy maker at $i$ timestep. where $y_i^p$ is the token decoded at $i$ timestep. And the final results of dialogue policy maker denote as $A_t$, and the $k$ is the length of it. In our proposed model, the dialogue policy maker can be viewed as a decoder of the seq2seq model conditioned on $[C_{t-1},R_{t-1},U_t]$ and $k_t$. Generative Dialogue Policy ::: Nature Language Generator After getting the dialogue action $A_t$ by the learned dialogue policy maker, the task-oriented dialogue system needs to generate an appropriate response $R_t$ for users. We construct the natural language generator by using template sentences. 
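Two small sketches for the components above: building the 20-dimensional KB cue $k_t$, and one decoding step of the generative dialogue policy maker. The policy decoder hidden size of 150 and embedding size of 300 come from the parameter settings reported later; how $c_u$, $c_d$ and $k_t$ are fed into the GRU is not fully specified in the text, so simple concatenation is assumed here.

```python
import torch
import torch.nn as nn

def kb_result_vector(num_results, dims=20):
    """One-hot k_t encoding the number of knowledge-base matches (capped at dims-1)."""
    k_t = torch.zeros(dims)
    k_t[min(num_results, dims - 1)] = 1.0
    return k_t

class PolicyDecoderStep(nn.Module):
    """One step of the policy maker's GRU decoder over act/parameter tokens.

    ctx_size is the combined width of the two attention contexts c_u and c_d;
    concatenating them with the previous token embedding and k_t is an assumption.
    """
    def __init__(self, act_vocab, emb_size=300, ctx_size=700, kb_size=20, hidden=150):
        super().__init__()
        self.emb = nn.Embedding(act_vocab, emb_size)
        self.cell = nn.GRUCell(emb_size + ctx_size + kb_size, hidden)
        self.out = nn.Linear(hidden, act_vocab)

    def forward(self, y_prev, c_u, c_d, k_t, h_prev):
        x = torch.cat([self.emb(y_prev), c_u, c_d, k_t], dim=-1)
        h = self.cell(x, h_prev)
        return torch.log_softmax(self.out(h), dim=-1), h
```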
For each dataset, we extract all the system responses, then we manually modify responses to construct the sentence templates for task-oriented dialogue systems. In our proposed model, the sequence of the acts and parameters $A_t$ will be used for searching appropriate template. However, the classification-based baselines use the categories of acts and their corresponding parameters to search the corresponding template. Generative Dialogue Policy ::: Training In supervised learning, because our proposed model is built in a seq2seq way, the standard cross entropy is adopted as our objective function to train dialogue belief tracker and dialogue policy maker. After supervised learning, the dialogue policy can be further updated by using reinforcement learning. In the context of reinforcement learning, the decoder of dialogue policy maker can be viewed as a policy network, denoted as $\pi _{\theta }(y_j)$ for decoding $y_j$, $\theta $ is the parameters of the decoder. Accordingly, the hidden state created by GRU is the corresponding state, and the choice of the current token $y_j$ is an action. Reward function is also very important for reinforcement learning when decoding every token. To encourage our policy maker to generate correct acts and their corresponding parameters, we set the reward function as follows: once the dialogue acts and their parameters are decoded correctly, the reward is 2; otherwise, the reward is -5; only the label of the dialogue act is decoded correctly but parameters is wrong, the reward is 1; $\lambda $ is a decay parameter. More details are shown in Sec SECREF41. In our proposed model, rewards can only be obtained at the end of decoding $A_t$. In order to get the rewards at each decoding step, we sample some results $A_t$ after choosing $y_j$, and the reward of $y_j$ is set as the average of all the sampled results' rewards. In order to ensure that the model's performance is stable during the fine-tuning phase of reinforcement learning, we freeze the parameters of user utterance and dialogue belief tracker, only the parameters of the dialogue policy maker will be optimized by reinforcement learning. Policy gradient algorithm REINFORCE BIBREF18 is used for pretrained dialogue policy maker: where the $m$ is the length of the decoded action. The objective function $J$ can be optimized by gradient descent. Experiments We evaluate the performance of the proposed model in three aspects: (1) the accuracy of the dialogue state tracker, it aims to show the impact of the dialogue state tracker on the dialogue policy maker; (2) the accuracy of dialogue policy maker, it aims to explain the performance of different methods of constructing dialogue policy; (3) the quality of the final response, it aims to explain the impact of the dialogue policy on the final dialogue response. The evaluation metrics are listed as follows: BPRA: Belief Per-Response Accuracy (BPRA) tests the ability to generate the correct user intents during the dialogue. This metric is used to evaluate the accuracy of dialogue belief tracker BIBREF1. APRA: Action Per-Response Accuracy (APRA) evaluates the per-turn accuracy of the dialogue actions generated by dialogue policy maker. For baselines, APRA evaluates the classification accuracy of the dialogue policy maker. But our model actually generates each individual token of actions, and we consider a prediction to be correct only if every token of the model output matches the corresponding token in the ground truth. 
BLEU BIBREF19: The metric evaluates the quality of the final response generated by natural language generator. The metric is usually used to measure the performance of the task-oriented dialogue system. We also choose the following metrics to evaluate the efficiency of training the model: $\mathbf {Time_{full}}$: The time for training the whole model, which is important for industry settings. $\mathbf {Time_{DP}}$: The time for training the dialogue policy maker in a task-oriented dialogue system. Experiments ::: Datasets We adopt the DSTC2 BIBREF20 dataset and Maluuba BIBREF21 dataset to evaluate our proposed model. Both of them are the benchmark datasets for building the task-oriented dialog systems. Specifically, the DSTC2 is a human-machine dataset in the single domain of restaurant searching. The Maluuba is a very complex human-human dataset in travel booking domain which contains more slots and values than DSTC2. Detailed slot information in each dataset is shown in Table TABREF34. Experiments ::: Baselines For comparison, we choose two state-of-the-art baselines and their variants. E2ECM BIBREF11: In dialogue policy maker, it adopts a classic classification for skeletal sentence template. In our implement, we construct multiple binary classifications for each act to search the sentence template according to the work proposed by BIBREF11. CDM BIBREF10: This approach designs a group of classifications (two multi-class classifications and some binary classifications) to model the dialogue policy. E2ECM+RL: It fine tunes the classification parameters of the dialogue policy by REINFORCE BIBREF18. CDM+RL: It fine tunes the classification of the act and corresponding parameters by REINFORCE BIBREF18. In order to verify the performance of the dialogue policy maker, the utterance encoder and dialogue belief tracker of our proposed model and baselines is the same, only dialogue policy maker is different. Experiments ::: Parameters settings For all models, the hidden size of dialogue belief tracker and utterance encoder is 350, and the embedding size $d_{emb}$ is set to 300. For our proposed model, the hidden size of decoder in dialogue policy maker is 150. The vocabulary size $|V|$ is 540 for DSTC2 and 4712 for Maluuba. And the size of $k_t$ is set to 20. An Adam optimizer BIBREF22 is used for training our models and baselines, with a learning rate of 0.001 for supervised training and 0.0001 for reinforcement learning. In reinforcement learning, the decay parameter $\lambda $ is set to 0.8. The weight decay is set to 0.001. And early stopping is performed on developing set. Experiments ::: Experimental Results The experimental results of the proposed model and baselines will be analyzed from the following aspects. BPRA Results: As shown in Table TABREF35, most of the models have similar performance on BPRA on these two datasets, which can guarantee a consistent impact on the dialogue policy maker. All the models perform very well in BPRA on DSTC2 dataset. On Maluuba dataset, the BPRA decreases because of the complex domains. We can notice that BPRA of CDM is slightly worse than other models on Maluuba dataset, the reason is that the CDM's dialogue policy maker contains lots of classifications and has the bigger loss than other models because of complex domains, which affects the training of the dialogue belief tracker. APRA Results: Compared with baselines, GDP achieves the best performance in APRA on two datasets. It can be noted that we do not compare with the E2ECM baseline in APRA. 
E2ECM only uses a simple classifier to recognize the label of the acts and ignores the parameters information. In our experiment, APRA of E2ECM is slightly better than our method. Considering the lack of parameters of the acts, it's unfair for our GDP method. Furthermore, the CDM baseline considers the parameters of the act. But GDP is far better than CDM in supervised learning and reinforcement learning. BLEU Results: GDP significantly outperforms the baselines on BLEU. As mentioned above, E2ECM is actually slightly better than GDP in APRA. But in fact, we can find that the language quality of the response generated by GDP is still better than E2ECM, which proves that lack of enough parameters information makes it difficult to find the appropriate sentence template in NLG. It can be found that the BLEU of all models is very poor on Maluuba dataset. The reason is that Maluuba is a human-human task-oriented dialogue dataset, the utterances are very flexible, the natural language generator for all methods is difficult to generate an accurate utterance based on the context. And DSTC2 is a human-machine dialog dataset. The response is very regular so the effectiveness of NLG will be better than that of Maluuba. But from the results, the GDP is still better than the baselines on Maluuba dataset, which also verifies that our proposed method is more accurate in modeling dialogue policy on complex domains than the classification-based methods. Time and Model Size: In order to obtain more accurate and complete dialogue policy for task-oriented dialogue systems, the proposed model has more parameters on the dialogue policy maker than baselines. As shown in Figure FIGREF44, E2ECM has the minimal dialogue policy parameters because of the simple classification. It needs minimum training time, but the performance of E2ECM is bad. The number of parameters in the CDM model is slightly larger than E2ECM. However, because both of them are classification methods, they all lose some important information about dialogue policy. Therefore, we can see from the experimental results that the quality of CDM's dialogue policy is as bad as E2ECM. The number of dialogue policy maker's parameters in GDP model is much larger than baselines. Although the proposed model need more time to be optimized by supervised learning and reinforcement learning, the performance is much better than all baselines. Experiments ::: Case Study Table TABREF43 illustrates an example of our proposed model and baselines on DSTC2 dataset. In this example, a user's goal is to find a cheap restaurant in the east part of the town. In the current turn, the user wants to get the address of the restaurant. E2ECM chooses the inform and offer acts accurately, but the lack of the inform's parameters makes the final output deviate from the user's goal. CDM generates the parameters of offer successfully, but the lack of the information of inform also leads to a bad result. By contrast, the proposed model GDP can generate all the acts and their corresponding parameters as the dialogue action. Interestingly, the final result of GDP is exactly the same as the ground truth, which verifies that the proposed model is better than the state-of-the-art baselines. Conclusion In this paper, we propose a novel model named GDP. Our proposed model treats the dialogue policy modeling as the generative task instead of the discriminative task which can hold more information for dialogue policy modeling. 
We evaluate the GDP on two benchmark task-oriented dialogue datasets. Extensive experiments show that GDP outperforms the existing classification-based methods on both action accuracy and BLEU.
BPRA, APRA, BLEU
c165ea43256d7ee1b1fb6f5c0c8af5f7b585e60d
c165ea43256d7ee1b1fb6f5c0c8af5f7b585e60d_0
Q: How much is proposed model better than baselines in performed experiments? Text: Introduction Task-oriented dialogue system is an important tool to build personal virtual assistants, which can help users to complete most of the daily tasks by interacting with devices via natural language. It's attracting increasing attention of researchers, and lots of works have been proposed in this area BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. The existing task-oriented dialogue systems usually consist of four components: (1) natural language understanding (NLU), it tries to identify the intent of a user; (2) dialogue state tracker (DST), it keeps the track of user goals and constraints in every turn; (3) dialogue policy maker (DP), it aims to generate the next available dialogue action; and (4) natural language generator (NLG), it generates a natural language response based on the dialogue action. Among the four components, dialogue policy maker plays a key role in order to complete dialogues effectively, because it decides the next dialogue action to be executed. As far as we know, the dialogue policy makers in most existing task-oriented dialogue systems just use the classifiers of the predefined acts to obtain dialogue policy BIBREF0, BIBREF2, BIBREF4, BIBREF8, BIBREF9. The classification-based dialogue policy learning methods can assign either only a dialogue act and its corresponding parameters BIBREF10, BIBREF2, BIBREF0 or multiple dialogue acts without their corresponding parameters for a dialogue action BIBREF11. However, all these existing methods cannot obtain multiple dialogue acts and their corresponding parameters for a dialogue action at the same time. Intuitively, it will be more reasonable to construct multiple dialogue acts and their corresponding parameters for a dialogue action at the same time. For example, it can be shown that there are 49.4% of turns in the DSTC2 dataset and 61.5% of turns in the Maluuba dataset have multiple dialogue acts and their corresponding parameters as the dialogue action. If multiple dialogue acts and their corresponding parameters can be obtained at the same time, the final response of task-oriented dialogue systems will become more accurate and effective. For example, as shown in Figure FIGREF3, a user wants to get the name of a cheap french restaurant. The correct dialogue policy should generate three acts in current dialogue turn: offer(name=name_slot), inform(food=french) and inform(food=cheap). Thus, the user's real thought may be: “name_slot is a cheap french restaurant". If losing the act offer, the system may generate a response like “There are some french restaurants", which will be far from the user's goal. To address this challenge, we propose a Generative Dialogue Policy model (GDP) by casting the dialogue policy learning problem as a sequence optimization problem. The proposed model generates a series of acts and their corresponding parameters by the learned dialogue policy. Specifically, our proposed model uses a recurrent neural network (RNN) as action decoder to construct dialogue policy maker instead of traditional classifiers. Attention mechanism is used to help the decoder decode dialogue acts and their corresponding parameters, and then the template-based natural language generator uses the results of the dialogue policy maker to choose an appropriate sentence template as the final response to the user. 
Extensive experiments conducted on two benchmark datasets verify the effectiveness of our proposed method. Our contributions in this work are three-fold. The existing methods cannot construct multiple dialogue acts and their corresponding parameters at the same time. In this paper, We propose a novel generative dialogue policy model to solve the problem. The extensive experiments demonstrate that the proposed model significantly outperforms the state-of-the-art baselines on two benchmarks. We publicly release the source code. Related Work Usually, the existing task-oriented dialogue systems use a pipeline of four separate modules: natural language understanding, dialogue belief tracker, dialogue policy and natural language generator. Among these four modules, dialogue policy maker plays a key role in task-oriented dialogue systems, which generates the next dialogue action. As far as we know, nearly all the existing approaches obtain the dialogue policy by using the classifiers of all predefined dialogue acts BIBREF12, BIBREF13. There are usually two kinds of dialogue policy learning methods. One constructs a dialogue act and its corresponding parameters for a dialogue action. For example, BIBREF0 constructs a simple classifier for all the predefined dialogue acts. BIBREF2 build a complex classifier for some predefined dialogue acts, addtionally BIBREF2 adds two acts for each parameter: one to inform its value and the other to request it. The other obtains the dialogue policy by using multi-label classification to consider multiple dialogue acts without their parameters. BIBREF11 performs multi-label multi-class classification for dialogue policy learning and then the multiple acts can be decided based on a threshold. Based on these classifiers, the reinforcement learning can be used to further update the dialogue policy of task-oriented dialogue systems BIBREF3, BIBREF14, BIBREF9. In the real scene, an correct dialogue action usually consists of multiple dialogue acts and their corresponding parameters. However, it is very hard for existing classification-based dialogue policy maker to achieve this goal. Thus, in this paper we propose a novel generative dialogue policy maker to address this issue by casting the dialogue policy learning problem as a sequence optimization problem. Technical Background ::: Encoder-Decoder Seq2Seq Models Seq2Seq model was first introduced by BIBREF15 for statistical machine translation. It uses two recurrent neural networks (RNN) to solve the sequence-to-sequence mapping problem. One called encoder encodes the user utterance into a dense vector representing its semantics, the other called decoder decodes this vector to the target sentence. Now Seq2Seq framework has already been used in task-oriented dialog systems such as BIBREF4 and BIBREF1, and shows the challenging performance. In the Seq2Seq model, given the user utterance $Q=(x_1, x_2, ..., x_n)$, the encoder squeezes it into a context vector $C$ and then used by decoder to generate the response $R=(y_1, y_2, ..., y_m)$ word by word by maximizing the generation probability of $R$ conditioned on $Q$. The objective function of Seq2Seq can be written as: In particular, the encoder RNN produces the context vector $C$ by doing calculation below: The $h_t$ is the hidden state of the encoder RNN at time step $t$ and $f$ is the non-linear transformation which can be a long-short term memory unit LSTM BIBREF16 or a gated recurrent unit GRU BIBREF15. In this paper, we implement $f$ by using GRU. 
The decoder RNN generates each word in reply conditioned on the context vector $C$. The probability distribution of candidate words at every time step $t$ is calculated as: The $s_t$ is the hidden state of decoder RNN at time step $t$ and $y_{t-1}$ is the generated word in the reply at time $t-1$ calculated by softmax operations. Technical Background ::: Attention Mechanism Attention mechanisms BIBREF17 have been proved to improved effectively the generation quality for the Seq2Seq framework. In Seq2Seq with attention, each $y_i$ corresponds to a context vector $C_i$ which is calculated dynamically. It is a weighted average of all hidden states of the encoder RNN. Formally, $C_i$ is defined as $C_i=\sum _{j=1}^{n} \alpha _{ij}h_j$, where $\alpha _{ij}$ is given by: where $s_{i-1}$ is the last hidden state of the decoder, the $\eta $ is often implemented as a multi-layer-perceptron (MLP) with tanh as the activation function. Generative Dialogue Policy Figure FIGREF13 shows the overall system architecture of the proposed GDP model. Our model contains five main components: (1) utterance encoder; (2) dialogue belief tracker; (3) dialogue policy maker; (4) knowledge base; (5) template-based natural language generator. Next, we will describe each component of our proposed GDP model in detail. Generative Dialogue Policy ::: Notations and Task Formulation Given the user utterance $U_t$ at turn $t$ and the dialogue context $C_{t-1}$ which contains the result of the dialogue belief tracker at turn $t-1$, the task-oriented dialog system needs to generate user's intents $C_t$ by dialogue belief tracker and then uses this information to get the knowledge base query result $k_t \in \mathbb {R}^k$. Then the model needs to generate the next dialogue action $A_t$ based on $k_t$, $U_t$ and $C_t$. The natural language generator provides the template-based response $R_t$ as the final reply by using $A_t$. The $U_t$ and $C_t$ are the sequences, $k_t$ is a one-hot vector representing the number of the query results. For baselines, in this paper, the $A_t$ is the classification result of the next dialogue action, but in our proposed model it's a sequence which contains multiple acts and their corresponding parameters. Generative Dialogue Policy ::: Utterance Encoder A bidirectional GRU is used to encode the user utterance $U_t$, the last turn response $R_{t-1}$ made by the system and the dialogue context $C_{t-1}$ into a continuous representation. The vector is generated by concatenating the last forward and backward GRU states. $U_t = (w_1, w_2, ..., w_{T_m})$ is the user utterance at turn $t$. $C_{t-1}=(c_1, c_2, ..., c_{T_n})$ is the dialogue context made by dialogue belief tracker at $t-1$ turn. $R_{t-1}$ is the response made by our task-oriented dialogue system at last turn. Then the words of $[C_{t-1}, R_{t-1}, U_t]$ are firstly mapped into an embedding space and further serve as the inputs of each step to the bidirectional GRU. Let $n$ denotes the number of words in the sequence $[C_{t-1}, R_{t-1}, U_t]$. The $\overrightarrow{h_{t^{\prime }}^u}$ and $\overleftarrow{h_{t^{\prime }}^u}$ represent the forward and backward GRU state outputs at time step $t^{\prime }$. The encoder output of timestep $i$ denote as $\overline{h_i^u}$. where $e([C_{t-1}, R_{t-1}, U_t])$ is the embedding of the input sequence, $d_h$ is the hidden size of the GRU. $H_u$ contains the encoder hidden state of each timestep, which will be used by attention mechanism in dialogue policy maker. 
Generative Dialogue Policy ::: Dialogue State Tracker Dialogue state tracker maintains the state of a conversation and collects the user's goals during the dialogue. Recent work successfully represents this component as discriminative classifiers. BIBREF5 verified that the generation is a better way to model the dialogue state tracker. Specifically, we use a GRU as the generator to decode the $C_t$ of current turn. In order to capture user intent information accurately, the basic attention mechanism is calculated when the decoder decodes the $C_t$ at each step, which is the same as the Eq. (DISPLAY_FORM12). where $m$ is the length of $C_t$, $e(y_i)$ is the embedding of the token, $d_h$ is the hidden size of the GRU and the hidden state at $i$ timestep of the RNN in dialogue state tracker denote as $h_i^d$. The decoded token at step $i$ denotes as $y_i^d$. Generative Dialogue Policy ::: Knowledge Base Knowledge base is a database that stores information about the related task. For example, in the restaurant reservation, a knowledge base stores the information of all the restaurants, such as location and price. After dialogue belief tracker, the $C_t$ will be used as the constraints to search the results in knowledge base. Then the one-hot vector $k_t$ will be produced when the system gets the number of the results. The search result $k_t$ has a great influence on dialogue policy. For example, if the result has multiple matches, the system should request more constraints of the user. In practice, let $k_t$ be an one-hot vector of 20 dimensions to represent the number of query results. Then $k_t$ will be used as the cue for dialogue policy maker. Generative Dialogue Policy ::: Dialogue Policy Maker In task-oriented dialogue systems, supervised classification is a straightforward solution for dialogue policy modeling. However, we observe that classification cannot hold enough information for dialogue policy modeling. The generative approach is another way to model the dialogue policy maker for task-oriented dialogue systems, which generates the next dialogue acts and their corresponding parameters based on the dialogue context word by word. Thus the generative approach converts the dialogue policy learning problem into a sequence optimization problem. The dialogue policy maker generates the next dialogue action $A_t$ based on $k_t$ and $[H_u, H_d]$. Our proposed model uses the GRU as the action decoder to decode the acts and their parameters for the response. Particularly, at step $i$, for decoding $y_i^p$ of $A_t$, the decoder GRU takes the embedding of $y_{i-1}^p$ to generate a hidden vector $h_i^p$. Basic attention mechanism is calculated. where $e$ is the embedding of the token, $c_u$ is the context vector of the input utterance and $c_d$ is the context vector of the dialogue state tracker. $h_i^p$ is the hidden state of the GRU in dialogue policy maker at $i$ timestep. where $y_i^p$ is the token decoded at $i$ timestep. And the final results of dialogue policy maker denote as $A_t$, and the $k$ is the length of it. In our proposed model, the dialogue policy maker can be viewed as a decoder of the seq2seq model conditioned on $[C_{t-1},R_{t-1},U_t]$ and $k_t$. Generative Dialogue Policy ::: Nature Language Generator After getting the dialogue action $A_t$ by the learned dialogue policy maker, the task-oriented dialogue system needs to generate an appropriate response $R_t$ for users. We construct the natural language generator by using template sentences. 
For each dataset, we extract all the system responses, then we manually modify responses to construct the sentence templates for task-oriented dialogue systems. In our proposed model, the sequence of the acts and parameters $A_t$ will be used for searching appropriate template. However, the classification-based baselines use the categories of acts and their corresponding parameters to search the corresponding template. Generative Dialogue Policy ::: Training In supervised learning, because our proposed model is built in a seq2seq way, the standard cross entropy is adopted as our objective function to train dialogue belief tracker and dialogue policy maker. After supervised learning, the dialogue policy can be further updated by using reinforcement learning. In the context of reinforcement learning, the decoder of dialogue policy maker can be viewed as a policy network, denoted as $\pi _{\theta }(y_j)$ for decoding $y_j$, $\theta $ is the parameters of the decoder. Accordingly, the hidden state created by GRU is the corresponding state, and the choice of the current token $y_j$ is an action. Reward function is also very important for reinforcement learning when decoding every token. To encourage our policy maker to generate correct acts and their corresponding parameters, we set the reward function as follows: once the dialogue acts and their parameters are decoded correctly, the reward is 2; otherwise, the reward is -5; only the label of the dialogue act is decoded correctly but parameters is wrong, the reward is 1; $\lambda $ is a decay parameter. More details are shown in Sec SECREF41. In our proposed model, rewards can only be obtained at the end of decoding $A_t$. In order to get the rewards at each decoding step, we sample some results $A_t$ after choosing $y_j$, and the reward of $y_j$ is set as the average of all the sampled results' rewards. In order to ensure that the model's performance is stable during the fine-tuning phase of reinforcement learning, we freeze the parameters of user utterance and dialogue belief tracker, only the parameters of the dialogue policy maker will be optimized by reinforcement learning. Policy gradient algorithm REINFORCE BIBREF18 is used for pretrained dialogue policy maker: where the $m$ is the length of the decoded action. The objective function $J$ can be optimized by gradient descent. Experiments We evaluate the performance of the proposed model in three aspects: (1) the accuracy of the dialogue state tracker, it aims to show the impact of the dialogue state tracker on the dialogue policy maker; (2) the accuracy of dialogue policy maker, it aims to explain the performance of different methods of constructing dialogue policy; (3) the quality of the final response, it aims to explain the impact of the dialogue policy on the final dialogue response. The evaluation metrics are listed as follows: BPRA: Belief Per-Response Accuracy (BPRA) tests the ability to generate the correct user intents during the dialogue. This metric is used to evaluate the accuracy of dialogue belief tracker BIBREF1. APRA: Action Per-Response Accuracy (APRA) evaluates the per-turn accuracy of the dialogue actions generated by dialogue policy maker. For baselines, APRA evaluates the classification accuracy of the dialogue policy maker. But our model actually generates each individual token of actions, and we consider a prediction to be correct only if every token of the model output matches the corresponding token in the ground truth. 
BLEU BIBREF19: This metric evaluates the quality of the final response generated by the natural language generator and is commonly used to measure the performance of task-oriented dialogue systems. We also choose the following metrics to evaluate the efficiency of training the model: $\mathbf {Time_{full}}$: The time for training the whole model, which is important for industry settings. $\mathbf {Time_{DP}}$: The time for training the dialogue policy maker in a task-oriented dialogue system. Experiments ::: Datasets We adopt the DSTC2 BIBREF20 dataset and the Maluuba BIBREF21 dataset to evaluate our proposed model. Both are benchmark datasets for building task-oriented dialogue systems. Specifically, DSTC2 is a human-machine dataset in the single domain of restaurant searching. Maluuba is a very complex human-human dataset in the travel booking domain, which contains more slots and values than DSTC2. Detailed slot information for each dataset is shown in Table TABREF34. Experiments ::: Baselines For comparison, we choose two state-of-the-art baselines and their variants. E2ECM BIBREF11: In its dialogue policy maker, it adopts a classic classification approach over skeletal sentence templates. In our implementation, we construct multiple binary classifiers, one for each act, to search for the sentence template, following the work proposed by BIBREF11. CDM BIBREF10: This approach designs a group of classifiers (two multi-class classifiers and some binary classifiers) to model the dialogue policy. E2ECM+RL: It fine-tunes the classification parameters of the dialogue policy with REINFORCE BIBREF18. CDM+RL: It fine-tunes the classification of the act and its corresponding parameters with REINFORCE BIBREF18. In order to verify the performance of the dialogue policy maker, the utterance encoder and dialogue belief tracker of our proposed model and of the baselines are the same; only the dialogue policy maker differs. Experiments ::: Parameters settings For all models, the hidden size of the dialogue belief tracker and utterance encoder is 350, and the embedding size $d_{emb}$ is set to 300. For our proposed model, the hidden size of the decoder in the dialogue policy maker is 150. The vocabulary size $|V|$ is 540 for DSTC2 and 4712 for Maluuba, and the size of $k_t$ is set to 20. An Adam optimizer BIBREF22 is used for training our models and the baselines, with a learning rate of 0.001 for supervised training and 0.0001 for reinforcement learning. In reinforcement learning, the decay parameter $\lambda $ is set to 0.8, the weight decay is set to 0.001, and early stopping is performed on the development set. Experiments ::: Experimental Results The experimental results of the proposed model and the baselines are analyzed from the following aspects. BPRA Results: As shown in Table TABREF35, most of the models have similar BPRA performance on these two datasets, which guarantees a consistent impact on the dialogue policy maker. All the models perform very well in BPRA on the DSTC2 dataset. On the Maluuba dataset, BPRA decreases because of the more complex domains. We can notice that the BPRA of CDM is slightly worse than that of the other models on the Maluuba dataset; the reason is that CDM's dialogue policy maker contains many classifiers and therefore has a larger loss than the other models on complex domains, which affects the training of the dialogue belief tracker. APRA Results: Compared with the baselines, GDP achieves the best APRA performance on both datasets. Note that we do not compare with the E2ECM baseline in APRA.
E2ECM only uses a simple classifier to recognize the labels of the acts and ignores the parameter information. In our experiments, the APRA of E2ECM is slightly better than that of our method, but since E2ECM does not predict the acts' parameters, this comparison would be unfair to our GDP method. The CDM baseline, in contrast, does consider the parameters of the act, and GDP is far better than CDM in both supervised learning and reinforcement learning. BLEU Results: GDP significantly outperforms the baselines on BLEU. As mentioned above, E2ECM is actually slightly better than GDP in APRA, yet the language quality of the responses generated by GDP is still better than that of E2ECM, which shows that the lack of sufficient parameter information makes it difficult to find the appropriate sentence template in NLG. The BLEU of all models is very poor on the Maluuba dataset. The reason is that Maluuba is a human-human task-oriented dialogue dataset with very flexible utterances, so it is difficult for the natural language generator of any method to produce an accurate utterance from the context. DSTC2, on the other hand, is a human-machine dialogue dataset whose responses are very regular, so the effectiveness of NLG is better than on Maluuba. Even so, GDP is still better than the baselines on the Maluuba dataset, which also verifies that our proposed method models dialogue policy on complex domains more accurately than the classification-based methods. Time and Model Size: In order to obtain a more accurate and complete dialogue policy for task-oriented dialogue systems, the proposed model has more parameters in the dialogue policy maker than the baselines. As shown in Figure FIGREF44, E2ECM has the fewest dialogue policy parameters because of its simple classification; it needs the least training time, but its performance is poor. The number of parameters in the CDM model is slightly larger than in E2ECM. However, because both of them are classification methods, they lose some important information about the dialogue policy. Therefore, we can see from the experimental results that the quality of CDM's dialogue policy is as bad as E2ECM's. The number of dialogue policy maker parameters in the GDP model is much larger than in the baselines. Although the proposed model needs more time to be optimized by supervised learning and reinforcement learning, its performance is much better than that of all baselines. Experiments ::: Case Study Table TABREF43 illustrates an example of our proposed model and the baselines on the DSTC2 dataset. In this example, the user's goal is to find a cheap restaurant in the east part of the town. In the current turn, the user wants to get the address of the restaurant. E2ECM chooses the inform and offer acts accurately, but the lack of inform's parameters makes the final output deviate from the user's goal. CDM generates the parameters of offer successfully, but the missing information for inform also leads to a bad result. By contrast, the proposed GDP model can generate all the acts and their corresponding parameters as the dialogue action. Interestingly, the final result of GDP is exactly the same as the ground truth, which verifies that the proposed model is better than the state-of-the-art baselines. Conclusion In this paper, we propose a novel model named GDP. Our proposed model treats dialogue policy modeling as a generative task instead of a discriminative task, which allows it to hold more information for dialogue policy modeling.
We evaluate the GDP on two benchmark task-oriented dialogue datasets. Extensive experiments show that GDP outperforms the existing classification-based methods on both action accuracy and BLEU.
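As a small illustration of the reward scheme and REINFORCE objective described in the Training subsection above, the following Python sketch shows one way the +2 / +1 / -5 rewards and the decay parameter $\lambda$ could be wired together. The reward values come from the paper, but the split of the decoded sequence into acts and parameters, the use of $\lambda$ as a per-step discount, and all function names are assumptions made for illustration.

import torch

def action_reward(pred_acts, pred_params, gold_acts, gold_params):
    """Per-sequence reward as described in the Training subsection:
    +2 if both acts and parameters are decoded correctly,
    +1 if only the act labels are correct but the parameters are wrong,
    -5 otherwise. How acts and parameters are separated out of the
    decoded token sequence is an assumed implementation detail."""
    if pred_acts == gold_acts and pred_params == gold_params:
        return 2.0
    if pred_acts == gold_acts:
        return 1.0
    return -5.0

def reinforce_loss(log_probs, step_rewards, lam=0.8):
    """REINFORCE objective for the policy decoder.

    log_probs:    log pi_theta(y_j) for each decoded token y_j.
    step_rewards: per-step rewards, e.g. the average of action_reward
                  over sampled completions after choosing y_j.
    lam:          decay parameter; applying it as a per-step discount
                  is an assumption for this sketch.
    """
    loss = 0.0
    for j, (lp, r) in enumerate(zip(log_probs, step_rewards)):
        loss = loss - (lam ** j) * r * lp
    return loss / max(len(log_probs), 1)

# usage sketch with dummy log-probabilities from the policy decoder
lp = [torch.log(torch.tensor(0.7)), torch.log(torch.tensor(0.5))]
loss = reinforce_loss(lp, [2.0, 2.0])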
most of the models have similar performance on BPRA: DSTC2 (+0.0015), Maluuba (+0.0729); GDP achieves the best performance in APRA: DSTC2 (+0.2893), Maluuba (+0.2896); GDP significantly outperforms the baselines on BLEU: DSTC2 (+0.0791), Maluuba (+0.0492)
e72a672f8008bbc52b93d8037a5fedf8956136af
e72a672f8008bbc52b93d8037a5fedf8956136af_0
Q: What are state-of-the-art baselines?
E2ECM, CDM
57586358dd01633aa2ebeef892e96a549b1d1930
57586358dd01633aa2ebeef892e96a549b1d1930_0
Q: What two benchmark datasets are used? Text: Introduction Task-oriented dialogue system is an important tool to build personal virtual assistants, which can help users to complete most of the daily tasks by interacting with devices via natural language. It's attracting increasing attention of researchers, and lots of works have been proposed in this area BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7. The existing task-oriented dialogue systems usually consist of four components: (1) natural language understanding (NLU), it tries to identify the intent of a user; (2) dialogue state tracker (DST), it keeps the track of user goals and constraints in every turn; (3) dialogue policy maker (DP), it aims to generate the next available dialogue action; and (4) natural language generator (NLG), it generates a natural language response based on the dialogue action. Among the four components, dialogue policy maker plays a key role in order to complete dialogues effectively, because it decides the next dialogue action to be executed. As far as we know, the dialogue policy makers in most existing task-oriented dialogue systems just use the classifiers of the predefined acts to obtain dialogue policy BIBREF0, BIBREF2, BIBREF4, BIBREF8, BIBREF9. The classification-based dialogue policy learning methods can assign either only a dialogue act and its corresponding parameters BIBREF10, BIBREF2, BIBREF0 or multiple dialogue acts without their corresponding parameters for a dialogue action BIBREF11. However, all these existing methods cannot obtain multiple dialogue acts and their corresponding parameters for a dialogue action at the same time. Intuitively, it will be more reasonable to construct multiple dialogue acts and their corresponding parameters for a dialogue action at the same time. For example, it can be shown that there are 49.4% of turns in the DSTC2 dataset and 61.5% of turns in the Maluuba dataset have multiple dialogue acts and their corresponding parameters as the dialogue action. If multiple dialogue acts and their corresponding parameters can be obtained at the same time, the final response of task-oriented dialogue systems will become more accurate and effective. For example, as shown in Figure FIGREF3, a user wants to get the name of a cheap french restaurant. The correct dialogue policy should generate three acts in current dialogue turn: offer(name=name_slot), inform(food=french) and inform(food=cheap). Thus, the user's real thought may be: “name_slot is a cheap french restaurant". If losing the act offer, the system may generate a response like “There are some french restaurants", which will be far from the user's goal. To address this challenge, we propose a Generative Dialogue Policy model (GDP) by casting the dialogue policy learning problem as a sequence optimization problem. The proposed model generates a series of acts and their corresponding parameters by the learned dialogue policy. Specifically, our proposed model uses a recurrent neural network (RNN) as action decoder to construct dialogue policy maker instead of traditional classifiers. Attention mechanism is used to help the decoder decode dialogue acts and their corresponding parameters, and then the template-based natural language generator uses the results of the dialogue policy maker to choose an appropriate sentence template as the final response to the user. Extensive experiments conducted on two benchmark datasets verify the effectiveness of our proposed method. 
Our contributions in this work are three-fold. The existing methods cannot construct multiple dialogue acts and their corresponding parameters at the same time. In this paper, We propose a novel generative dialogue policy model to solve the problem. The extensive experiments demonstrate that the proposed model significantly outperforms the state-of-the-art baselines on two benchmarks. We publicly release the source code. Related Work Usually, the existing task-oriented dialogue systems use a pipeline of four separate modules: natural language understanding, dialogue belief tracker, dialogue policy and natural language generator. Among these four modules, dialogue policy maker plays a key role in task-oriented dialogue systems, which generates the next dialogue action. As far as we know, nearly all the existing approaches obtain the dialogue policy by using the classifiers of all predefined dialogue acts BIBREF12, BIBREF13. There are usually two kinds of dialogue policy learning methods. One constructs a dialogue act and its corresponding parameters for a dialogue action. For example, BIBREF0 constructs a simple classifier for all the predefined dialogue acts. BIBREF2 build a complex classifier for some predefined dialogue acts, addtionally BIBREF2 adds two acts for each parameter: one to inform its value and the other to request it. The other obtains the dialogue policy by using multi-label classification to consider multiple dialogue acts without their parameters. BIBREF11 performs multi-label multi-class classification for dialogue policy learning and then the multiple acts can be decided based on a threshold. Based on these classifiers, the reinforcement learning can be used to further update the dialogue policy of task-oriented dialogue systems BIBREF3, BIBREF14, BIBREF9. In the real scene, an correct dialogue action usually consists of multiple dialogue acts and their corresponding parameters. However, it is very hard for existing classification-based dialogue policy maker to achieve this goal. Thus, in this paper we propose a novel generative dialogue policy maker to address this issue by casting the dialogue policy learning problem as a sequence optimization problem. Technical Background ::: Encoder-Decoder Seq2Seq Models Seq2Seq model was first introduced by BIBREF15 for statistical machine translation. It uses two recurrent neural networks (RNN) to solve the sequence-to-sequence mapping problem. One called encoder encodes the user utterance into a dense vector representing its semantics, the other called decoder decodes this vector to the target sentence. Now Seq2Seq framework has already been used in task-oriented dialog systems such as BIBREF4 and BIBREF1, and shows the challenging performance. In the Seq2Seq model, given the user utterance $Q=(x_1, x_2, ..., x_n)$, the encoder squeezes it into a context vector $C$ and then used by decoder to generate the response $R=(y_1, y_2, ..., y_m)$ word by word by maximizing the generation probability of $R$ conditioned on $Q$. The objective function of Seq2Seq can be written as: In particular, the encoder RNN produces the context vector $C$ by doing calculation below: The $h_t$ is the hidden state of the encoder RNN at time step $t$ and $f$ is the non-linear transformation which can be a long-short term memory unit LSTM BIBREF16 or a gated recurrent unit GRU BIBREF15. In this paper, we implement $f$ by using GRU. The decoder RNN generates each word in reply conditioned on the context vector $C$. 
The probability distribution of candidate words at every time step $t$ is calculated as: The $s_t$ is the hidden state of decoder RNN at time step $t$ and $y_{t-1}$ is the generated word in the reply at time $t-1$ calculated by softmax operations. Technical Background ::: Attention Mechanism Attention mechanisms BIBREF17 have been proved to improved effectively the generation quality for the Seq2Seq framework. In Seq2Seq with attention, each $y_i$ corresponds to a context vector $C_i$ which is calculated dynamically. It is a weighted average of all hidden states of the encoder RNN. Formally, $C_i$ is defined as $C_i=\sum _{j=1}^{n} \alpha _{ij}h_j$, where $\alpha _{ij}$ is given by: where $s_{i-1}$ is the last hidden state of the decoder, the $\eta $ is often implemented as a multi-layer-perceptron (MLP) with tanh as the activation function. Generative Dialogue Policy Figure FIGREF13 shows the overall system architecture of the proposed GDP model. Our model contains five main components: (1) utterance encoder; (2) dialogue belief tracker; (3) dialogue policy maker; (4) knowledge base; (5) template-based natural language generator. Next, we will describe each component of our proposed GDP model in detail. Generative Dialogue Policy ::: Notations and Task Formulation Given the user utterance $U_t$ at turn $t$ and the dialogue context $C_{t-1}$ which contains the result of the dialogue belief tracker at turn $t-1$, the task-oriented dialog system needs to generate user's intents $C_t$ by dialogue belief tracker and then uses this information to get the knowledge base query result $k_t \in \mathbb {R}^k$. Then the model needs to generate the next dialogue action $A_t$ based on $k_t$, $U_t$ and $C_t$. The natural language generator provides the template-based response $R_t$ as the final reply by using $A_t$. The $U_t$ and $C_t$ are the sequences, $k_t$ is a one-hot vector representing the number of the query results. For baselines, in this paper, the $A_t$ is the classification result of the next dialogue action, but in our proposed model it's a sequence which contains multiple acts and their corresponding parameters. Generative Dialogue Policy ::: Utterance Encoder A bidirectional GRU is used to encode the user utterance $U_t$, the last turn response $R_{t-1}$ made by the system and the dialogue context $C_{t-1}$ into a continuous representation. The vector is generated by concatenating the last forward and backward GRU states. $U_t = (w_1, w_2, ..., w_{T_m})$ is the user utterance at turn $t$. $C_{t-1}=(c_1, c_2, ..., c_{T_n})$ is the dialogue context made by dialogue belief tracker at $t-1$ turn. $R_{t-1}$ is the response made by our task-oriented dialogue system at last turn. Then the words of $[C_{t-1}, R_{t-1}, U_t]$ are firstly mapped into an embedding space and further serve as the inputs of each step to the bidirectional GRU. Let $n$ denotes the number of words in the sequence $[C_{t-1}, R_{t-1}, U_t]$. The $\overrightarrow{h_{t^{\prime }}^u}$ and $\overleftarrow{h_{t^{\prime }}^u}$ represent the forward and backward GRU state outputs at time step $t^{\prime }$. The encoder output of timestep $i$ denote as $\overline{h_i^u}$. where $e([C_{t-1}, R_{t-1}, U_t])$ is the embedding of the input sequence, $d_h$ is the hidden size of the GRU. $H_u$ contains the encoder hidden state of each timestep, which will be used by attention mechanism in dialogue policy maker. 
Generative Dialogue Policy ::: Dialogue State Tracker Dialogue state tracker maintains the state of a conversation and collects the user's goals during the dialogue. Recent work successfully represents this component as discriminative classifiers. BIBREF5 verified that the generation is a better way to model the dialogue state tracker. Specifically, we use a GRU as the generator to decode the $C_t$ of current turn. In order to capture user intent information accurately, the basic attention mechanism is calculated when the decoder decodes the $C_t$ at each step, which is the same as the Eq. (DISPLAY_FORM12). where $m$ is the length of $C_t$, $e(y_i)$ is the embedding of the token, $d_h$ is the hidden size of the GRU and the hidden state at $i$ timestep of the RNN in dialogue state tracker denote as $h_i^d$. The decoded token at step $i$ denotes as $y_i^d$. Generative Dialogue Policy ::: Knowledge Base Knowledge base is a database that stores information about the related task. For example, in the restaurant reservation, a knowledge base stores the information of all the restaurants, such as location and price. After dialogue belief tracker, the $C_t$ will be used as the constraints to search the results in knowledge base. Then the one-hot vector $k_t$ will be produced when the system gets the number of the results. The search result $k_t$ has a great influence on dialogue policy. For example, if the result has multiple matches, the system should request more constraints of the user. In practice, let $k_t$ be an one-hot vector of 20 dimensions to represent the number of query results. Then $k_t$ will be used as the cue for dialogue policy maker. Generative Dialogue Policy ::: Dialogue Policy Maker In task-oriented dialogue systems, supervised classification is a straightforward solution for dialogue policy modeling. However, we observe that classification cannot hold enough information for dialogue policy modeling. The generative approach is another way to model the dialogue policy maker for task-oriented dialogue systems, which generates the next dialogue acts and their corresponding parameters based on the dialogue context word by word. Thus the generative approach converts the dialogue policy learning problem into a sequence optimization problem. The dialogue policy maker generates the next dialogue action $A_t$ based on $k_t$ and $[H_u, H_d]$. Our proposed model uses the GRU as the action decoder to decode the acts and their parameters for the response. Particularly, at step $i$, for decoding $y_i^p$ of $A_t$, the decoder GRU takes the embedding of $y_{i-1}^p$ to generate a hidden vector $h_i^p$. Basic attention mechanism is calculated. where $e$ is the embedding of the token, $c_u$ is the context vector of the input utterance and $c_d$ is the context vector of the dialogue state tracker. $h_i^p$ is the hidden state of the GRU in dialogue policy maker at $i$ timestep. where $y_i^p$ is the token decoded at $i$ timestep. And the final results of dialogue policy maker denote as $A_t$, and the $k$ is the length of it. In our proposed model, the dialogue policy maker can be viewed as a decoder of the seq2seq model conditioned on $[C_{t-1},R_{t-1},U_t]$ and $k_t$. Generative Dialogue Policy ::: Nature Language Generator After getting the dialogue action $A_t$ by the learned dialogue policy maker, the task-oriented dialogue system needs to generate an appropriate response $R_t$ for users. We construct the natural language generator by using template sentences. 
For each dataset, we extract all the system responses, then we manually modify responses to construct the sentence templates for task-oriented dialogue systems. In our proposed model, the sequence of the acts and parameters $A_t$ will be used for searching appropriate template. However, the classification-based baselines use the categories of acts and their corresponding parameters to search the corresponding template. Generative Dialogue Policy ::: Training In supervised learning, because our proposed model is built in a seq2seq way, the standard cross entropy is adopted as our objective function to train dialogue belief tracker and dialogue policy maker. After supervised learning, the dialogue policy can be further updated by using reinforcement learning. In the context of reinforcement learning, the decoder of dialogue policy maker can be viewed as a policy network, denoted as $\pi _{\theta }(y_j)$ for decoding $y_j$, $\theta $ is the parameters of the decoder. Accordingly, the hidden state created by GRU is the corresponding state, and the choice of the current token $y_j$ is an action. Reward function is also very important for reinforcement learning when decoding every token. To encourage our policy maker to generate correct acts and their corresponding parameters, we set the reward function as follows: once the dialogue acts and their parameters are decoded correctly, the reward is 2; otherwise, the reward is -5; only the label of the dialogue act is decoded correctly but parameters is wrong, the reward is 1; $\lambda $ is a decay parameter. More details are shown in Sec SECREF41. In our proposed model, rewards can only be obtained at the end of decoding $A_t$. In order to get the rewards at each decoding step, we sample some results $A_t$ after choosing $y_j$, and the reward of $y_j$ is set as the average of all the sampled results' rewards. In order to ensure that the model's performance is stable during the fine-tuning phase of reinforcement learning, we freeze the parameters of user utterance and dialogue belief tracker, only the parameters of the dialogue policy maker will be optimized by reinforcement learning. Policy gradient algorithm REINFORCE BIBREF18 is used for pretrained dialogue policy maker: where the $m$ is the length of the decoded action. The objective function $J$ can be optimized by gradient descent. Experiments We evaluate the performance of the proposed model in three aspects: (1) the accuracy of the dialogue state tracker, it aims to show the impact of the dialogue state tracker on the dialogue policy maker; (2) the accuracy of dialogue policy maker, it aims to explain the performance of different methods of constructing dialogue policy; (3) the quality of the final response, it aims to explain the impact of the dialogue policy on the final dialogue response. The evaluation metrics are listed as follows: BPRA: Belief Per-Response Accuracy (BPRA) tests the ability to generate the correct user intents during the dialogue. This metric is used to evaluate the accuracy of dialogue belief tracker BIBREF1. APRA: Action Per-Response Accuracy (APRA) evaluates the per-turn accuracy of the dialogue actions generated by dialogue policy maker. For baselines, APRA evaluates the classification accuracy of the dialogue policy maker. But our model actually generates each individual token of actions, and we consider a prediction to be correct only if every token of the model output matches the corresponding token in the ground truth. 
BLEU BIBREF19: The metric evaluates the quality of the final response generated by natural language generator. The metric is usually used to measure the performance of the task-oriented dialogue system. We also choose the following metrics to evaluate the efficiency of training the model: $\mathbf {Time_{full}}$: The time for training the whole model, which is important for industry settings. $\mathbf {Time_{DP}}$: The time for training the dialogue policy maker in a task-oriented dialogue system. Experiments ::: Datasets We adopt the DSTC2 BIBREF20 dataset and Maluuba BIBREF21 dataset to evaluate our proposed model. Both of them are the benchmark datasets for building the task-oriented dialog systems. Specifically, the DSTC2 is a human-machine dataset in the single domain of restaurant searching. The Maluuba is a very complex human-human dataset in travel booking domain which contains more slots and values than DSTC2. Detailed slot information in each dataset is shown in Table TABREF34. Experiments ::: Baselines For comparison, we choose two state-of-the-art baselines and their variants. E2ECM BIBREF11: In dialogue policy maker, it adopts a classic classification for skeletal sentence template. In our implement, we construct multiple binary classifications for each act to search the sentence template according to the work proposed by BIBREF11. CDM BIBREF10: This approach designs a group of classifications (two multi-class classifications and some binary classifications) to model the dialogue policy. E2ECM+RL: It fine tunes the classification parameters of the dialogue policy by REINFORCE BIBREF18. CDM+RL: It fine tunes the classification of the act and corresponding parameters by REINFORCE BIBREF18. In order to verify the performance of the dialogue policy maker, the utterance encoder and dialogue belief tracker of our proposed model and baselines is the same, only dialogue policy maker is different. Experiments ::: Parameters settings For all models, the hidden size of dialogue belief tracker and utterance encoder is 350, and the embedding size $d_{emb}$ is set to 300. For our proposed model, the hidden size of decoder in dialogue policy maker is 150. The vocabulary size $|V|$ is 540 for DSTC2 and 4712 for Maluuba. And the size of $k_t$ is set to 20. An Adam optimizer BIBREF22 is used for training our models and baselines, with a learning rate of 0.001 for supervised training and 0.0001 for reinforcement learning. In reinforcement learning, the decay parameter $\lambda $ is set to 0.8. The weight decay is set to 0.001. And early stopping is performed on developing set. Experiments ::: Experimental Results The experimental results of the proposed model and baselines will be analyzed from the following aspects. BPRA Results: As shown in Table TABREF35, most of the models have similar performance on BPRA on these two datasets, which can guarantee a consistent impact on the dialogue policy maker. All the models perform very well in BPRA on DSTC2 dataset. On Maluuba dataset, the BPRA decreases because of the complex domains. We can notice that BPRA of CDM is slightly worse than other models on Maluuba dataset, the reason is that the CDM's dialogue policy maker contains lots of classifications and has the bigger loss than other models because of complex domains, which affects the training of the dialogue belief tracker. APRA Results: Compared with baselines, GDP achieves the best performance in APRA on two datasets. It can be noted that we do not compare with the E2ECM baseline in APRA. 
E2ECM only uses a simple classifier to recognize the label of the acts and ignores the parameters information. In our experiment, APRA of E2ECM is slightly better than our method. Considering the lack of parameters of the acts, it's unfair for our GDP method. Furthermore, the CDM baseline considers the parameters of the act. But GDP is far better than CDM in supervised learning and reinforcement learning. BLEU Results: GDP significantly outperforms the baselines on BLEU. As mentioned above, E2ECM is actually slightly better than GDP in APRA. But in fact, we can find that the language quality of the response generated by GDP is still better than E2ECM, which proves that lack of enough parameters information makes it difficult to find the appropriate sentence template in NLG. It can be found that the BLEU of all models is very poor on Maluuba dataset. The reason is that Maluuba is a human-human task-oriented dialogue dataset, the utterances are very flexible, the natural language generator for all methods is difficult to generate an accurate utterance based on the context. And DSTC2 is a human-machine dialog dataset. The response is very regular so the effectiveness of NLG will be better than that of Maluuba. But from the results, the GDP is still better than the baselines on Maluuba dataset, which also verifies that our proposed method is more accurate in modeling dialogue policy on complex domains than the classification-based methods. Time and Model Size: In order to obtain more accurate and complete dialogue policy for task-oriented dialogue systems, the proposed model has more parameters on the dialogue policy maker than baselines. As shown in Figure FIGREF44, E2ECM has the minimal dialogue policy parameters because of the simple classification. It needs minimum training time, but the performance of E2ECM is bad. The number of parameters in the CDM model is slightly larger than E2ECM. However, because both of them are classification methods, they all lose some important information about dialogue policy. Therefore, we can see from the experimental results that the quality of CDM's dialogue policy is as bad as E2ECM. The number of dialogue policy maker's parameters in GDP model is much larger than baselines. Although the proposed model need more time to be optimized by supervised learning and reinforcement learning, the performance is much better than all baselines. Experiments ::: Case Study Table TABREF43 illustrates an example of our proposed model and baselines on DSTC2 dataset. In this example, a user's goal is to find a cheap restaurant in the east part of the town. In the current turn, the user wants to get the address of the restaurant. E2ECM chooses the inform and offer acts accurately, but the lack of the inform's parameters makes the final output deviate from the user's goal. CDM generates the parameters of offer successfully, but the lack of the information of inform also leads to a bad result. By contrast, the proposed model GDP can generate all the acts and their corresponding parameters as the dialogue action. Interestingly, the final result of GDP is exactly the same as the ground truth, which verifies that the proposed model is better than the state-of-the-art baselines. Conclusion In this paper, we propose a novel model named GDP. Our proposed model treats the dialogue policy modeling as the generative task instead of the discriminative task which can hold more information for dialogue policy modeling. 
We evaluate GDP on two benchmark task-oriented dialogue datasets. Extensive experiments show that GDP outperforms existing classification-based methods in both action accuracy and BLEU.
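For reference, the sentence-level BLEU score used in the evaluation above can be sketched in plain Python. This is only an illustrative implementation (the function names and the tiny smoothing constant are mine, not the paper's evaluation script); standard toolkits provide equivalent routines.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of n-grams (as tuples) for a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(reference, hypothesis, max_n=4):
    """Sentence-level BLEU with uniform n-gram weights and a brevity penalty.

    `reference` and `hypothesis` are token lists. A small additive smoothing
    term (1e-9) avoids log(0) for very short generated responses.
    """
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = ngrams(hypothesis, n)
        ref_ngrams = ngrams(reference, n)
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped counts
        total = max(sum(hyp_ngrams.values()), 1)
        precisions.append((overlap + 1e-9) / total)

    # Brevity penalty discourages overly short generated responses.
    ref_len, hyp_len = len(reference), len(hypothesis)
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / max(hyp_len, 1))

    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

# Example: comparing a generated response against a ground-truth utterance.
print(sentence_bleu("the phone number is 01223 323737".split(),
                    "the phone number of the restaurant is 01223 323737".split()))
```

A corpus-level variant aggregates the clipped n-gram counts over all generated responses before forming the precision ratios.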
DSTC2, Maluuba
028910d643c103abd90045ccb07ee8adc5a3e177
028910d643c103abd90045ccb07ee8adc5a3e177_0
Q: What languages are the model evaluated on? Text: Introduction Recently, hierarchical architectures have become ubiquitous in NLP. They have been applied to a wide variety of tasks such as language modeling and generation BIBREF0, BIBREF1, neural machine translation (NMT) BIBREF2, summarization BIBREF3, sentiment and topic classification BIBREF4, BIBREF5, and spoken language understanding BIBREF6, BIBREF7, to cite only a few examples. All hierarchical architectures capitalize on the same intuitive idea that the representation of the input text should be learned in a bottom-up fashion by using a different encoder at each granularity level (e.g., words, sentences, paragraphs), where the encoder at level $l+1$ takes as input the output of the encoder at level $l$. One of the earliest and most influential examples is the Hierarchical Attention Network (HAN) of BIBREF5 (see Fig. FIGREF6 and section SECREF2). It is a two-level architecture, where at level 1, each sentence in the document is separately encoded by the same sentence encoder, resulting in a sequence of sentence vectors. That sequence is then processed at level 2 by the document encoder which returns a single vector representing the entire document. The sentence and document encoders are both self-attentional bidirectional Recurrent Neural Networks (RNNs), with different parameters. Introduction ::: Observed problem HAN was highly successful and established new state of the art on six large-scale sentiment and topic classification datasets. However, it has a major weakness: at level 1, each sentence is encoded in isolation. That is, while producing the representation of a given sentence in the document, HAN completely ignores the other sentences. This lack of communication is obviously suboptimal. For example, in Fig. FIGREF2, the same highly negative feature (“terrible value”) has been repeated at the beginning of each sentence in the document. Because it encodes each sentence independently, HAN has no choice but to spend most of its attentional budget on the most salient feature every time. As a result, HAN neglects the other aspects of the document. On the other hand, CAHAN is informed about the context, and thus quickly stops spending attention weight on the same highly negative pattern, knowing that is has already been covered. CAHAN is then able to cover the other topics in the document (“seafood”,“scallops” and “mussels”; “entree” and “appetizer”; triple negation in the fourth sentence). As another example, consider the edge case of a document containing the same sentence repeated several times, as shown in Fig. FIGREF3. With HAN, the exact same embedding is produced for each instantiation of the sentence, as a result of the context-blind self-attention mechanism always making the same alignment decisions. However, the context-aware sentence encoder of CAHAN allows it to extract complementary, rather than redundant information, from each instantiation of the sentence. This results in better coverage (“reasonably priced”, “arrived late”), in a richer document representation, and ultimately in a more accurate prediction (positive instead of very positive). One may argue that in basic HAN, the document encoder at level 2 already does capture some notion of context, by assigning importance scores to sentences. However, at level 2, the sentence vectors have already been formed, and it is too late to modify them. Since the document encoder can only rank the sentence representations, it cannot address issues like high redundancy. 
In that case, important subtopics or details in the document will not be covered, no matter sentence scores. Introduction ::: Context-aware HAN In this work, we propose and evaluate several modifications of the HAN architecture that allow the sentence encoder at level 1 to make its attentional decisions based on contextual information, allowing it to learn richer document representations. Another significant contribution is the introduction of a bidirectional version of the document encoder, where one RNN processes the document forwards, using the preceding sentences as context, and another one processes it backwards, using the following sentences as context. The remainder of this paper is structured as follows. We start by formally introducing basic HAN (section SECREF2), we then explain our contributions (section SECREF3), and detail our experimental setup (section SECREF4). Finally, we interpret our results and list areas of future development (sections SECREF5 and SECREF7). Related work is reviewed in section SECREF6. HAN The baseline HAN model as introduced by BIBREF5 is shown in Fig. FIGREF6 along with our modifications (disregard the bold lines for the baseline). The sentence and document encoders, used respectively at level 1 and level 2, have different parameters but share the exact same architecture. Thus, in what follows, we only describe the sentence encoder in detail. HAN ::: Notation Next, we use boldface upper case for tensors, upper case for matrices, boldface lower case for vectors, and lower case for scalars. We define a document $\mathbf {X} \in \mathbb {R}^{N \times T_i \times d}$ as a sequence of $N$ sentences $(S_1, \dots , S_N)$. Each sentence $S_i$ is a sequence of $T_i$ $d$-dimensional word vectors $(\mathbf {x}_{i1}, \dots , \mathbf {x}_{iT_i}) \in \mathbb {R}^{T_i \times d}$. HAN ::: Sentence encoder First, the sentence-level bidirectional RNN $f_s$ processes the input sentence $S_i$ and returns a sequence of $T_i$ $2d_s$-dimensional hidden states $(\mathbf {h}_{i1},\dots , \mathbf {h}_{iT_i}) \in \mathbb {R}^{T_i \times 2d_s}$. $f_s$ is composed of two non-stacking RNNs $\overrightarrow{f_s}$ and $\overleftarrow{f_s}$ with Gated Recurrent Units BIBREF8, respectively parsing $S_i$ from left to right and right to left: $\overrightarrow{f_s}$ and $\overleftarrow{f_s}$ have the same hidden layer dimensionality $d_s$, but different parameters. At each time step $t$, the word annotations they return are concatenated, producing $2d_s$-dimensional annotations that summarize the immediate context surrounding each word: Then, a self-attention mechanism computes the representation $\mathbf {s}_i$ of sentence $S_i$ as a weighted sum of its word annotations: Where the vector of attentional coefficients $\mathbf {\alpha }$ is a softmax-normalized version of the alignment vector $\mathbf {e}$, which itself is obtained by passing the word annotations through a dense layer (parameterized by $W_s \in \mathbb {R}^{2d_s\times 2d_s}$) and comparing the output with a trainable vector $\mathbf {u}_s \in \mathbb {R}^{2d_s}$: $\mathbf {u}_s$ is initialized randomly. It can be interpreted as a “super-word” whose vector contains the ideal combination of latent topics, on average. The closest the annotation of a word is to this ideal representation, the more attention that word will be given. The sentence encoder is applied to all sentences in document $\mathbf {X}$, producing a sequence of $N$ sentence vectors $(\mathbf {s_1},\dots ,\mathbf {s_N}) \in \mathbb {R}^{N\times 2d_s}$. 
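As a concrete illustration of the attention pooling just described, here is a minimal NumPy sketch. It assumes the bidirectional GRU has already produced the word annotations, omits the bias term, and uses made-up dimensions; it is not the authors' implementation.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def attention_pool(H, W_s, u_s):
    """Self-attentive pooling over word annotations, as in the HAN sentence encoder.

    H   : (T_i, 2*d_s) word annotations from the bidirectional GRU
    W_s : (2*d_s, 2*d_s) dense-layer weights
    u_s : (2*d_s,) trainable context ("super-word") vector

    Returns the sentence vector s_i (2*d_s,) and the attention weights alpha (T_i,).
    """
    e = np.tanh(H @ W_s) @ u_s      # alignment score for each word
    alpha = softmax(e)              # attentional coefficients
    s = alpha @ H                   # weighted sum of word annotations
    return s, alpha

# Toy usage with random annotations for a 7-word sentence, d_s = 50.
rng = np.random.default_rng(0)
T, two_ds = 7, 100
H = rng.normal(size=(T, two_ds))
W_s = rng.normal(scale=0.1, size=(two_ds, two_ds))
u_s = rng.normal(size=two_ds)
s_i, alpha = attention_pool(H, W_s, u_s)
print(s_i.shape, alpha.sum())       # (100,) and weights sum to 1
```

The document encoder described next applies the same pattern, with its own parameters, to the sequence of sentence vectors.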
HAN ::: Document encoder The document encoder is a self-attentional bidirectional GRU-RNN, like the sentence encoder, but it has different parameters. The dimensionality of its hidden states is $2d_d$. The document encoder is applied only once, to the sequence of sentence vectors, to produce the sequence of sentence annotations $(\mathbf {h}_{1}, \dots , \mathbf {h}_{N})$. Then, a self-attention layer outputs the final document vector. Proposed architecture: CAHAN As was previously explained, each sentence is encoded independently by HAN, without considering any kind of contextual information. To solve this issue, we inject a context vector $\mathbf {c_i}$ into the self-attention mechanism, to guide the model during the computation of the word alignment coefficients. In effect, Eq. DISPLAY_FORM12 becomes: We propose two approaches for computing $\mathbf {c_i}$, namely CAHAN-SUM and CAHAN-RNN, shown as the two bolded connections in Fig. FIGREF6. Proposed architecture: CAHAN ::: Summed context (CAHAN-SUM) We introduce two settings, (1) left-to-right and bidirectional. Whenever there is no preceding/following sentence, i.e., at the beginning/end of a document, the context vector is initialized with zeroes. Proposed architecture: CAHAN ::: Summed context (CAHAN-SUM) ::: Left-to-right (LR) In the LR case, the context vector is computed as the sum of the preceding sentence representations: Proposed architecture: CAHAN ::: Summed context (CAHAN-SUM) ::: Bidirectional (BI) In the BI case, we compute two context vectors, respectively by summing the representations of the sentences preceding and following the current sentence $S_i$. These two vectors are passed to two identical context-aware self-attention mechanisms (Eq. DISPLAY_FORM14) with different parameters. The resulting forward and backward sentence representations are then processed respectively by the forward and backward RNNs of the document encoder at level 2, and the resulting annotations are concatenated to produce the final sentence annotations. CAHAN-SUM was inspired by the coverage vectors of seq2seq architectures, which have been shown very effective in addressing under(over)-translation in NMT BIBREF9, and repetition in summarization BIBREF10. Such coverage vectors are typically computed as the sum, over all previous decoder steps, of the attention distribution over the source tokens. However, in our case, we cannot keep track of the attention distribution history, since sentences are unique and cannot be aligned. This is why we work with sentence representations instead. Proposed architecture: CAHAN ::: Summed context (CAHAN-SUM) ::: Centroid version (@!START@$\mu $@!END@) $\overrightarrow{\mathbf {c}_i}$, as defined by Eq. DISPLAY_FORM17, grows larger in magnitude as $i$ increases (the sum has more and more terms), which can blur the alignment decisions for the sentences at the end of a document (LR case), or both at the end and beginning of a document, when reading forwards and backwards (BI case). Therefore, we also experiment with a centroid, rather than sum, context vector: Proposed architecture: CAHAN ::: Recurrent Context (CAHAN-RNN) Here, we capitalize on the capability of RNNs, especially when equipped with LSTM or GRU units, to keep track of information over long time periods. We simply use as context vector the document encoder annotation at the preceding/following time step. 
That is, we have, in the LR case: By design, $\mathbf {h}_{i-1}$ summarizes the entire history $(\mathbf {s_1},\dots ,\mathbf {s_{i-1}})$ of sentence vectors, with a preference for the most recent time steps. If the sequence is very long though, even a GRU-RNN will eventually forget about the first elements. However, for the relatively short documents we experiment with (see Table TABREF29), we can assume the annotations of the document encoder to faithfully represent the entire sequence. Proposed architecture: CAHAN ::: Gated context In NMT, BIBREF11 introduced a gating mechanism to allow the decoder to balance the contribution of the source and target information in generating the next word. The same idea can be found in numerous other NMT studies, e.g., BIBREF2, BIBREF12, BIBREF13. Inspired by this line of research, we propose a modification of Eq. DISPLAY_FORM14 to let our model explicitly decide how much contextual information it should take into account in making its alignment decisions: $\mathbf {\lambda }$ is produced by a trainable mechanism taking as input the word annotations and the context vector: The sigmoid activation ensures that $\mathbf {\lambda }$ plays a filtering role, by squashing all its entries to $[0,1]$. The gate gives more expressiveness to the attention mechanism. Indeed, contextual information should not always be given the same importance, depending on the situation. E.g., when most of the document has been processed, context is likely to be very important, in order to limit redundancy and increase coverage. However, at the beginning of a document, or in the case of a very short or focused sentence, context might not be useful as only one single topic might be extractable from the sentence anyways. From an optimization perspective, $\mathbf {\lambda }$ also has the desirable effect of regulating the magnitude of the context vector, preventing it from pushing the tanh to regions of very small gradient. This is especially useful with CAHAN-SUM, as in that case, $\mathbf {c}_i$ gets large towards the end/beginning of documents (forwards/backwards reading). Proposed architecture: CAHAN ::: Complexity and sequentiality Assuming that $d \sim 2d_s$ and that $d_s \sim d_d$, which holds in practice under reasonable settings, all matrix multiplications in the network have similar complexity, of order of magnitude $\mathcal {O}(d^2)$. Moreover, since we use GRU-RNNs, there are 6 matrix multiplication per encoder. This number is doubled, as we use bidirectional RNNs. Finally, the two self-attention mechanisms, one at each level, add two multiplications. Therefore, in the HAN baseline architecture, there are a total of 26 matrix multiplications (13 at each level). To that, CAHAN-SUM and CAHAN-RNN simply add one matrix multiplication ($W_c\mathbf {c}_i$ in Eq. DISPLAY_FORM14) in the LR case and two in the BI case. This corresponds to negligible 4% and 8% increases in total computational cost. On top of that, gating adds two multiplications in the LR case ($W_{\lambda _1}\mathbf {h}_{it}$ and $W_{\lambda _2}\mathbf {c}_i$ in Eq. DISPLAY_FORM25) and four in the BI case. All in all, this represents three and six extra multiplications compared to basic HAN, resp. in the LR and BI cases. Again, this corresponds to small increases in computational cost, of 11.5% and 23%, respectively. However, with CAHAN-SUM, the representations of the preceding/following sentences are now required before computing the current sentence representation. 
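To make the preceding CAHAN-SUM and gated-context descriptions concrete, the NumPy sketch below shows one plausible reading of them. The gate's exact placement inside the alignment score is an assumption on my part; only the gate's inputs and its sigmoid activation are specified above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def cahan_sum_context(sentence_vectors, i, centroid=False):
    """Left-to-right CAHAN-SUM context for sentence i: sum (or centroid) of the
    preceding sentence representations; zeros at the start of a document."""
    dim = sentence_vectors.shape[1]
    if i == 0:
        return np.zeros(dim)
    c = sentence_vectors[:i].sum(axis=0)
    return c / i if centroid else c

def context_aware_attention(H, c, W_s, W_c, u_s, W_l1, W_l2):
    """Context-aware, gated self-attention over word annotations H (T, 2*d_s).

    The gate lambda (one entry per word and hidden dimension, squashed to [0, 1])
    filters how much of the context vector c enters the alignment scores. The
    combination below is a sketch, not the authors' exact equation.
    """
    lam = sigmoid(H @ W_l1 + c @ W_l2)              # (T, 2*d_s) filtering gate
    e = np.tanh(H @ W_s + lam * (c @ W_c)) @ u_s    # alignment score per word
    alpha = softmax(e)
    return alpha @ H                                # context-aware sentence vector

# Toy usage: 3 sentences already encoded (2*d_s = 100), re-encoding sentence i=2.
rng = np.random.default_rng(1)
d = 100
S = rng.normal(size=(3, d))                  # previously computed sentence vectors
H = rng.normal(size=(8, d))                  # word annotations of the current sentence
W_s, W_c, W_l1, W_l2 = (rng.normal(scale=0.1, size=(d, d)) for _ in range(4))
u_s = rng.normal(size=d)
c = cahan_sum_context(S, i=2)                # context from sentences 0 and 1
s_i = context_aware_attention(H, c, W_s, W_c, u_s, W_l1, W_l2)
print(s_i.shape)                             # (100,)
```

In the bidirectional setting, the same routine is run twice with separate parameters, once with the sum of the preceding sentence vectors and once with the sum of the following ones. Note that, as stated above, this computation cannot start for sentence $i$ until the required neighbouring sentence representations are available.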
With CAHAN-RNN, one even has to wait until the level 2 RNN has processed the preceding/following sentence vectors before being able to encode the current sentence. Therefore, the sentence encoding process, which was parallelizable with basic HAN due to independence, has now become a sequential process. This is why in practice, we observe slightly greater runtime increases, in the range 5-22% (see Table TABREF43). Experimental setup ::: Datasets We evaluate the quality of the document embeddings learned by the different variants of CAHAN and the HAN baseline on three of the large-scale document classification datasets introduced by BIBREF14 and used in the original HAN paper BIBREF5. They fall into two categories: topic classification (Yahoo) and fine-grained sentiment analysis (Amazon, Yelp). Dataset statistics are shown in Table TABREF29. Classes are perfectly balanced, for all datasets. Experimental setup ::: Model configuration This subsection describes the preprocessing and hyperparameter setting we used. Experimental setup ::: Model configuration ::: Preprocessing and word embeddings For preprocessing (and the HAN baseline), we used the publicly available implementation of BIBREF15, which closely follows the description and details given in the original HAN paper BIBREF5. More precisely, on each dataset, we randomly split the training set into training (90%) and validation (10%). Documents are then tokenized into sentences and sentences are tokenized into tokens. The tokens appearing less than 5 times in the corpus are replaced with a special UNK token. Finally, we pre-train our own word vectors with word2vec BIBREF16 on the training and validation splits. Experimental setup ::: Model configuration ::: Hyperparameters We do not tune any hyperparameter except the learning rate (see subsection SECREF35). We set the hidden layer dimensionality of the two RNN encoders to $d_s=50$ and $d_d=50$. Thus, the word annotations, sentence vectors, sentence annotations and document vector all have size 100. With regularization in mind, we set the dimensionality of the word embeddings to $d=200$ on the very large datasets (Amazon and Yahoo!) and to $d=100$ on Yelp, as shown in Table TABREF29. We also use a greater batch size of 128 on the large datasets, versus 64 on Yelp. Experimental setup ::: Training details We zero-pad sentences and documents. Like in BIBREF5, to make the most out of each batch, we ensure they are as dense as possible by using a bucketing strategy. More precisely, we build each batch so that it contains documents of approximately the same size, in number of sentences. For regularization, we use dropout BIBREF17 with a rate of 0.5 at each layer. For classification, the document vectors are passed to a dense layer with softmax activation, whose dimensionality is equal to the number of categories to be predicted. Initialization has a significant impact on performance. To make sure the differences we measure are due to differences in the models and not in initial condition, we use the same initialization weights for each model. Experimental setup ::: Training details ::: SGD with cyclical learning rate To minimize the categorical cross-entropy loss, we use the stochastic gradient descent optimizer with a triangular cyclical learning rate schedule and opposite triangular momentum schedule BIBREF18, BIBREF19. 
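A rough sketch of such a triangular schedule is given below (the numeric bounds shown are only the momentum range and the learning-rate search range quoted just after this, not tuned values, and the helper name is my own):

```python
def triangular_schedule(iteration, step_size, lr_bounds=(0.001, 3.0), mom_bounds=(0.85, 0.95)):
    """Triangular cyclical learning rate with an opposite triangular momentum.

    `step_size` is the number of iterations in a half-cycle. The learning rate
    rises linearly from lr_bounds[0] to lr_bounds[1] during the first half of
    each cycle and falls back during the second half; momentum does the reverse.
    """
    cycle_pos = iteration % (2 * step_size)
    # fraction in [0, 1]: 0 at the start/end of a cycle, 1 at mid-cycle
    x = 1.0 - abs(cycle_pos / step_size - 1.0)
    lr = lr_bounds[0] + (lr_bounds[1] - lr_bounds[0]) * x
    momentum = mom_bounds[1] - (mom_bounds[1] - mom_bounds[0]) * x
    return lr, momentum

# Example: sample the schedule over one cycle of 1200 iterations (step_size = 600).
for it in (0, 300, 600, 900, 1200):
    print(it, triangular_schedule(it, step_size=600))
```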
Following the authors' recommendations, we use a fixed $[0.85,0.95]$ momentum range, while for the learning rate, we perform a range test on the validation set, for each model, searching the $[0.001,3]$ range. With a triangular schedule, the learning rate linearly increases for a certain number of iterations (half-cycle), and then linearly decreases back to its initial value during the second half of the cycle. Cycles are repeated until training ends. High learning rate values make training faster, by allowing large updates and the use of greater batch sizes while keeping the amount of regularization constant. Also, the cyclical schedule injects beneficial stochastic noise to the gradient updates, which improves generalization BIBREF20. We use cycles of 12 epochs, and an early stopping strategy, monitoring the test loss, with a patience of slightly more than one cycle. We set the maximum number of epochs for all models to 50. Results As can be seen in Table TABREF37, the best version of CAHAN (SUM-BI-$\Sigma $) consistently outperforms the HAN baseline, which shows that taking contextual information into account helps producing better document representations. Also, the two unidirectional variants (LR) slightly underperform the baseline and are clearly inferior to BI, which illustrates the value added by processing the document forwards and backwards, using preceding and following sentences as context. Results ::: Summing vs. averaging In the unidirectional case, it is surprising to note that CAHAN-SUM-LR-$\mu $ is slightly better than CAHAN-SUM-LR-$\Sigma $, i.e., the centroid-based context vector (Eq. DISPLAY_FORM20) is better than the sum-based one (Eq. DISPLAY_FORM17). Indeed, from an information theory standpoint, it should be the opposite, as summing keeps track of all information whereas averaging is lossy. We hypothesize that towards the end of a document, the sum-based context vector grows large in magnitude, which perturbs the alignment decisions and deteriorates the quality of the sentence vectors. On the other hand, the centroid-based vector, which has constant magnitude, does not suffer from this issue. We further hypothesize that this issue is attenuated in the bidirectional case (CAHAN-SUM-BI-$\mu $ and CAHAN-SUM-BI-$\Sigma $ are on par) due to a counterbalancing phenomenon. Indeed, the last sentences processed by the left-to-right encoder are the first ones processed by the right-to-left encoder. Therefore, through concatenation, the overall quality of the sentence embeddings stays constant. Results ::: Gating As expected, gating improves performance, especially for the $\Sigma $ variants of CAHAN-SUM (and especially the LR ones). To be noted are significant boosts of 0.45 and 0.24 in accuracy respectively for CAHAN-SUM-LR-$\Sigma $ and CAHAN-SUM-BI-$\Sigma $ on Yelp. On Amazon, gating also offers CAHAN-SUM-LR-$\Sigma $ a nice 0.27 improvement. These positive results give a clue that regulating the magnitude of the context vector $\mathbf {c}_i$ is indeed beneficial. Nevertheless, gating also improves the performance of the $\mu $ variants of CAHAN, which do not suffer from the context vector magnitude issue. This shows that gating is also helpful via giving more expressiveness to the model. For instance, on Amazon, gating boosts the performance of CAHAN-SUM-BI-$\mu $ by 0.12. It is interesting to note that overall, gating is mostly effective on Yelp and Amazon. We attribute this to the difference in task. 
Sentiment analysis may rely more on contextual information than topic classification. Results ::: CAHAN-RNN-BI The consistently bad performance of CAHAN-RNN-BI is to be noted. This was unexpected, as an equivalent approach was used by BIBREF6 for dialogue act classification, with significant improvements. We hypothesize that in our case, CAHAN-RNN-BI is not effective because, unlike utterances in a speech transcription, sentences in a document are not ordered in a temporal fashion. In other words, sentences far away from the current sentence are not necessarily less relevant than closer sentences. Thus, considering each sentence equally is better than imposing an implicit time-decay via a RNN. Results ::: Runtimes We compare the average runtime per iteration of some variants of CAHAN to that of basic HAN in Table TABREF43. For CAHAN-SUM-$\Sigma $, we observe that the unidirectional variant (LR) is 5.7% slower than basic HAN (37 vs. 35ms per iteration), whereas the bidirectional variant (BI) is 23% slower (43 vs. 35 ms). When gating, these number increase to 14.3% and 37% (40 and 48ms vs. 35ms). These differences are not far from our theoretical expectations (see subsection SECREF26), especially for LR. Indeed, recall that based on matrix multiplication counts, we had forecasted increases of 4% and 8% (11.5% and 23% when using gating), respectively for LR and BI. The gap for BI can be explained by a probable bottleneck in the implementation. CAHAN-RNN adds the same number of matrix multiplications as CAHAN-SUM, so we should in principle observe the same increases. However, as was explained in subsection SECREF26, with CAHAN-RNN we have to wait until the level 2 RNN has processed the preceding or preceding/following sentence vectors (LR or BI case) before being able to encode the current sentence. This explains the extra-time needed (40 vs. 37ms and 49 vs. 43ms). Related work In what follows, we provide a review of the relevant literature. One should note that by context, in this paper, we do not refer to the intra-sentence or internal context vector of seq2seq encoders BIBREF21, BIBREF11, BIBREF13. Rather, we refer to the cross-sentence, external, or document-level context. A few studies only have focused on developing models that take that type of context into account. Most of these studies originate from NMT. We briefly describe them next. BIBREF2 obtain a global context vector by feeding a fixed number of the previous source sentences to HAN. They then compare two ways of injecting it into the encoder-decoder model. First, they propose a warm-start approach, in which the encoder and/or decoder hidden states are initialized with the context vector. Second, they experiment with an auxiliary strategy in which the intra-sentence context vector of the encoder is concatenated with the global context vector and passed either (i) directly to the decoder, or (ii) after going through a filtering gate. However, unlike our mechanism and that of BIBREF11, BIBREF12, BIBREF13, which all feature two coupled gates, the mechanism of BIBREF2 has only one gate. All strategies proposed by BIBREF2 significantly improve performance, but first place is reached by a combination of the warm-start and gated techniques. BIBREF22 use an approach similar to the auxiliary approach of BIBREF2, but they compute the context vector only from the sentence immediately preceding the current source sentence. They then pass it to a dedicated encoder featuring a customized attention mechanism. 
BIBREF12 and BIBREF23 both extend the Transformer architecture BIBREF24 with a context encoder featuring self-attentional and feed-forward layers. Then, BIBREF12 combine the context representation with the source representation produced by the basic Transformer encoder via a gating mechanism. They do not modify the decoder part of the Transformer. BIBREF23 go one step further by passing the contextual information both to the encoder and the decoder. In both cases, they add a self-attention mechanism over the context representation. For the decoder though, they also replace the residual connection after the context self-attention with a gating mechanism, to limit the influence of the context information on the source information. One piece of work closely related to our study is BIBREF3. The authors also use a hierarchical attention architecture, where at level 1, each paragraph of a document is encoded by a dedicated encoder. All encoders share the same stacking bi-RNN architecture. Moreover, they communicate at each layer to produce context-aware annotations of the words in their paragraphs. More precisely, at a given layer of the stacking RNN, a given encoder is passed the average of the representations learned by the other encoders at the corresponding layer (like with CAHAN-SUM-$\mu $). This context vector is then combined with the hidden states and passed as input to the upper RNN layer. At level 2, the top RNN layer annotations are passed to a word attention mechanism followed by a paragraph attention mechanism. A major difference with our work is that the authors combine the encoder with a decoder, to perform abstractive summarization of long documents, whereas we only focus on the encoding part. The word and paragraph attentional decisions at level 2 are thus made by the decoder. Another significant difference is that the authors use reinforcement learning for training, instead of SGD. Context-aware models have also been proposed in other NLP domains. E.g., for spoken language understanding, BIBREF7 prepend and append the current utterance with two special word vectors respectively summarizing the $C$ preceding and following utterances (respectively), where $C$ is a hyperparameter. This indirectly initializes the hidden states of the left-to-right and right-to-left components of a bidirectional RNN, like with the warm-start approach of BIBREF2. On the other hand, BIBREF6 rely on a mechanism equivalent to LR-CAHAN-RNN. They find that it significantly boosts dialogue act classification accuracy. As discussed in section SECREF5, we hypothesize that CAHAN-RNN is not effective in our application because sentences in a document are not ordered in a temporal manner. Discussion and next steps While bidirectional CAHAN-SUM systematically outperforms HAN, margins are modest. We attribute this to the fact that the datasets used in our experiments contain short documents (see Table TABREF29) featuring simple sentences. Thus, the superior expressiveness of CAHAN is not able to show. To address this issue, we plan in future work to experiment on datasets featuring long documents containing complex sentences. Moreover, the tasks of sentiment and topic classification do not require a deep understanding of the input documents. 
Even if a given document contains some complex sentences with multiple clauses and subtopics, capturing the polarity of only one simple, unambiguous sentence or pattern may be enough to accurately predict the category of the entire document (e.g., “hated”, “loved”, “definitely recommends”, “will never come back”, etc.). Thus, we hypothesize that when trained to solve such tasks, CAHAN does not learn to use its context-aware capabilities to the fullest extent. One solution, and promising area of future work, would consist in explicitly giving CAHAN knowledge about coverage, diversity, and redundancy. This could be done by modifying the sentence attention mechanism and/or by adding a term to the loss. Another natural next step is to experiment on tasks requiring a deeper understanding of text, such as end-to-end abstractive summarization. Some other ideas for improvement include combining CAHAN-SUM with CAHAN-RNN, and/or the mean and centroid vectors; for CAHAN-SUM, obtaining the centroid vector through a trainable mechanism rather than via pooling; and experimenting with a trainable matrix (instead of vector) in the self-attention at both level 1 and level 2, like in BIBREF25. Finally, the context vector could be seen as an external, general summary of the document, and be pre-computed offline by a dedicated encoder. Conclusion In this paper, we proposed several modifications of the HAN architecture that make the sentence encoder context-aware (CAHAN). Results show that taking context into account is beneficial. Specifically, the bidirectional version of the document encoder, that processes the documents forwards and backwards, using the preceding and following sentences as context, outperforms the HAN baseline on all datasets and is superior to the undirectional variant. Moreover, the computational overhead is small. Experiments on tasks requiring a deeper understanding of the input documents should better highlight the superiority of CAHAN. Acknowledgments We thank Xiang Zhang for sharing the datasets. We are grateful to the NVidia corporation for the donation of a GPU as part of their GPU grant program. This research was supported in part by the open-source project LinTo.
Unanswerable
975085e3b6679cc644fdd6ad11b7c2d1261a2dc6
975085e3b6679cc644fdd6ad11b7c2d1261a2dc6_0
Q: Do they compare to other models appart from HAN? Text: Introduction Recently, hierarchical architectures have become ubiquitous in NLP. They have been applied to a wide variety of tasks such as language modeling and generation BIBREF0, BIBREF1, neural machine translation (NMT) BIBREF2, summarization BIBREF3, sentiment and topic classification BIBREF4, BIBREF5, and spoken language understanding BIBREF6, BIBREF7, to cite only a few examples. All hierarchical architectures capitalize on the same intuitive idea that the representation of the input text should be learned in a bottom-up fashion by using a different encoder at each granularity level (e.g., words, sentences, paragraphs), where the encoder at level $l+1$ takes as input the output of the encoder at level $l$. One of the earliest and most influential examples is the Hierarchical Attention Network (HAN) of BIBREF5 (see Fig. FIGREF6 and section SECREF2). It is a two-level architecture, where at level 1, each sentence in the document is separately encoded by the same sentence encoder, resulting in a sequence of sentence vectors. That sequence is then processed at level 2 by the document encoder which returns a single vector representing the entire document. The sentence and document encoders are both self-attentional bidirectional Recurrent Neural Networks (RNNs), with different parameters. Introduction ::: Observed problem HAN was highly successful and established new state of the art on six large-scale sentiment and topic classification datasets. However, it has a major weakness: at level 1, each sentence is encoded in isolation. That is, while producing the representation of a given sentence in the document, HAN completely ignores the other sentences. This lack of communication is obviously suboptimal. For example, in Fig. FIGREF2, the same highly negative feature (“terrible value”) has been repeated at the beginning of each sentence in the document. Because it encodes each sentence independently, HAN has no choice but to spend most of its attentional budget on the most salient feature every time. As a result, HAN neglects the other aspects of the document. On the other hand, CAHAN is informed about the context, and thus quickly stops spending attention weight on the same highly negative pattern, knowing that is has already been covered. CAHAN is then able to cover the other topics in the document (“seafood”,“scallops” and “mussels”; “entree” and “appetizer”; triple negation in the fourth sentence). As another example, consider the edge case of a document containing the same sentence repeated several times, as shown in Fig. FIGREF3. With HAN, the exact same embedding is produced for each instantiation of the sentence, as a result of the context-blind self-attention mechanism always making the same alignment decisions. However, the context-aware sentence encoder of CAHAN allows it to extract complementary, rather than redundant information, from each instantiation of the sentence. This results in better coverage (“reasonably priced”, “arrived late”), in a richer document representation, and ultimately in a more accurate prediction (positive instead of very positive). One may argue that in basic HAN, the document encoder at level 2 already does capture some notion of context, by assigning importance scores to sentences. However, at level 2, the sentence vectors have already been formed, and it is too late to modify them. 
Since the document encoder can only rank the sentence representations, it cannot address issues like high redundancy. In that case, important subtopics or details in the document will not be covered, no matter sentence scores. Introduction ::: Context-aware HAN In this work, we propose and evaluate several modifications of the HAN architecture that allow the sentence encoder at level 1 to make its attentional decisions based on contextual information, allowing it to learn richer document representations. Another significant contribution is the introduction of a bidirectional version of the document encoder, where one RNN processes the document forwards, using the preceding sentences as context, and another one processes it backwards, using the following sentences as context. The remainder of this paper is structured as follows. We start by formally introducing basic HAN (section SECREF2), we then explain our contributions (section SECREF3), and detail our experimental setup (section SECREF4). Finally, we interpret our results and list areas of future development (sections SECREF5 and SECREF7). Related work is reviewed in section SECREF6. HAN The baseline HAN model as introduced by BIBREF5 is shown in Fig. FIGREF6 along with our modifications (disregard the bold lines for the baseline). The sentence and document encoders, used respectively at level 1 and level 2, have different parameters but share the exact same architecture. Thus, in what follows, we only describe the sentence encoder in detail. HAN ::: Notation Next, we use boldface upper case for tensors, upper case for matrices, boldface lower case for vectors, and lower case for scalars. We define a document $\mathbf {X} \in \mathbb {R}^{N \times T_i \times d}$ as a sequence of $N$ sentences $(S_1, \dots , S_N)$. Each sentence $S_i$ is a sequence of $T_i$ $d$-dimensional word vectors $(\mathbf {x}_{i1}, \dots , \mathbf {x}_{iT_i}) \in \mathbb {R}^{T_i \times d}$. HAN ::: Sentence encoder First, the sentence-level bidirectional RNN $f_s$ processes the input sentence $S_i$ and returns a sequence of $T_i$ $2d_s$-dimensional hidden states $(\mathbf {h}_{i1},\dots , \mathbf {h}_{iT_i}) \in \mathbb {R}^{T_i \times 2d_s}$. $f_s$ is composed of two non-stacking RNNs $\overrightarrow{f_s}$ and $\overleftarrow{f_s}$ with Gated Recurrent Units BIBREF8, respectively parsing $S_i$ from left to right and right to left: $\overrightarrow{f_s}$ and $\overleftarrow{f_s}$ have the same hidden layer dimensionality $d_s$, but different parameters. At each time step $t$, the word annotations they return are concatenated, producing $2d_s$-dimensional annotations that summarize the immediate context surrounding each word: Then, a self-attention mechanism computes the representation $\mathbf {s}_i$ of sentence $S_i$ as a weighted sum of its word annotations: Where the vector of attentional coefficients $\mathbf {\alpha }$ is a softmax-normalized version of the alignment vector $\mathbf {e}$, which itself is obtained by passing the word annotations through a dense layer (parameterized by $W_s \in \mathbb {R}^{2d_s\times 2d_s}$) and comparing the output with a trainable vector $\mathbf {u}_s \in \mathbb {R}^{2d_s}$: $\mathbf {u}_s$ is initialized randomly. It can be interpreted as a “super-word” whose vector contains the ideal combination of latent topics, on average. The closest the annotation of a word is to this ideal representation, the more attention that word will be given. 
The sentence encoder is applied to all sentences in document $\mathbf {X}$, producing a sequence of $N$ sentence vectors $(\mathbf {s_1},\dots ,\mathbf {s_N}) \in \mathbb {R}^{N\times 2d_s}$. HAN ::: Document encoder The document encoder is a self-attentional bidirectional GRU-RNN, like the sentence encoder, but it has different parameters. The dimensionality of its hidden states is $2d_d$. The document encoder is applied only once, to the sequence of sentence vectors, to produce the sequence of sentence annotations $(\mathbf {h}_{1}, \dots , \mathbf {h}_{N})$. Then, a self-attention layer outputs the final document vector. Proposed architecture: CAHAN As was previously explained, each sentence is encoded independently by HAN, without considering any kind of contextual information. To solve this issue, we inject a context vector $\mathbf {c_i}$ into the self-attention mechanism, to guide the model during the computation of the word alignment coefficients. In effect, Eq. DISPLAY_FORM12 becomes: We propose two approaches for computing $\mathbf {c_i}$, namely CAHAN-SUM and CAHAN-RNN, shown as the two bolded connections in Fig. FIGREF6. Proposed architecture: CAHAN ::: Summed context (CAHAN-SUM) We introduce two settings, (1) left-to-right and bidirectional. Whenever there is no preceding/following sentence, i.e., at the beginning/end of a document, the context vector is initialized with zeroes. Proposed architecture: CAHAN ::: Summed context (CAHAN-SUM) ::: Left-to-right (LR) In the LR case, the context vector is computed as the sum of the preceding sentence representations: Proposed architecture: CAHAN ::: Summed context (CAHAN-SUM) ::: Bidirectional (BI) In the BI case, we compute two context vectors, respectively by summing the representations of the sentences preceding and following the current sentence $S_i$. These two vectors are passed to two identical context-aware self-attention mechanisms (Eq. DISPLAY_FORM14) with different parameters. The resulting forward and backward sentence representations are then processed respectively by the forward and backward RNNs of the document encoder at level 2, and the resulting annotations are concatenated to produce the final sentence annotations. CAHAN-SUM was inspired by the coverage vectors of seq2seq architectures, which have been shown very effective in addressing under(over)-translation in NMT BIBREF9, and repetition in summarization BIBREF10. Such coverage vectors are typically computed as the sum, over all previous decoder steps, of the attention distribution over the source tokens. However, in our case, we cannot keep track of the attention distribution history, since sentences are unique and cannot be aligned. This is why we work with sentence representations instead. Proposed architecture: CAHAN ::: Summed context (CAHAN-SUM) ::: Centroid version (@!START@$\mu $@!END@) $\overrightarrow{\mathbf {c}_i}$, as defined by Eq. DISPLAY_FORM17, grows larger in magnitude as $i$ increases (the sum has more and more terms), which can blur the alignment decisions for the sentences at the end of a document (LR case), or both at the end and beginning of a document, when reading forwards and backwards (BI case). Therefore, we also experiment with a centroid, rather than sum, context vector: Proposed architecture: CAHAN ::: Recurrent Context (CAHAN-RNN) Here, we capitalize on the capability of RNNs, especially when equipped with LSTM or GRU units, to keep track of information over long time periods. 
We simply use as context vector the document encoder annotation at the preceding/following time step. That is, we have, in the LR case: By design, $\mathbf {h}_{i-1}$ summarizes the entire history $(\mathbf {s_1},\dots ,\mathbf {s_{i-1}})$ of sentence vectors, with a preference for the most recent time steps. If the sequence is very long though, even a GRU-RNN will eventually forget about the first elements. However, for the relatively short documents we experiment with (see Table TABREF29), we can assume the annotations of the document encoder to faithfully represent the entire sequence. Proposed architecture: CAHAN ::: Gated context In NMT, BIBREF11 introduced a gating mechanism to allow the decoder to balance the contribution of the source and target information in generating the next word. The same idea can be found in numerous other NMT studies, e.g., BIBREF2, BIBREF12, BIBREF13. Inspired by this line of research, we propose a modification of Eq. DISPLAY_FORM14 to let our model explicitly decide how much contextual information it should take into account in making its alignment decisions: $\mathbf {\lambda }$ is produced by a trainable mechanism taking as input the word annotations and the context vector: The sigmoid activation ensures that $\mathbf {\lambda }$ plays a filtering role, by squashing all its entries to $[0,1]$. The gate gives more expressiveness to the attention mechanism. Indeed, contextual information should not always be given the same importance, depending on the situation. E.g., when most of the document has been processed, context is likely to be very important, in order to limit redundancy and increase coverage. However, at the beginning of a document, or in the case of a very short or focused sentence, context might not be useful as only one single topic might be extractable from the sentence anyways. From an optimization perspective, $\mathbf {\lambda }$ also has the desirable effect of regulating the magnitude of the context vector, preventing it from pushing the tanh to regions of very small gradient. This is especially useful with CAHAN-SUM, as in that case, $\mathbf {c}_i$ gets large towards the end/beginning of documents (forwards/backwards reading). Proposed architecture: CAHAN ::: Complexity and sequentiality Assuming that $d \sim 2d_s$ and that $d_s \sim d_d$, which holds in practice under reasonable settings, all matrix multiplications in the network have similar complexity, of order of magnitude $\mathcal {O}(d^2)$. Moreover, since we use GRU-RNNs, there are 6 matrix multiplication per encoder. This number is doubled, as we use bidirectional RNNs. Finally, the two self-attention mechanisms, one at each level, add two multiplications. Therefore, in the HAN baseline architecture, there are a total of 26 matrix multiplications (13 at each level). To that, CAHAN-SUM and CAHAN-RNN simply add one matrix multiplication ($W_c\mathbf {c}_i$ in Eq. DISPLAY_FORM14) in the LR case and two in the BI case. This corresponds to negligible 4% and 8% increases in total computational cost. On top of that, gating adds two multiplications in the LR case ($W_{\lambda _1}\mathbf {h}_{it}$ and $W_{\lambda _2}\mathbf {c}_i$ in Eq. DISPLAY_FORM25) and four in the BI case. All in all, this represents three and six extra multiplications compared to basic HAN, resp. in the LR and BI cases. Again, this corresponds to small increases in computational cost, of 11.5% and 23%, respectively. 
However, with CAHAN-SUM, the representations of the preceding/following sentences are now required before computing the current sentence representation. With CAHAN-RNN, one even has to wait until the level 2 RNN has processed the preceding/following sentence vectors before being able to encode the current sentence. Therefore, the sentence encoding process, which was parallelizable with basic HAN due to independence, has now become a sequential process. This is why in practice, we observe slightly greater runtime increases, in the range 5-22% (see Table TABREF43). Experimental setup ::: Datasets We evaluate the quality of the document embeddings learned by the different variants of CAHAN and the HAN baseline on three of the large-scale document classification datasets introduced by BIBREF14 and used in the original HAN paper BIBREF5. They fall into two categories: topic classification (Yahoo) and fine-grained sentiment analysis (Amazon, Yelp). Dataset statistics are shown in Table TABREF29. Classes are perfectly balanced, for all datasets. Experimental setup ::: Model configuration This subsection describes the preprocessing and hyperparameter setting we used. Experimental setup ::: Model configuration ::: Preprocessing and word embeddings For preprocessing (and the HAN baseline), we used the publicly available implementation of BIBREF15, which closely follows the description and details given in the original HAN paper BIBREF5. More precisely, on each dataset, we randomly split the training set into training (90%) and validation (10%). Documents are then tokenized into sentences and sentences are tokenized into tokens. The tokens appearing less than 5 times in the corpus are replaced with a special UNK token. Finally, we pre-train our own word vectors with word2vec BIBREF16 on the training and validation splits. Experimental setup ::: Model configuration ::: Hyperparameters We do not tune any hyperparameter except the learning rate (see subsection SECREF35). We set the hidden layer dimensionality of the two RNN encoders to $d_s=50$ and $d_d=50$. Thus, the word annotations, sentence vectors, sentence annotations and document vector all have size 100. With regularization in mind, we set the dimensionality of the word embeddings to $d=200$ on the very large datasets (Amazon and Yahoo!) and to $d=100$ on Yelp, as shown in Table TABREF29. We also use a greater batch size of 128 on the large datasets, versus 64 on Yelp. Experimental setup ::: Training details We zero-pad sentences and documents. Like in BIBREF5, to make the most out of each batch, we ensure they are as dense as possible by using a bucketing strategy. More precisely, we build each batch so that it contains documents of approximately the same size, in number of sentences. For regularization, we use dropout BIBREF17 with a rate of 0.5 at each layer. For classification, the document vectors are passed to a dense layer with softmax activation, whose dimensionality is equal to the number of categories to be predicted. Initialization has a significant impact on performance. To make sure the differences we measure are due to differences in the models and not in initial condition, we use the same initialization weights for each model. Experimental setup ::: Training details ::: SGD with cyclical learning rate To minimize the categorical cross-entropy loss, we use the stochastic gradient descent optimizer with a triangular cyclical learning rate schedule and opposite triangular momentum schedule BIBREF18, BIBREF19. 
Following the authors' recommendations, we use a fixed $[0.85,0.95]$ momentum range, while for the learning rate, we perform a range test on the validation set, for each model, searching the $[0.001,3]$ range. With a triangular schedule, the learning rate linearly increases for a certain number of iterations (half-cycle), and then linearly decreases back to its initial value during the second half of the cycle. Cycles are repeated until training ends. High learning rate values make training faster, by allowing large updates and the use of greater batch sizes while keeping the amount of regularization constant. Also, the cyclical schedule injects beneficial stochastic noise to the gradient updates, which improves generalization BIBREF20. We use cycles of 12 epochs, and an early stopping strategy, monitoring the test loss, with a patience of slightly more than one cycle. We set the maximum number of epochs for all models to 50. Results As can be seen in Table TABREF37, the best version of CAHAN (SUM-BI-$\Sigma $) consistently outperforms the HAN baseline, which shows that taking contextual information into account helps producing better document representations. Also, the two unidirectional variants (LR) slightly underperform the baseline and are clearly inferior to BI, which illustrates the value added by processing the document forwards and backwards, using preceding and following sentences as context. Results ::: Summing vs. averaging In the unidirectional case, it is surprising to note that CAHAN-SUM-LR-$\mu $ is slightly better than CAHAN-SUM-LR-$\Sigma $, i.e., the centroid-based context vector (Eq. DISPLAY_FORM20) is better than the sum-based one (Eq. DISPLAY_FORM17). Indeed, from an information theory standpoint, it should be the opposite, as summing keeps track of all information whereas averaging is lossy. We hypothesize that towards the end of a document, the sum-based context vector grows large in magnitude, which perturbs the alignment decisions and deteriorates the quality of the sentence vectors. On the other hand, the centroid-based vector, which has constant magnitude, does not suffer from this issue. We further hypothesize that this issue is attenuated in the bidirectional case (CAHAN-SUM-BI-$\mu $ and CAHAN-SUM-BI-$\Sigma $ are on par) due to a counterbalancing phenomenon. Indeed, the last sentences processed by the left-to-right encoder are the first ones processed by the right-to-left encoder. Therefore, through concatenation, the overall quality of the sentence embeddings stays constant. Results ::: Gating As expected, gating improves performance, especially for the $\Sigma $ variants of CAHAN-SUM (and especially the LR ones). To be noted are significant boosts of 0.45 and 0.24 in accuracy respectively for CAHAN-SUM-LR-$\Sigma $ and CAHAN-SUM-BI-$\Sigma $ on Yelp. On Amazon, gating also offers CAHAN-SUM-LR-$\Sigma $ a nice 0.27 improvement. These positive results give a clue that regulating the magnitude of the context vector $\mathbf {c}_i$ is indeed beneficial. Nevertheless, gating also improves the performance of the $\mu $ variants of CAHAN, which do not suffer from the context vector magnitude issue. This shows that gating is also helpful via giving more expressiveness to the model. For instance, on Amazon, gating boosts the performance of CAHAN-SUM-BI-$\mu $ by 0.12. It is interesting to note that overall, gating is mostly effective on Yelp and Amazon. We attribute this to the difference in task. 
Sentiment analysis may rely more on contextual information than topic classification. Results ::: CAHAN-RNN-BI The consistently bad performance of CAHAN-RNN-BI is to be noted. This was unexpected, as an equivalent approach was used by BIBREF6 for dialogue act classification, with significant improvements. We hypothesize that in our case, CAHAN-RNN-BI is not effective because, unlike utterances in a speech transcription, sentences in a document are not ordered in a temporal fashion. In other words, sentences far away from the current sentence are not necessarily less relevant than closer sentences. Thus, considering each sentence equally is better than imposing an implicit time-decay via a RNN. Results ::: Runtimes We compare the average runtime per iteration of some variants of CAHAN to that of basic HAN in Table TABREF43. For CAHAN-SUM-$\Sigma $, we observe that the unidirectional variant (LR) is 5.7% slower than basic HAN (37 vs. 35ms per iteration), whereas the bidirectional variant (BI) is 23% slower (43 vs. 35 ms). When gating, these number increase to 14.3% and 37% (40 and 48ms vs. 35ms). These differences are not far from our theoretical expectations (see subsection SECREF26), especially for LR. Indeed, recall that based on matrix multiplication counts, we had forecasted increases of 4% and 8% (11.5% and 23% when using gating), respectively for LR and BI. The gap for BI can be explained by a probable bottleneck in the implementation. CAHAN-RNN adds the same number of matrix multiplications as CAHAN-SUM, so we should in principle observe the same increases. However, as was explained in subsection SECREF26, with CAHAN-RNN we have to wait until the level 2 RNN has processed the preceding or preceding/following sentence vectors (LR or BI case) before being able to encode the current sentence. This explains the extra-time needed (40 vs. 37ms and 49 vs. 43ms). Related work In what follows, we provide a review of the relevant literature. One should note that by context, in this paper, we do not refer to the intra-sentence or internal context vector of seq2seq encoders BIBREF21, BIBREF11, BIBREF13. Rather, we refer to the cross-sentence, external, or document-level context. A few studies only have focused on developing models that take that type of context into account. Most of these studies originate from NMT. We briefly describe them next. BIBREF2 obtain a global context vector by feeding a fixed number of the previous source sentences to HAN. They then compare two ways of injecting it into the encoder-decoder model. First, they propose a warm-start approach, in which the encoder and/or decoder hidden states are initialized with the context vector. Second, they experiment with an auxiliary strategy in which the intra-sentence context vector of the encoder is concatenated with the global context vector and passed either (i) directly to the decoder, or (ii) after going through a filtering gate. However, unlike our mechanism and that of BIBREF11, BIBREF12, BIBREF13, which all feature two coupled gates, the mechanism of BIBREF2 has only one gate. All strategies proposed by BIBREF2 significantly improve performance, but first place is reached by a combination of the warm-start and gated techniques. BIBREF22 use an approach similar to the auxiliary approach of BIBREF2, but they compute the context vector only from the sentence immediately preceding the current source sentence. They then pass it to a dedicated encoder featuring a customized attention mechanism. 
BIBREF12 and BIBREF23 both extend the Transformer architecture BIBREF24 with a context encoder featuring self-attentional and feed-forward layers. Then, BIBREF12 combine the context representation with the source representation produced by the basic Transformer encoder via a gating mechanism. They do not modify the decoder part of the Transformer. BIBREF23 go one step further by passing the contextual information both to the encoder and the decoder. In both cases, they add a self-attention mechanism over the context representation. For the decoder though, they also replace the residual connection after the context self-attention with a gating mechanism, to limit the influence of the context information on the source information. One piece of work closely related to our study is BIBREF3. The authors also use a hierarchical attention architecture, where at level 1, each paragraph of a document is encoded by a dedicated encoder. All encoders share the same stacking bi-RNN architecture. Moreover, they communicate at each layer to produce context-aware annotations of the words in their paragraphs. More precisely, at a given layer of the stacking RNN, a given encoder is passed the average of the representations learned by the other encoders at the corresponding layer (like with CAHAN-SUM-$\mu $). This context vector is then combined with the hidden states and passed as input to the upper RNN layer. At level 2, the top RNN layer annotations are passed to a word attention mechanism followed by a paragraph attention mechanism. A major difference with our work is that the authors combine the encoder with a decoder, to perform abstractive summarization of long documents, whereas we only focus on the encoding part. The word and paragraph attentional decisions at level 2 are thus made by the decoder. Another significant difference is that the authors use reinforcement learning for training, instead of SGD. Context-aware models have also been proposed in other NLP domains. E.g., for spoken language understanding, BIBREF7 prepend and append the current utterance with two special word vectors respectively summarizing the $C$ preceding and following utterances (respectively), where $C$ is a hyperparameter. This indirectly initializes the hidden states of the left-to-right and right-to-left components of a bidirectional RNN, like with the warm-start approach of BIBREF2. On the other hand, BIBREF6 rely on a mechanism equivalent to LR-CAHAN-RNN. They find that it significantly boosts dialogue act classification accuracy. As discussed in section SECREF5, we hypothesize that CAHAN-RNN is not effective in our application because sentences in a document are not ordered in a temporal manner. Discussion and next steps While bidirectional CAHAN-SUM systematically outperforms HAN, margins are modest. We attribute this to the fact that the datasets used in our experiments contain short documents (see Table TABREF29) featuring simple sentences. Thus, the superior expressiveness of CAHAN is not able to show. To address this issue, we plan in future work to experiment on datasets featuring long documents containing complex sentences. Moreover, the tasks of sentiment and topic classification do not require a deep understanding of the input documents. 
Even if a given document contains some complex sentences with multiple clauses and subtopics, capturing the polarity of only one simple, unambiguous sentence or pattern may be enough to accurately predict the category of the entire document (e.g., “hated”, “loved”, “definitely recommends”, “will never come back”, etc.). Thus, we hypothesize that when trained to solve such tasks, CAHAN does not learn to use its context-aware capabilities to the fullest extent. One solution, and promising area of future work, would consist in explicitly giving CAHAN knowledge about coverage, diversity, and redundancy. This could be done by modifying the sentence attention mechanism and/or by adding a term to the loss. Another natural next step is to experiment on tasks requiring a deeper understanding of text, such as end-to-end abstractive summarization. Some other ideas for improvement include combining CAHAN-SUM with CAHAN-RNN, and/or the mean and centroid vectors; for CAHAN-SUM, obtaining the centroid vector through a trainable mechanism rather than via pooling; and experimenting with a trainable matrix (instead of vector) in the self-attention at both level 1 and level 2, like in BIBREF25. Finally, the context vector could be seen as an external, general summary of the document, and be pre-computed offline by a dedicated encoder. Conclusion In this paper, we proposed several modifications of the HAN architecture that make the sentence encoder context-aware (CAHAN). Results show that taking context into account is beneficial. Specifically, the bidirectional version of the document encoder, that processes the documents forwards and backwards, using the preceding and following sentences as context, outperforms the HAN baseline on all datasets and is superior to the undirectional variant. Moreover, the computational overhead is small. Experiments on tasks requiring a deeper understanding of the input documents should better highlight the superiority of CAHAN. Acknowledgments We thank Xiang Zhang for sharing the datasets. We are grateful to the NVidia corporation for the donation of a GPU as part of their GPU grant program. This research was supported in part by the open-source project LinTo.
No
609fbe627309775de415682f48588937d5dd8748
609fbe627309775de415682f48588937d5dd8748_0
Q: What are the datasets used Text: Introduction Recently, hierarchical architectures have become ubiquitous in NLP. They have been applied to a wide variety of tasks such as language modeling and generation BIBREF0, BIBREF1, neural machine translation (NMT) BIBREF2, summarization BIBREF3, sentiment and topic classification BIBREF4, BIBREF5, and spoken language understanding BIBREF6, BIBREF7, to cite only a few examples. All hierarchical architectures capitalize on the same intuitive idea that the representation of the input text should be learned in a bottom-up fashion by using a different encoder at each granularity level (e.g., words, sentences, paragraphs), where the encoder at level $l+1$ takes as input the output of the encoder at level $l$. One of the earliest and most influential examples is the Hierarchical Attention Network (HAN) of BIBREF5 (see Fig. FIGREF6 and section SECREF2). It is a two-level architecture, where at level 1, each sentence in the document is separately encoded by the same sentence encoder, resulting in a sequence of sentence vectors. That sequence is then processed at level 2 by the document encoder which returns a single vector representing the entire document. The sentence and document encoders are both self-attentional bidirectional Recurrent Neural Networks (RNNs), with different parameters. Introduction ::: Observed problem HAN was highly successful and established new state of the art on six large-scale sentiment and topic classification datasets. However, it has a major weakness: at level 1, each sentence is encoded in isolation. That is, while producing the representation of a given sentence in the document, HAN completely ignores the other sentences. This lack of communication is obviously suboptimal. For example, in Fig. FIGREF2, the same highly negative feature (“terrible value”) has been repeated at the beginning of each sentence in the document. Because it encodes each sentence independently, HAN has no choice but to spend most of its attentional budget on the most salient feature every time. As a result, HAN neglects the other aspects of the document. On the other hand, CAHAN is informed about the context, and thus quickly stops spending attention weight on the same highly negative pattern, knowing that is has already been covered. CAHAN is then able to cover the other topics in the document (“seafood”,“scallops” and “mussels”; “entree” and “appetizer”; triple negation in the fourth sentence). As another example, consider the edge case of a document containing the same sentence repeated several times, as shown in Fig. FIGREF3. With HAN, the exact same embedding is produced for each instantiation of the sentence, as a result of the context-blind self-attention mechanism always making the same alignment decisions. However, the context-aware sentence encoder of CAHAN allows it to extract complementary, rather than redundant information, from each instantiation of the sentence. This results in better coverage (“reasonably priced”, “arrived late”), in a richer document representation, and ultimately in a more accurate prediction (positive instead of very positive). One may argue that in basic HAN, the document encoder at level 2 already does capture some notion of context, by assigning importance scores to sentences. However, at level 2, the sentence vectors have already been formed, and it is too late to modify them. Since the document encoder can only rank the sentence representations, it cannot address issues like high redundancy. 
In that case, important subtopics or details in the document will not be covered, no matter sentence scores. Introduction ::: Context-aware HAN In this work, we propose and evaluate several modifications of the HAN architecture that allow the sentence encoder at level 1 to make its attentional decisions based on contextual information, allowing it to learn richer document representations. Another significant contribution is the introduction of a bidirectional version of the document encoder, where one RNN processes the document forwards, using the preceding sentences as context, and another one processes it backwards, using the following sentences as context. The remainder of this paper is structured as follows. We start by formally introducing basic HAN (section SECREF2), we then explain our contributions (section SECREF3), and detail our experimental setup (section SECREF4). Finally, we interpret our results and list areas of future development (sections SECREF5 and SECREF7). Related work is reviewed in section SECREF6. HAN The baseline HAN model as introduced by BIBREF5 is shown in Fig. FIGREF6 along with our modifications (disregard the bold lines for the baseline). The sentence and document encoders, used respectively at level 1 and level 2, have different parameters but share the exact same architecture. Thus, in what follows, we only describe the sentence encoder in detail. HAN ::: Notation Next, we use boldface upper case for tensors, upper case for matrices, boldface lower case for vectors, and lower case for scalars. We define a document $\mathbf {X} \in \mathbb {R}^{N \times T_i \times d}$ as a sequence of $N$ sentences $(S_1, \dots , S_N)$. Each sentence $S_i$ is a sequence of $T_i$ $d$-dimensional word vectors $(\mathbf {x}_{i1}, \dots , \mathbf {x}_{iT_i}) \in \mathbb {R}^{T_i \times d}$. HAN ::: Sentence encoder First, the sentence-level bidirectional RNN $f_s$ processes the input sentence $S_i$ and returns a sequence of $T_i$ $2d_s$-dimensional hidden states $(\mathbf {h}_{i1},\dots , \mathbf {h}_{iT_i}) \in \mathbb {R}^{T_i \times 2d_s}$. $f_s$ is composed of two non-stacking RNNs $\overrightarrow{f_s}$ and $\overleftarrow{f_s}$ with Gated Recurrent Units BIBREF8, respectively parsing $S_i$ from left to right and right to left: $\overrightarrow{f_s}$ and $\overleftarrow{f_s}$ have the same hidden layer dimensionality $d_s$, but different parameters. At each time step $t$, the word annotations they return are concatenated, producing $2d_s$-dimensional annotations that summarize the immediate context surrounding each word: Then, a self-attention mechanism computes the representation $\mathbf {s}_i$ of sentence $S_i$ as a weighted sum of its word annotations: Where the vector of attentional coefficients $\mathbf {\alpha }$ is a softmax-normalized version of the alignment vector $\mathbf {e}$, which itself is obtained by passing the word annotations through a dense layer (parameterized by $W_s \in \mathbb {R}^{2d_s\times 2d_s}$) and comparing the output with a trainable vector $\mathbf {u}_s \in \mathbb {R}^{2d_s}$: $\mathbf {u}_s$ is initialized randomly. It can be interpreted as a “super-word” whose vector contains the ideal combination of latent topics, on average. The closest the annotation of a word is to this ideal representation, the more attention that word will be given. The sentence encoder is applied to all sentences in document $\mathbf {X}$, producing a sequence of $N$ sentence vectors $(\mathbf {s_1},\dots ,\mathbf {s_N}) \in \mathbb {R}^{N\times 2d_s}$. 
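To make the sentence encoder concrete, the short PyTorch sketch below implements the bidirectional GRU followed by the self-attention described above (dense layer parameterized by $W_s$, comparison with the trainable vector $\mathbf {u}_s$, softmax, weighted sum). It is an illustrative sketch, not the authors' released code; the class name is ours and the default dimensions simply follow the paper's $d=200$ and $d_s=50$.

import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """Bidirectional GRU over word vectors + self-attention (HAN level 1)."""
    def __init__(self, d=200, d_s=50):
        super().__init__()
        self.gru = nn.GRU(input_size=d, hidden_size=d_s,
                          batch_first=True, bidirectional=True)  # annotations of size 2*d_s
        self.W_s = nn.Linear(2 * d_s, 2 * d_s)                   # dense layer W_s (with bias)
        self.u_s = nn.Parameter(torch.randn(2 * d_s))            # trainable "super-word" u_s

    def forward(self, x):
        # x: (batch, T_i, d) word embeddings of one sentence per batch element
        h, _ = self.gru(x)                        # (batch, T_i, 2*d_s) word annotations
        e = torch.tanh(self.W_s(h)) @ self.u_s    # (batch, T_i) alignment scores
        alpha = torch.softmax(e, dim=1)           # attention weights
        s = (alpha.unsqueeze(-1) * h).sum(dim=1)  # (batch, 2*d_s) sentence vector
        return s, alpha

# Example: encode a batch of 3 sentences of 12 words each
encoder = SentenceEncoder()
sentence_vectors, attention = encoder(torch.randn(3, 12, 200))
print(sentence_vectors.shape)   # torch.Size([3, 100])

The document encoder described next has the same structure, applied once to the sequence of sentence vectors instead of word vectors.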
HAN ::: Document encoder The document encoder is a self-attentional bidirectional GRU-RNN, like the sentence encoder, but it has different parameters. The dimensionality of its hidden states is $2d_d$. The document encoder is applied only once, to the sequence of sentence vectors, to produce the sequence of sentence annotations $(\mathbf {h}_{1}, \dots , \mathbf {h}_{N})$. Then, a self-attention layer outputs the final document vector. Proposed architecture: CAHAN As was previously explained, each sentence is encoded independently by HAN, without considering any kind of contextual information. To solve this issue, we inject a context vector $\mathbf {c_i}$ into the self-attention mechanism, to guide the model during the computation of the word alignment coefficients. In effect, Eq. DISPLAY_FORM12 becomes: We propose two approaches for computing $\mathbf {c_i}$, namely CAHAN-SUM and CAHAN-RNN, shown as the two bolded connections in Fig. FIGREF6. Proposed architecture: CAHAN ::: Summed context (CAHAN-SUM) We introduce two settings, (1) left-to-right and bidirectional. Whenever there is no preceding/following sentence, i.e., at the beginning/end of a document, the context vector is initialized with zeroes. Proposed architecture: CAHAN ::: Summed context (CAHAN-SUM) ::: Left-to-right (LR) In the LR case, the context vector is computed as the sum of the preceding sentence representations: Proposed architecture: CAHAN ::: Summed context (CAHAN-SUM) ::: Bidirectional (BI) In the BI case, we compute two context vectors, respectively by summing the representations of the sentences preceding and following the current sentence $S_i$. These two vectors are passed to two identical context-aware self-attention mechanisms (Eq. DISPLAY_FORM14) with different parameters. The resulting forward and backward sentence representations are then processed respectively by the forward and backward RNNs of the document encoder at level 2, and the resulting annotations are concatenated to produce the final sentence annotations. CAHAN-SUM was inspired by the coverage vectors of seq2seq architectures, which have been shown very effective in addressing under(over)-translation in NMT BIBREF9, and repetition in summarization BIBREF10. Such coverage vectors are typically computed as the sum, over all previous decoder steps, of the attention distribution over the source tokens. However, in our case, we cannot keep track of the attention distribution history, since sentences are unique and cannot be aligned. This is why we work with sentence representations instead. Proposed architecture: CAHAN ::: Summed context (CAHAN-SUM) ::: Centroid version (@!START@$\mu $@!END@) $\overrightarrow{\mathbf {c}_i}$, as defined by Eq. DISPLAY_FORM17, grows larger in magnitude as $i$ increases (the sum has more and more terms), which can blur the alignment decisions for the sentences at the end of a document (LR case), or both at the end and beginning of a document, when reading forwards and backwards (BI case). Therefore, we also experiment with a centroid, rather than sum, context vector: Proposed architecture: CAHAN ::: Recurrent Context (CAHAN-RNN) Here, we capitalize on the capability of RNNs, especially when equipped with LSTM or GRU units, to keep track of information over long time periods. We simply use as context vector the document encoder annotation at the preceding/following time step. 
That is, we have, in the LR case: By design, $\mathbf {h}_{i-1}$ summarizes the entire history $(\mathbf {s_1},\dots ,\mathbf {s_{i-1}})$ of sentence vectors, with a preference for the most recent time steps. If the sequence is very long though, even a GRU-RNN will eventually forget about the first elements. However, for the relatively short documents we experiment with (see Table TABREF29), we can assume the annotations of the document encoder to faithfully represent the entire sequence. Proposed architecture: CAHAN ::: Gated context In NMT, BIBREF11 introduced a gating mechanism to allow the decoder to balance the contribution of the source and target information in generating the next word. The same idea can be found in numerous other NMT studies, e.g., BIBREF2, BIBREF12, BIBREF13. Inspired by this line of research, we propose a modification of Eq. DISPLAY_FORM14 to let our model explicitly decide how much contextual information it should take into account in making its alignment decisions: $\mathbf {\lambda }$ is produced by a trainable mechanism taking as input the word annotations and the context vector: The sigmoid activation ensures that $\mathbf {\lambda }$ plays a filtering role, by squashing all its entries to $[0,1]$. The gate gives more expressiveness to the attention mechanism. Indeed, contextual information should not always be given the same importance, depending on the situation. E.g., when most of the document has been processed, context is likely to be very important, in order to limit redundancy and increase coverage. However, at the beginning of a document, or in the case of a very short or focused sentence, context might not be useful as only one single topic might be extractable from the sentence anyways. From an optimization perspective, $\mathbf {\lambda }$ also has the desirable effect of regulating the magnitude of the context vector, preventing it from pushing the tanh to regions of very small gradient. This is especially useful with CAHAN-SUM, as in that case, $\mathbf {c}_i$ gets large towards the end/beginning of documents (forwards/backwards reading). Proposed architecture: CAHAN ::: Complexity and sequentiality Assuming that $d \sim 2d_s$ and that $d_s \sim d_d$, which holds in practice under reasonable settings, all matrix multiplications in the network have similar complexity, of order of magnitude $\mathcal {O}(d^2)$. Moreover, since we use GRU-RNNs, there are 6 matrix multiplication per encoder. This number is doubled, as we use bidirectional RNNs. Finally, the two self-attention mechanisms, one at each level, add two multiplications. Therefore, in the HAN baseline architecture, there are a total of 26 matrix multiplications (13 at each level). To that, CAHAN-SUM and CAHAN-RNN simply add one matrix multiplication ($W_c\mathbf {c}_i$ in Eq. DISPLAY_FORM14) in the LR case and two in the BI case. This corresponds to negligible 4% and 8% increases in total computational cost. On top of that, gating adds two multiplications in the LR case ($W_{\lambda _1}\mathbf {h}_{it}$ and $W_{\lambda _2}\mathbf {c}_i$ in Eq. DISPLAY_FORM25) and four in the BI case. All in all, this represents three and six extra multiplications compared to basic HAN, resp. in the LR and BI cases. Again, this corresponds to small increases in computational cost, of 11.5% and 23%, respectively. However, with CAHAN-SUM, the representations of the preceding/following sentences are now required before computing the current sentence representation. 
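As a minimal illustration of the context-aware attention of Eq. DISPLAY_FORM14 together with the gate of Eq. DISPLAY_FORM25, the PyTorch sketch below scores each word annotation against a projected context vector. Since the text does not spell out exactly how the gate $\mathbf {\lambda }$ is combined with $W_c\mathbf {c}_i$, the elementwise product used here is an assumption, and all names and dimensions are illustrative.

import torch
import torch.nn as nn

class ContextAwareAttention(nn.Module):
    """Word self-attention conditioned on a document-level context vector (CAHAN)."""
    def __init__(self, d_s=50):
        super().__init__()
        dim = 2 * d_s
        self.W_s = nn.Linear(dim, dim)
        self.W_c = nn.Linear(dim, dim, bias=False)    # projects the context vector c_i
        self.W_l1 = nn.Linear(dim, dim, bias=False)   # gate, word-annotation part
        self.W_l2 = nn.Linear(dim, dim, bias=False)   # gate, context part
        self.u_s = nn.Parameter(torch.randn(dim))

    def forward(self, h, c):
        # h: (batch, T_i, 2*d_s) word annotations; c: (batch, 2*d_s) context vector
        lam = torch.sigmoid(self.W_l1(h) + self.W_l2(c).unsqueeze(1))      # lambda in [0,1]
        e = torch.tanh(self.W_s(h) + lam * self.W_c(c).unsqueeze(1)) @ self.u_s
        alpha = torch.softmax(e, dim=1)
        return (alpha.unsqueeze(-1) * h).sum(dim=1)   # context-aware sentence vector

# CAHAN-SUM (LR): c is the sum (or centroid) of the previously encoded sentence
# vectors, initialized with zeros for the first sentence of a document.
attention = ContextAwareAttention()
h = torch.randn(4, 15, 100)      # 4 sentences, 15 word annotations each
c = torch.zeros(4, 100)          # e.g. the first sentence of 4 documents
print(attention(h, c).shape)     # torch.Size([4, 100])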
With CAHAN-RNN, one even has to wait until the level 2 RNN has processed the preceding/following sentence vectors before being able to encode the current sentence. Therefore, the sentence encoding process, which was parallelizable with basic HAN due to independence, has now become a sequential process. This is why in practice, we observe slightly greater runtime increases, in the range 5-22% (see Table TABREF43). Experimental setup ::: Datasets We evaluate the quality of the document embeddings learned by the different variants of CAHAN and the HAN baseline on three of the large-scale document classification datasets introduced by BIBREF14 and used in the original HAN paper BIBREF5. They fall into two categories: topic classification (Yahoo) and fine-grained sentiment analysis (Amazon, Yelp). Dataset statistics are shown in Table TABREF29. Classes are perfectly balanced, for all datasets. Experimental setup ::: Model configuration This subsection describes the preprocessing and hyperparameter setting we used. Experimental setup ::: Model configuration ::: Preprocessing and word embeddings For preprocessing (and the HAN baseline), we used the publicly available implementation of BIBREF15, which closely follows the description and details given in the original HAN paper BIBREF5. More precisely, on each dataset, we randomly split the training set into training (90%) and validation (10%). Documents are then tokenized into sentences and sentences are tokenized into tokens. The tokens appearing less than 5 times in the corpus are replaced with a special UNK token. Finally, we pre-train our own word vectors with word2vec BIBREF16 on the training and validation splits. Experimental setup ::: Model configuration ::: Hyperparameters We do not tune any hyperparameter except the learning rate (see subsection SECREF35). We set the hidden layer dimensionality of the two RNN encoders to $d_s=50$ and $d_d=50$. Thus, the word annotations, sentence vectors, sentence annotations and document vector all have size 100. With regularization in mind, we set the dimensionality of the word embeddings to $d=200$ on the very large datasets (Amazon and Yahoo!) and to $d=100$ on Yelp, as shown in Table TABREF29. We also use a greater batch size of 128 on the large datasets, versus 64 on Yelp. Experimental setup ::: Training details We zero-pad sentences and documents. Like in BIBREF5, to make the most out of each batch, we ensure they are as dense as possible by using a bucketing strategy. More precisely, we build each batch so that it contains documents of approximately the same size, in number of sentences. For regularization, we use dropout BIBREF17 with a rate of 0.5 at each layer. For classification, the document vectors are passed to a dense layer with softmax activation, whose dimensionality is equal to the number of categories to be predicted. Initialization has a significant impact on performance. To make sure the differences we measure are due to differences in the models and not in initial condition, we use the same initialization weights for each model. Experimental setup ::: Training details ::: SGD with cyclical learning rate To minimize the categorical cross-entropy loss, we use the stochastic gradient descent optimizer with a triangular cyclical learning rate schedule and opposite triangular momentum schedule BIBREF18, BIBREF19. 
Following the authors' recommendations, we use a fixed $[0.85,0.95]$ momentum range, while for the learning rate, we perform a range test on the validation set, for each model, searching the $[0.001,3]$ range. With a triangular schedule, the learning rate linearly increases for a certain number of iterations (half-cycle), and then linearly decreases back to its initial value during the second half of the cycle. Cycles are repeated until training ends. High learning rate values make training faster, by allowing large updates and the use of greater batch sizes while keeping the amount of regularization constant. Also, the cyclical schedule injects beneficial stochastic noise to the gradient updates, which improves generalization BIBREF20. We use cycles of 12 epochs, and an early stopping strategy, monitoring the test loss, with a patience of slightly more than one cycle. We set the maximum number of epochs for all models to 50. Results As can be seen in Table TABREF37, the best version of CAHAN (SUM-BI-$\Sigma $) consistently outperforms the HAN baseline, which shows that taking contextual information into account helps producing better document representations. Also, the two unidirectional variants (LR) slightly underperform the baseline and are clearly inferior to BI, which illustrates the value added by processing the document forwards and backwards, using preceding and following sentences as context. Results ::: Summing vs. averaging In the unidirectional case, it is surprising to note that CAHAN-SUM-LR-$\mu $ is slightly better than CAHAN-SUM-LR-$\Sigma $, i.e., the centroid-based context vector (Eq. DISPLAY_FORM20) is better than the sum-based one (Eq. DISPLAY_FORM17). Indeed, from an information theory standpoint, it should be the opposite, as summing keeps track of all information whereas averaging is lossy. We hypothesize that towards the end of a document, the sum-based context vector grows large in magnitude, which perturbs the alignment decisions and deteriorates the quality of the sentence vectors. On the other hand, the centroid-based vector, which has constant magnitude, does not suffer from this issue. We further hypothesize that this issue is attenuated in the bidirectional case (CAHAN-SUM-BI-$\mu $ and CAHAN-SUM-BI-$\Sigma $ are on par) due to a counterbalancing phenomenon. Indeed, the last sentences processed by the left-to-right encoder are the first ones processed by the right-to-left encoder. Therefore, through concatenation, the overall quality of the sentence embeddings stays constant. Results ::: Gating As expected, gating improves performance, especially for the $\Sigma $ variants of CAHAN-SUM (and especially the LR ones). To be noted are significant boosts of 0.45 and 0.24 in accuracy respectively for CAHAN-SUM-LR-$\Sigma $ and CAHAN-SUM-BI-$\Sigma $ on Yelp. On Amazon, gating also offers CAHAN-SUM-LR-$\Sigma $ a nice 0.27 improvement. These positive results give a clue that regulating the magnitude of the context vector $\mathbf {c}_i$ is indeed beneficial. Nevertheless, gating also improves the performance of the $\mu $ variants of CAHAN, which do not suffer from the context vector magnitude issue. This shows that gating is also helpful via giving more expressiveness to the model. For instance, on Amazon, gating boosts the performance of CAHAN-SUM-BI-$\mu $ by 0.12. It is interesting to note that overall, gating is mostly effective on Yelp and Amazon. We attribute this to the difference in task. 
Sentiment analysis may rely more on contextual information than topic classification. Results ::: CAHAN-RNN-BI The consistently bad performance of CAHAN-RNN-BI is to be noted. This was unexpected, as an equivalent approach was used by BIBREF6 for dialogue act classification, with significant improvements. We hypothesize that in our case, CAHAN-RNN-BI is not effective because, unlike utterances in a speech transcription, sentences in a document are not ordered in a temporal fashion. In other words, sentences far away from the current sentence are not necessarily less relevant than closer sentences. Thus, considering each sentence equally is better than imposing an implicit time-decay via a RNN. Results ::: Runtimes We compare the average runtime per iteration of some variants of CAHAN to that of basic HAN in Table TABREF43. For CAHAN-SUM-$\Sigma $, we observe that the unidirectional variant (LR) is 5.7% slower than basic HAN (37 vs. 35ms per iteration), whereas the bidirectional variant (BI) is 23% slower (43 vs. 35 ms). When gating, these number increase to 14.3% and 37% (40 and 48ms vs. 35ms). These differences are not far from our theoretical expectations (see subsection SECREF26), especially for LR. Indeed, recall that based on matrix multiplication counts, we had forecasted increases of 4% and 8% (11.5% and 23% when using gating), respectively for LR and BI. The gap for BI can be explained by a probable bottleneck in the implementation. CAHAN-RNN adds the same number of matrix multiplications as CAHAN-SUM, so we should in principle observe the same increases. However, as was explained in subsection SECREF26, with CAHAN-RNN we have to wait until the level 2 RNN has processed the preceding or preceding/following sentence vectors (LR or BI case) before being able to encode the current sentence. This explains the extra-time needed (40 vs. 37ms and 49 vs. 43ms). Related work In what follows, we provide a review of the relevant literature. One should note that by context, in this paper, we do not refer to the intra-sentence or internal context vector of seq2seq encoders BIBREF21, BIBREF11, BIBREF13. Rather, we refer to the cross-sentence, external, or document-level context. A few studies only have focused on developing models that take that type of context into account. Most of these studies originate from NMT. We briefly describe them next. BIBREF2 obtain a global context vector by feeding a fixed number of the previous source sentences to HAN. They then compare two ways of injecting it into the encoder-decoder model. First, they propose a warm-start approach, in which the encoder and/or decoder hidden states are initialized with the context vector. Second, they experiment with an auxiliary strategy in which the intra-sentence context vector of the encoder is concatenated with the global context vector and passed either (i) directly to the decoder, or (ii) after going through a filtering gate. However, unlike our mechanism and that of BIBREF11, BIBREF12, BIBREF13, which all feature two coupled gates, the mechanism of BIBREF2 has only one gate. All strategies proposed by BIBREF2 significantly improve performance, but first place is reached by a combination of the warm-start and gated techniques. BIBREF22 use an approach similar to the auxiliary approach of BIBREF2, but they compute the context vector only from the sentence immediately preceding the current source sentence. They then pass it to a dedicated encoder featuring a customized attention mechanism. 
BIBREF12 and BIBREF23 both extend the Transformer architecture BIBREF24 with a context encoder featuring self-attentional and feed-forward layers. Then, BIBREF12 combine the context representation with the source representation produced by the basic Transformer encoder via a gating mechanism. They do not modify the decoder part of the Transformer. BIBREF23 go one step further by passing the contextual information both to the encoder and the decoder. In both cases, they add a self-attention mechanism over the context representation. For the decoder though, they also replace the residual connection after the context self-attention with a gating mechanism, to limit the influence of the context information on the source information. One piece of work closely related to our study is BIBREF3. The authors also use a hierarchical attention architecture, where at level 1, each paragraph of a document is encoded by a dedicated encoder. All encoders share the same stacking bi-RNN architecture. Moreover, they communicate at each layer to produce context-aware annotations of the words in their paragraphs. More precisely, at a given layer of the stacking RNN, a given encoder is passed the average of the representations learned by the other encoders at the corresponding layer (like with CAHAN-SUM-$\mu $). This context vector is then combined with the hidden states and passed as input to the upper RNN layer. At level 2, the top RNN layer annotations are passed to a word attention mechanism followed by a paragraph attention mechanism. A major difference with our work is that the authors combine the encoder with a decoder, to perform abstractive summarization of long documents, whereas we only focus on the encoding part. The word and paragraph attentional decisions at level 2 are thus made by the decoder. Another significant difference is that the authors use reinforcement learning for training, instead of SGD. Context-aware models have also been proposed in other NLP domains. E.g., for spoken language understanding, BIBREF7 prepend and append the current utterance with two special word vectors respectively summarizing the $C$ preceding and following utterances (respectively), where $C$ is a hyperparameter. This indirectly initializes the hidden states of the left-to-right and right-to-left components of a bidirectional RNN, like with the warm-start approach of BIBREF2. On the other hand, BIBREF6 rely on a mechanism equivalent to LR-CAHAN-RNN. They find that it significantly boosts dialogue act classification accuracy. As discussed in section SECREF5, we hypothesize that CAHAN-RNN is not effective in our application because sentences in a document are not ordered in a temporal manner. Discussion and next steps While bidirectional CAHAN-SUM systematically outperforms HAN, margins are modest. We attribute this to the fact that the datasets used in our experiments contain short documents (see Table TABREF29) featuring simple sentences. Thus, the superior expressiveness of CAHAN is not able to show. To address this issue, we plan in future work to experiment on datasets featuring long documents containing complex sentences. Moreover, the tasks of sentiment and topic classification do not require a deep understanding of the input documents. 
Even if a given document contains some complex sentences with multiple clauses and subtopics, capturing the polarity of only one simple, unambiguous sentence or pattern may be enough to accurately predict the category of the entire document (e.g., “hated”, “loved”, “definitely recommends”, “will never come back”, etc.). Thus, we hypothesize that when trained to solve such tasks, CAHAN does not learn to use its context-aware capabilities to the fullest extent. One solution, and promising area of future work, would consist in explicitly giving CAHAN knowledge about coverage, diversity, and redundancy. This could be done by modifying the sentence attention mechanism and/or by adding a term to the loss. Another natural next step is to experiment on tasks requiring a deeper understanding of text, such as end-to-end abstractive summarization. Some other ideas for improvement include combining CAHAN-SUM with CAHAN-RNN, and/or the mean and centroid vectors; for CAHAN-SUM, obtaining the centroid vector through a trainable mechanism rather than via pooling; and experimenting with a trainable matrix (instead of vector) in the self-attention at both level 1 and level 2, like in BIBREF25. Finally, the context vector could be seen as an external, general summary of the document, and be pre-computed offline by a dedicated encoder. Conclusion In this paper, we proposed several modifications of the HAN architecture that make the sentence encoder context-aware (CAHAN). Results show that taking context into account is beneficial. Specifically, the bidirectional version of the document encoder, that processes the documents forwards and backwards, using the preceding and following sentences as context, outperforms the HAN baseline on all datasets and is superior to the undirectional variant. Moreover, the computational overhead is small. Experiments on tasks requiring a deeper understanding of the input documents should better highlight the superiority of CAHAN. Acknowledgments We thank Xiang Zhang for sharing the datasets. We are grateful to the NVidia corporation for the donation of a GPU as part of their GPU grant program. This research was supported in part by the open-source project LinTo.
large-scale document classification datasets introduced by BIBREF14
4a8bceb3b6d45f14c4749115d6aa83912f0b0a6e
4a8bceb3b6d45f14c4749115d6aa83912f0b0a6e_0
Q: Do they evaluate only on English datasets? Text: Introduction The rapid growth of social media platforms such as Twitter provides rich multimedia data in large scales for various research opportunities, such as sentiment analysis which focuses on automatically sentiment (positive and negative) prediction on given contents. Sentiment analysis has been widely used in real world applications by analyzing the online user-generated data, such as election prediction, opinion mining and business-related activity analysis. Emojis, which consist of various symbols ranging from cartoon facial expressions to figures such as flags and sports, are widely used in daily communications to express people's feelings . Since their first release in 2010, emojis have taken the place of emoticons (such as “:- INLINEFORM0 ” and “:-P”) BIBREF0 to create a new form of language for social media users BIBREF1 . According to recent science reports, there are 2,823 emojis in unicode standard in Emoji 11.0 , with over 50% of the Instagram posts containing one or more emojis BIBREF2 and 92% of the online population using emojis BIBREF3 . The extensive use of emojis has drawn a growing attention from researchers BIBREF4 , BIBREF5 because the emojis convey fruitful semantical and sentimental information to visually complement the textual information which is significantly useful in understanding the embedded emotional signals in texts BIBREF6 . For example, emoji embeddings have been proposed to understand the semantics behind the emojis BIBREF7 , BIBREF8 , and the embedding vectors can be used to visualize and predict emoji usages given their corresponding contexts. Previous work also shows that, it is useful to pre-train a deep neural network on an emoji prediction task with pre-trained emoji embeddings to learn the emotional signals of emojis for other tasks including sentiment, emotion and sarcasm prediction BIBREF9 . However, the previous literatures lack in considerations of the linguistic complexities and diversity of emoji. Therefore, previous emoji embedding methods fail to handle the situation when the semantics or sentiments of the learned emoji embeddings contradict the information from the corresponding contexts BIBREF5 , or when the emojis convey multiple senses of semantics and sentiments such as ( and ). In practice, emojis can either summarize and emphasis the original tune of their contexts, or express more complex semantics such as irony and sarcasm by being combined with contexts of contradictory semantics or sentiments. For the examples shown in Table TABREF3 , the emoji () is of consistent sentiment with text to emphasis the sentiment, but is of the opposite sentiment (positive) to the text sentiment (negative) example 3 and 4 to deliver a sense of sarcasm. Conventional emoji analysis can only extract single embedding of each emoji, and such embeddings will confuse the following sentiment analysis model by inconsistent sentiment signals from the input texts and emojis. Moreover, we consider the emoji effect modeling different from the conventional multimodal sentiment analysis which usually includes images and texts in that, image sentiment and text sentiment are usually assumed to be consistent BIBREF10 while it carries no such assumption for texts and emojis. To tackle such limitations, we propose a novel scheme that consists of an attention-based recurrent neural network (RNN) with robust bi-sense emoji embeddings. 
Inspired by the word sense embedding task in natural language processing (NLP) BIBREF11 , BIBREF12 , BIBREF13 where each sense of an ambiguous word responds to one unique embedding vector, the proposed bi-sense embedding is a more robust and fine-grained representation of the complicated semantics for emojis where each emoji is embedded into two distinct vectors, namely positive-sense and negative-sense vector, respectively. For our specific task which is Twitter sentiment analysis BIBREF14 , BIBREF15 , we initialize the bi-sense embedding vectors together with word embedding vectors using word embedding algorithm fasttext BIBREF16 by extracting two distinct embeddings for each emoji according to the sentiment of its corresponding textual contexts, namely bi-sense embedding. A long short-term memory (LSTM) based recurrent neural network is then used for predicting sentiments which is integrated with the pre-trained emoji embedding features by a context-guide and self-selected attention mechanism. Because most of the previous Twitter sentiment datasets exclude emojis and there exists little resource that contains sufficient emoji-tweets with sentiment labels, we construct our own emoji-tweets dataset by automatically generating weak labels using a rule-based sentiment analysis algorithm Vader BIBREF17 for pre-traning the networks, and manually labeling a subset of tweets for fine tuning and testing purposes. The experimental results demonstrate that the bi-sense emoji embedding is capable of extracting more distinguished information from emojis and outperforms the state-of-the-art sentiment analysis models with the proposed attention-based LSTM networks. We further visualize the bi-sense emoji embedding to obtain the sentiments and semantics learned by the proposed approach. The main contributions of this paper are summarized as follows. Sentiment Analysis Sentiment analysis is to extract and quantify subjective information including the status of attitudes, emotions and opinions from a variety of contents such as texts, images and audios BIBREF18 . Sentiment analysis has been drawing great attentions because of its wide applications in business and government intelligence, political science, sociology and psychology BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . From a technical perspective, textual sentiment analysis is first explored by researchers as an NLP task. Methods range from lexical-based approaches using features including keywords BIBREF23 , BIBREF24 where each word corresponds to a sentiment vector with entries representing the possibility of the word and each sentiment and phase-level features (n-grams and unigrams) BIBREF25 , BIBREF26 , to deep neural network based embedding approaches including skip-grams, continuous bag-of-words (CBoW) and skip-thoughts BIBREF27 , BIBREF28 , BIBREF16 , BIBREF29 . It was until recent years when researchers start focusing on image and multimodal sentiments BIBREF30 , BIBREF31 and analyzing how to take advantage of the cross-modality resources BIBREF10 , BIBREF32 . For multimodal sentiment analysis, an underlying assumption is that both modalities express similar sentiment and such similarity is enforced in order to train a robust sentiment inference model BIBREF10 . However, the same assumption does not stand in modeling textual tweets and emojis because the complexities of natural language exist extensively, such as the use of irony, jokes, sarcasm, etc. BIBREF9 . 
Models We set up the baselines and proposed models as follows: LSTM with text embedding: CNNs and LSTMs are widely used to encode textual contents for sentiment analysis in BIBREF45 , BIBREF46 and many online tutorials. Here we select the standard LSTM with pre-trained word embedding as input, and add one fully-connected layer with sigmoid activation top of the LSTM encoder (same as all other models), denoted as T-LSTM. LSTM with emoji embedding: We consider the emoji as one special word and input both pre-trained text and emoji embeddings into the same LSTM network, namely E-LSTM. Similarly, we concatenate the pre-trained bi-sense emoji embedding as one special word to feed into the LSTM network. This model is called BiE-LSTM. Attention-based LSTM with emojis:We also use the word-emoji embedding to calculate the emoji-word attention following Equation EQREF20 and EQREF21 , and the only difference is that we replace the attention-derived senti-emoji embedding with the pre-trained word-emoji embedding by fasttext, denoted as ATT-E-LSTM. LSTM with bi-sense emoji embedding (proposed): As we have introduced in Section SECREF13 , we propose two attention-based LSTM networks based on bi-sense emoji embedding, denoted as MATT-BiE-LSTM and WATT-BiE-LSTM. Evaluation We evaluate the baseline and proposed models on sentiment analysis by F1 scores and accuracies based on the auto-annotated testing set (AA-Sentiment) and human-annotated testing set (HA-Sentiment), as shown in Table TABREF25 . We only test the models after fine-tuning with a subset of the samples with human annotations because training exclusively on the samples with auto-generated weak labels results in relatively poor performances when tested with human annotated data indicating the models after fine-tuning are more robust. The F1 scores and accuracies are overall higher with the AA-Sentiment than the results with HA-sentiment, indicating that the HA-Sentiment is a more challenging task and the sentiments involved are more difficult to identify supported by their relatively lower sentiment scores returned from Vader. We still, however, observe competitive results from HA-Sentiment showing that the models are well-trained and robust to noisy labels with the help of fine-tuning with human annotated data. The T-LSTM baseline achieves decent performance in both experiments with accuracies of 86.6% and 70.7% showing that LSTM is an effective encoder for sentiment analysis as suggested by the references. The models with proposed bi-sense emoji embedding obtain accuracies over 82.4% and we observe improvements on the performance with the attention-based LSTM from our proposed model MATT-BiE-LSTM and WATT-BiE-LSTM, which is consistent with that ATT-E-LSTM ([email protected]%, [email protected]% on HA-Sentiment) outperforms significantly T-LSTM and E-LSTM. Emoji information is useful in sentiment analysis. Most models outperforms the baseline T-LSTM in both dataset suggesting that the emoji information is useful for sentiment analysis as a complement to the textual contents, even with the naive use of emoji embeddings (E-LSTM) when tested with HA-Sentiment. We observe that E-LSTM obtains similar performance to T-LSTM with AA-Sentiment but a significant gain over the T-LSTM when tested with HA-Sentiment indicating that sentiment information is helpful and necessary when the hidden sentiment is relatively subtle and the task is more challenging. Bi-sense emoji embedding helps. 
All the models using bi-sense emoji embedding perform significantly better than the baseline models without emoji feature or with word-emoji embedding. BiE-LSTM outperforms T-LSTM and E-LSTM significantly with the same utilization of emoji embedding indicates that the proposed bi-sense emoji embedding is capable of extracting more informative and distinguishable vectors over the use of conventional word embedding algorithms, which is consistent based on the comparisons between the proposed models (MATT-BiE-LSTM and WATT-BiE-LSTM) with bi-sense emoji embedding and the baseline model ATT-E-LSTM with word-emoji embedding and attention. Attention mechanism aligns and performs well with bi-sense embedding. MATT-BiE-LSTM and WATT-BiE-LSTM obtain similar performances when tested on both Vader and human annotated samples, though their ways of computing the attention (weights and vectors) are different that WATT computes attention weights and the senti-emoji embeddings guided by each word, and MATT obtains the senti-emoji embedding based on the LSTM encoder on the whole contexts and computes the attention weights of the senti-emoji embedding across all words. Both models outperforms the state-of-the-art baseline models including ATT-E-LSTM. The proposed attention-based LSTM can be further extended to handle tasks involving multi-sense embedding as inputs, such as the word-sense embedding in NLP, by using context-guide attention to self-select how much to attend on each sense of the embeddings each of which correspond to a distinct sense of semantics or sentiments. In this way we are able to take advantage of the more robust and fine-grained embeddings. Emojis and Sentiment Analysis With the overwhelming development of Internet of Things (IOT), the growing accessibility and popularity of subjective contents have provided new opportunities and challenges for sentiment analysis BIBREF33 . For example, social medias such as Twitter and Instagram have been explored because the massive user-generated contents with rich user sentiments BIBREF25 , BIBREF34 , BIBREF35 where emojis (and emoticons) are extensively used. Non-verbal cues of sentiment, such as emoticon which is considered as the previous generation of emoji, has been studied for their sentiment effect before emojis take over BIBREF36 , BIBREF37 , BIBREF38 . For instance, BIBREF36 , BIBREF38 pre-define sentiment labels to emoticons and construct a emoticon-sentiment dictionary. BIBREF37 applies emoticons for smoothing noisy sentiment labels. Similar work from BIBREF39 first considers emoji as a component in extracting the lexical feature for further sentiment analysis. BIBREF40 constructs an emoji sentiment ranking based on the occurrences of emojis and the human-annotated sentiments of the corresponding tweets where each emoji is assigned with a sentiment score from negative to positive , similar to the SentiWordNet BIBREF41 . However, the relatively intuitive use of emojis by lexical- and dictionary-based approaches lacks insightful understanding of the complexed semantics of emojis. Therefore, inspired by the success of word semantic embedding algorithms such as BIBREF28 , BIBREF16 , BIBREF7 obtains semantic embeddings of each emoji by averaging the words from its descriptions and shows it is effective to take advantage of the emoji embedding for the task of Twitter sentiment analysis. 
BIBREF8 proposes a convolutional neural network to predict the emoji occurrence and jointly learns the emoji embedding via a matching layer based on cosine similarities. Despite the growing popularity of Twitter sentiment analysis, there is a limited number of emoji datasets with sentiment labels available because previous studies usually filter out urls, emojis and sometimes emoticons. However, BIBREF9 shows that it is effective to extract sentiment information from emojis for emotion classification and sarcasm detection tasks in the absence of learning vector-based emoji representations by pre-training a deep neural network to predict the emoji occurrence. Methodology We propose two mechanisms, namely Word-guide Attention-based LSTM and Multi-level Attention-based LSTM, to take advantage of bi-sense emoji embedding for the sentiment analysis task. The frameworks of these two methods are shown in Figure FIGREF10 and Figure FIGREF19, respectively. Our workflow includes the following steps: initialization of bi-sense emoji embedding, generating senti-emoji embedding based on self-selected attention, and sentiment classification via the proposed attention-based LSTM networks. Bi-sense Embedding Recent research shows great success in word embedding tasks such as word2vec and fasttext BIBREF27, BIBREF16. We use fasttext to initialize emoji embeddings by considering each emoji as a special word, together with word embeddings. The catch is that, different from conventional approaches where each emoji corresponds to one embedding vector (which we call word-emoji embedding), we embed each emoji into two distinct vectors (bi-sense emoji embedding): we first assign two distinct tokens to each emoji, of which one is for the particular emoji used in positive sentimental contexts and the other one is for this emoji used in negative sentimental contexts (text sentiment initialized by Vader BIBREF17, details will be discussed in Section SECREF23), respectively; the same fasttext training process is then used to embed each token into a distinct vector, and we thus obtain the positive-sense and negative-sense embeddings for each emoji. Word2vec is based on the skip-gram model, whose objective is to maximize the log likelihood calculated by summing the probabilities of current word occurrences given a set of the surrounding words. The fasttext model differs by formatting the problem as a binary classification task to predict the occurrence of each context word, with negative samples being randomly selected from the absent context words. Given an input word sequence INLINEFORM0, and the context word set INLINEFORM1 and the set of negative word samples INLINEFORM2 of the current word INLINEFORM3, the objective function is obtained based on the binary logistic loss as in Equation EQREF12: DISPLAYFORM0 where INLINEFORM0 denotes the logistic loss of the score function INLINEFORM1, which is computed by summing up the scalar products between the n-gram embeddings of the current word and the context word embedding; this is different from word2vec, where the score is the scalar product between the current word embedding and the context word embedding. We select fasttext over word2vec mainly because of its computational efficiency. In general, the two models yield competitive performances, and the comparison between word embeddings is beyond the scope of our discussion. Therefore we only show the results derived by the fasttext initialization within the scope of this work.
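The bi-sense initialization described above can be sketched in a few lines: every emoji occurrence is replaced by a sentiment-specific token, chosen according to Vader's score on the emoji-free text, and fasttext is then trained on the retokenized corpus so that each emoji ends up with a positive-sense and a negative-sense vector. The sketch below assumes the vaderSentiment package and gensim 4.x, uses a toy emoji set, and omits the sentiment-score and frequency filtering described in Section SECREF23.

from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from gensim.models import FastText

analyzer = SentimentIntensityAnalyzer()
EMOJIS = {"\U0001F602", "\U0001F62D", "\U0001F60D"}   # toy emoji vocabulary

def bisense_tokens(tweet):
    """Map each emoji in a tweet to a positive- or negative-sense token."""
    text_only = "".join(ch for ch in tweet if ch not in EMOJIS)
    sense = "pos" if analyzer.polarity_scores(text_only)["compound"] >= 0 else "neg"
    tokens = []
    for tok in tweet.split():
        tokens.append(tok + "_" + sense if tok in EMOJIS else tok.lower())
    return tokens

corpus = [bisense_tokens(t) for t in
          ["great day \U0001F602", "worst service ever \U0001F602"]]
# skip-gram fasttext; each emoji now has two tokens and hence two embedding vectors
model = FastText(sentences=corpus, vector_size=100, sg=1, min_count=1)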
Word-guide Attention-based LSTM Long short-term memory (LSTM) units have been extensively used to encode textual contents. The basic encoder model consists of a text embedding layer, an LSTM layer, and fully-connected layers for further tasks such as text classification based on the encoded feature. The operations in an LSTM unit for time step INLINEFORM0 are formulated in Equation EQREF14: DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 represent the current and previous hidden states, INLINEFORM2 denotes the current LSTM input (here we use the embedding INLINEFORM3 of the current word INLINEFORM4), and INLINEFORM5 and INLINEFORM6 denote the weight matrices BIBREF42. In order to take advantage of the bi-sense emoji embedding, we modify the input layer of the LSTM units. We first obtain the senti-emoji embedding as a weighted average of the bi-sense emoji embedding based on the self-selected attention mechanism. Let INLINEFORM7 represent the INLINEFORM8 -th sense embedding of emoji INLINEFORM9 (INLINEFORM10 in our bi-sense embedding), and INLINEFORM11 denote the attention function conditioned on the current word embedding; the attention weight INLINEFORM12 and the senti-emoji embedding vector INLINEFORM13 are formulated as follows: DISPLAYFORM0 We choose a fully-connected layer with ReLU activation as the attention function, and the attention vector INLINEFORM0 is concatenated with the word embedding as the new input of the LSTM. Thus the input vector INLINEFORM1 in Equation EQREF14 becomes INLINEFORM2. The output of the final LSTM unit is then fed into a fully-connected layer with INLINEFORM3 activation to output the tweet sentiment, and the binary cross-entropy loss is used as the objective function (Equation EQREF16), where INLINEFORM4 is the total number of samples. The motivation behind this model is that each context word guides the attention weights in order to enforce the model to self-select which embedding sense it should attend to. Therefore we denote this model as the Word-guide Attention-based LSTM with Bi-sense emoji embedding (WATT-BiE-LSTM). DISPLAYFORM0 Multi-level Attention-based LSTM There is another way of formulating the attention mechanism where the attention weights indicate how the image information (which is the emoji in our case) is distributed through the context words, as proposed in BIBREF43, BIBREF44. The modified senti-emoji embedding vector INLINEFORM0 is thus at the tweet level instead of the word level in Equation EQREF15, obtained by replacing INLINEFORM1 with the final state vector INLINEFORM2 output from the last LSTM unit, as shown in Equation EQREF18: DISPLAYFORM0 The derived senti-emoji embedding INLINEFORM0 is then used to calculate an additional layer of attention following BIBREF43, BIBREF44. Given the input tweet sequence INLINEFORM1, the attention weight INLINEFORM2 conditioned on the senti-emoji embedding is formulated as follows: DISPLAYFORM0 Therefore, we construct the new input INLINEFORM0 to each LSTM unit by concatenating the original word embedding and the attention vector in Equation EQREF21 to distribute the senti-emoji information to each step. This model is called Multi-level Attention-based LSTM with Bi-sense Emoji Embedding (MATT-BiE-LSTM). We choose the same binary cross-entropy as the loss function, with the same network configuration as WATT-BiE-LSTM.
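A compact PyTorch sketch of the word-guide attention (WATT-BiE-LSTM) follows: for every word, a small ReLU network scores the two sense embeddings of the tweet's emoji, the softmax-weighted sum gives a word-specific senti-emoji embedding, and that vector is concatenated to the word embedding before the LSTM and the final sigmoid layer. The dimensions, the single-emoji-per-tweet simplification, and all names are our own illustrative choices, not the authors' implementation.

import torch
import torch.nn as nn

class WattBiELSTM(nn.Module):
    """Word-guided attention over bi-sense emoji embeddings + LSTM sentiment classifier."""
    def __init__(self, d_word=300, d_emoji=300, d_att=128, d_lstm=256):
        super().__init__()
        self.att = nn.Sequential(                      # attention function: FC + ReLU
            nn.Linear(d_word + d_emoji, d_att), nn.ReLU(),
            nn.Linear(d_att, 1))
        self.lstm = nn.LSTM(d_word + d_emoji, d_lstm, batch_first=True)
        self.out = nn.Linear(d_lstm, 1)

    def forward(self, words, emoji_senses):
        # words: (B, T, d_word); emoji_senses: (B, 2, d_emoji), the two sense vectors
        B, T, _ = words.shape
        w = words.unsqueeze(2).expand(-1, -1, 2, -1)            # (B, T, 2, d_word)
        e = emoji_senses.unsqueeze(1).expand(-1, T, -1, -1)     # (B, T, 2, d_emoji)
        scores = self.att(torch.cat([w, e], dim=-1)).squeeze(-1)  # (B, T, 2)
        alpha = torch.softmax(scores, dim=-1)                   # word-guided sense weights
        senti = (alpha.unsqueeze(-1) * e).sum(dim=2)            # (B, T, d_emoji) senti-emoji
        x = torch.cat([words, senti], dim=-1)                   # new LSTM input per word
        _, (h_n, _) = self.lstm(x)
        return torch.sigmoid(self.out(h_n[-1]))                 # tweet sentiment in (0, 1)

model = WattBiELSTM()
prediction = model(torch.randn(8, 20, 300), torch.randn(8, 2, 300))
print(prediction.shape)   # torch.Size([8, 1])

MATT-BiE-LSTM differs only in how the senti-emoji embedding is obtained (conditioned on the final LSTM state of the whole tweet rather than on each word) and in where the resulting attention is applied.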
DISPLAYFORM0 Data Collection and Annotation Data Collection We construct our own Twitter sentiment dataset by crawling tweets through the REST API which consists of 350,000 users and is magnitude larger comparing to previous work. We collect up to 3,200 tweets from each user and follow the standard tweet preprocessing procedures to remove the tweets without emojis and tweets containing less than ten words, and contents including the urls, mentions, and emails. Data Annotation For acquiring the sentiment annotations, we first use Vader which is a rule-based sentiment analysis algorithm BIBREF17 for text tweets only to generate weak sentiment labels. The algorithm outputs sentiment scores ranging from -1 (negative) to 1 (positive) with neutral in the middle. We consider the sentiment analysis as a binary classification problem (positive sentiment and negative sentiment), we filter out samples with weak prediction scores within INLINEFORM0 and keep the tweets with strong sentiment signals. Emoji occurrences are calculated separately for positive tweets and negative tweets, and threshold is set to 2,000 to further filter out emojis which are less frequently used in at least one type of sentimental text. In the end, we have constructed a dataset with 1,492,065 tweets and 55 frequently used emojis in total. For the tweets with an absolute sentiment score over 0.70, we keep the auto-generated sentiment label as ground truth because the automatic annotation is reliable with high sentiment scores. On the other hand, we select a subset of the tweets with absolute sentiment scores between INLINEFORM1 for manual labeling by randomly sampling, following the distribution of emoji occurrences where each tweet is labeled by two graduate students. Tweets are discarded if the two annotations disagree with each other or they are labeled as neutral. In the end, we have obtained 4,183 manually labeled tweets among which 60% are used for fine-tuning and 40% are used for testing purposes. The remainder of the tweets with automatic annotations are divided into three sets: 60% are used for pre-training the bi-sense and conventional emoji embedding, 10% for validation and 30% are for testing. We do not include a “neutral” class because it is difficult to obtain valid neutral samples. For auto-generated labels, the neutrals are the samples with low absolute confidence scores and their sentiments are more likely to be model failures other than “true neutrals”. Moreover, based on the human annotations, most of the tweets with emojis convey non-neutral sentiment and only few neutral samples are observed during the manual labeling which are excluded from the manually labeled subset. In order to valid our motivation that emojis are also extensively used in tweets that contain contradictory information to the emoji sentiments, we calculate the emoji usage in Table TABREF22 according to the sentiment labels where Pos-Ratio means the percentage of each emoji occurs in the positive tweets over its total number of occurrences, AA and HA indicate automatic-annotation and human-annotation, respectively. We present the top-10 most frequently used emojis in our dataset and observe a slight difference in the Pos-Ratios between AA and HA dataset because of the randomness involved in the sampling process. Results from both of the datasets show a fair amount of emoji use in both positive and negative tweets. 
For example, it is interesting to notice that emoji () occurs more in the positive tweets with the automatic annotations, while emojis with strong positive sentiment have also been used in negative tweets with about 5% occurrences, such as (, , and ). Given that the averaged positive ratio among all emojis in the whole dataset is about 74% and that most emojis have been extensively used in tweets containing both positive and negative sentiments, it suggests that distinguishing the emoji occurrences in both sentiments via bi-sense embedding is worth investigating. Additionally, we observe that the Pos-Ratios of the AA-sentiment and HA-sentiment show little difference, which is due to two main reasons: 1) Some tweets we sampled to construct the HA-sentiment are discarded because the annotators disagree, and we only keep the samples that we are confident about; 2) Tweets with absolute sentiment scores between (0.60,0.70) are selected for manual labeling as discussed in Section SECREF23, which are lower than the tweets used to construct the AA-sentiment (0.7 and above). The lower sentiment scores indicate that Vader is less reliable on the samples of the HA-sentiment dataset and that the sentiments of these tweets are more likely to be affected by emojis. Qualitative Analysis In order to obtain insights about why the more fine-grained bi-sense emoji embedding helps in understanding the complex sentiments behind tweets, we visualize the attention weights of ATT-E-LSTM and MATT-BiE-LSTM for comparison. The example tweets with corresponding attention weights calculated by the word-emoji embedding and the senti-emoji embedding are shown in Figure FIGREF27, where the contexts are presented in the captions. The emojis used are , , and , respectively. In Figure FIGREF27 (a), the ATT-E-LSTM model (baseline) assigns relatively more weight to the words “no” and “pressure”, while MATT-BiE-LSTM attends mostly to the words “happy” and “lovely”. The different attention distributions suggest that the proposed senti-emoji embedding is capable of recognizing words with strong sentiments that are closely related to the true sentiment even in the presence of words with conflicting sentiments, such as “pressure” and “happy”, while ATT-E-LSTM tends to pick up all sentimental words, which could raise confusion. The senti-emoji embedding is capable of extracting representations of complex semantics and sentiments which help guide the attention even in cases when the word sentiment and the emoji sentiment are somewhat contradictory to each other. From Figure FIGREF27 (b) and (c) we can observe that ATT-E-LSTM assigns more weight to sentiment-irrelevant words, such as “hoodies”, “wait” and “after”, than MATT-BiE-LSTM, indicating that the proposed model is more robust to irrelevant words and concentrates better on important words. Because of the senti-emoji embedding obtained through the bi-sense emoji embedding and the sentence-level LSTM encoding of the text input (described in Section SECREF13), we are able to construct a more robust embedding based on the semantic and sentiment information from the whole context, compared to the word-emoji embedding used in ATT-E-LSTM which takes only word-level information into account. Bi-sense Emoji Embedding Visualization To gain further insights into the bi-sense emoji embedding, we use t-SNE BIBREF47 to project the high-dimensional bi-sense embedding vectors into a two-dimensional space while preserving the relative distances between the embedding vectors.
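The projection step just mentioned can be reproduced with scikit-learn in a few lines; the sketch below uses random stand-ins for the 55 learned emoji vectors, since only the shape of the data matters for illustration.

import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# stand-ins for the trained bi-sense embeddings: 55 emojis x 100 dimensions per sense
rng = np.random.default_rng(0)
pos_vecs = rng.normal(size=(55, 100))
neg_vecs = rng.normal(size=(55, 100))

coords = TSNE(n_components=2, perplexity=15, random_state=0).fit_transform(
    np.vstack([pos_vecs, neg_vecs]))

plt.scatter(coords[:55, 0], coords[:55, 1], label="positive sense")
plt.scatter(coords[55:, 0], coords[55:, 1], label="negative sense")
plt.legend()
plt.title("t-SNE projection of bi-sense emoji embeddings")
plt.savefig("bisense_tsne.png")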
In Figure FIGREF28 we visualize the bi-sense emoji embedding, the positive-sense embedding, the negative-sense embedding, and the difference between the positive-sense and negative-sense embeddings of each emoji, respectively. The difference between an emoji's two sense embeddings indicates how its semantics shift between positive and negative sentimental contexts, similarly to how difference vectors are used with word embeddings BIBREF28 . The positive sense of some emojis ( and ) and the negative sense of others (, and ) are embedded far from the two main clusters in Figure FIGREF28 (a), suggesting that the semantics of these emojis differ from those of the other popular emojis. The positive-sense and negative-sense embeddings form well-separated clusters with no overlap. This observation supports our use of bi-sense emoji embedding: there are significant differences in the semantics of each emoji when it appears in positive versus negative sentimental contexts, so it is well motivated to model emoji usages separately according to the sentiment of the context in order to extract the more fine-grained bi-sense embedding. Additionally, we observe consistent patterns in Figure FIGREF28 (b), (c) and (d), where the sentiment conveyed by the emojis becomes an important factor. For example, emojis with positive sentiment such as (, and ), and emojis with negative sentiment such as (, and ), are each embedded into a single cluster in both the positive-sense and the negative-sense space. The embedding differences in Figure FIGREF28 (d) show that the way usage shifts across sentiments is similar among emojis and preserves the cluster patterns observed in Figure FIGREF28 (b) and (c). Conclusions In this paper, we present a novel approach to the task of sentiment analysis and achieve state-of-the-art performance. Different from previous work, our method combines a more robust and fine-grained bi-sense emoji embedding that effectively represents complex semantic and sentiment information with attention-based LSTM networks that selectively attend to the correlated sense of the emoji embeddings and seamlessly fuse the obtained senti-emoji embeddings with the word embeddings for a better understanding of the rich semantics and sentiments involved. In the future, we plan to extend our attention-based LSTM with bi-sense embedding framework to tasks involving multi-sense embeddings, such as the learning and application of word-sense embeddings. Acknowledgement We would like to thank New York State for its support through the Goergen Institute for Data Science, and NSF Award #1704309.
Yes
7ce213657f7ee792148988c5a3578b24cd2f9c62
7ce213657f7ee792148988c5a3578b24cd2f9c62_0
Q: What evidence does visualizing the attention give to show that it helps to obtain a more robust understanding of semantics and sentiments? Text: Introduction The rapid growth of social media platforms such as Twitter provides rich multimedia data in large scales for various research opportunities, such as sentiment analysis which focuses on automatically sentiment (positive and negative) prediction on given contents. Sentiment analysis has been widely used in real world applications by analyzing the online user-generated data, such as election prediction, opinion mining and business-related activity analysis. Emojis, which consist of various symbols ranging from cartoon facial expressions to figures such as flags and sports, are widely used in daily communications to express people's feelings . Since their first release in 2010, emojis have taken the place of emoticons (such as “:- INLINEFORM0 ” and “:-P”) BIBREF0 to create a new form of language for social media users BIBREF1 . According to recent science reports, there are 2,823 emojis in unicode standard in Emoji 11.0 , with over 50% of the Instagram posts containing one or more emojis BIBREF2 and 92% of the online population using emojis BIBREF3 . The extensive use of emojis has drawn a growing attention from researchers BIBREF4 , BIBREF5 because the emojis convey fruitful semantical and sentimental information to visually complement the textual information which is significantly useful in understanding the embedded emotional signals in texts BIBREF6 . For example, emoji embeddings have been proposed to understand the semantics behind the emojis BIBREF7 , BIBREF8 , and the embedding vectors can be used to visualize and predict emoji usages given their corresponding contexts. Previous work also shows that, it is useful to pre-train a deep neural network on an emoji prediction task with pre-trained emoji embeddings to learn the emotional signals of emojis for other tasks including sentiment, emotion and sarcasm prediction BIBREF9 . However, the previous literatures lack in considerations of the linguistic complexities and diversity of emoji. Therefore, previous emoji embedding methods fail to handle the situation when the semantics or sentiments of the learned emoji embeddings contradict the information from the corresponding contexts BIBREF5 , or when the emojis convey multiple senses of semantics and sentiments such as ( and ). In practice, emojis can either summarize and emphasis the original tune of their contexts, or express more complex semantics such as irony and sarcasm by being combined with contexts of contradictory semantics or sentiments. For the examples shown in Table TABREF3 , the emoji () is of consistent sentiment with text to emphasis the sentiment, but is of the opposite sentiment (positive) to the text sentiment (negative) example 3 and 4 to deliver a sense of sarcasm. Conventional emoji analysis can only extract single embedding of each emoji, and such embeddings will confuse the following sentiment analysis model by inconsistent sentiment signals from the input texts and emojis. Moreover, we consider the emoji effect modeling different from the conventional multimodal sentiment analysis which usually includes images and texts in that, image sentiment and text sentiment are usually assumed to be consistent BIBREF10 while it carries no such assumption for texts and emojis. 
To tackle such limitations, we propose a novel scheme that consists of an attention-based recurrent neural network (RNN) with robust bi-sense emoji embeddings. Inspired by the word sense embedding task in natural language processing (NLP) BIBREF11 , BIBREF12 , BIBREF13 where each sense of an ambiguous word responds to one unique embedding vector, the proposed bi-sense embedding is a more robust and fine-grained representation of the complicated semantics for emojis where each emoji is embedded into two distinct vectors, namely positive-sense and negative-sense vector, respectively. For our specific task which is Twitter sentiment analysis BIBREF14 , BIBREF15 , we initialize the bi-sense embedding vectors together with word embedding vectors using word embedding algorithm fasttext BIBREF16 by extracting two distinct embeddings for each emoji according to the sentiment of its corresponding textual contexts, namely bi-sense embedding. A long short-term memory (LSTM) based recurrent neural network is then used for predicting sentiments which is integrated with the pre-trained emoji embedding features by a context-guide and self-selected attention mechanism. Because most of the previous Twitter sentiment datasets exclude emojis and there exists little resource that contains sufficient emoji-tweets with sentiment labels, we construct our own emoji-tweets dataset by automatically generating weak labels using a rule-based sentiment analysis algorithm Vader BIBREF17 for pre-traning the networks, and manually labeling a subset of tweets for fine tuning and testing purposes. The experimental results demonstrate that the bi-sense emoji embedding is capable of extracting more distinguished information from emojis and outperforms the state-of-the-art sentiment analysis models with the proposed attention-based LSTM networks. We further visualize the bi-sense emoji embedding to obtain the sentiments and semantics learned by the proposed approach. The main contributions of this paper are summarized as follows. Sentiment Analysis Sentiment analysis is to extract and quantify subjective information including the status of attitudes, emotions and opinions from a variety of contents such as texts, images and audios BIBREF18 . Sentiment analysis has been drawing great attentions because of its wide applications in business and government intelligence, political science, sociology and psychology BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . From a technical perspective, textual sentiment analysis is first explored by researchers as an NLP task. Methods range from lexical-based approaches using features including keywords BIBREF23 , BIBREF24 where each word corresponds to a sentiment vector with entries representing the possibility of the word and each sentiment and phase-level features (n-grams and unigrams) BIBREF25 , BIBREF26 , to deep neural network based embedding approaches including skip-grams, continuous bag-of-words (CBoW) and skip-thoughts BIBREF27 , BIBREF28 , BIBREF16 , BIBREF29 . It was until recent years when researchers start focusing on image and multimodal sentiments BIBREF30 , BIBREF31 and analyzing how to take advantage of the cross-modality resources BIBREF10 , BIBREF32 . For multimodal sentiment analysis, an underlying assumption is that both modalities express similar sentiment and such similarity is enforced in order to train a robust sentiment inference model BIBREF10 . 
However, the same assumption does not stand in modeling textual tweets and emojis because the complexities of natural language exist extensively, such as the use of irony, jokes, sarcasm, etc. BIBREF9 . Models We set up the baselines and proposed models as follows: LSTM with text embedding: CNNs and LSTMs are widely used to encode textual contents for sentiment analysis in BIBREF45 , BIBREF46 and many online tutorials. Here we select the standard LSTM with pre-trained word embedding as input, and add one fully-connected layer with sigmoid activation top of the LSTM encoder (same as all other models), denoted as T-LSTM. LSTM with emoji embedding: We consider the emoji as one special word and input both pre-trained text and emoji embeddings into the same LSTM network, namely E-LSTM. Similarly, we concatenate the pre-trained bi-sense emoji embedding as one special word to feed into the LSTM network. This model is called BiE-LSTM. Attention-based LSTM with emojis:We also use the word-emoji embedding to calculate the emoji-word attention following Equation EQREF20 and EQREF21 , and the only difference is that we replace the attention-derived senti-emoji embedding with the pre-trained word-emoji embedding by fasttext, denoted as ATT-E-LSTM. LSTM with bi-sense emoji embedding (proposed): As we have introduced in Section SECREF13 , we propose two attention-based LSTM networks based on bi-sense emoji embedding, denoted as MATT-BiE-LSTM and WATT-BiE-LSTM. Evaluation We evaluate the baseline and proposed models on sentiment analysis by F1 scores and accuracies based on the auto-annotated testing set (AA-Sentiment) and human-annotated testing set (HA-Sentiment), as shown in Table TABREF25 . We only test the models after fine-tuning with a subset of the samples with human annotations because training exclusively on the samples with auto-generated weak labels results in relatively poor performances when tested with human annotated data indicating the models after fine-tuning are more robust. The F1 scores and accuracies are overall higher with the AA-Sentiment than the results with HA-sentiment, indicating that the HA-Sentiment is a more challenging task and the sentiments involved are more difficult to identify supported by their relatively lower sentiment scores returned from Vader. We still, however, observe competitive results from HA-Sentiment showing that the models are well-trained and robust to noisy labels with the help of fine-tuning with human annotated data. The T-LSTM baseline achieves decent performance in both experiments with accuracies of 86.6% and 70.7% showing that LSTM is an effective encoder for sentiment analysis as suggested by the references. The models with proposed bi-sense emoji embedding obtain accuracies over 82.4% and we observe improvements on the performance with the attention-based LSTM from our proposed model MATT-BiE-LSTM and WATT-BiE-LSTM, which is consistent with that ATT-E-LSTM ([email protected]%, [email protected]% on HA-Sentiment) outperforms significantly T-LSTM and E-LSTM. Emoji information is useful in sentiment analysis. Most models outperforms the baseline T-LSTM in both dataset suggesting that the emoji information is useful for sentiment analysis as a complement to the textual contents, even with the naive use of emoji embeddings (E-LSTM) when tested with HA-Sentiment. 
We observe that E-LSTM obtains similar performance to T-LSTM with AA-Sentiment but a significant gain over the T-LSTM when tested with HA-Sentiment indicating that sentiment information is helpful and necessary when the hidden sentiment is relatively subtle and the task is more challenging. Bi-sense emoji embedding helps. All the models using bi-sense emoji embedding perform significantly better than the baseline models without emoji feature or with word-emoji embedding. BiE-LSTM outperforms T-LSTM and E-LSTM significantly with the same utilization of emoji embedding indicates that the proposed bi-sense emoji embedding is capable of extracting more informative and distinguishable vectors over the use of conventional word embedding algorithms, which is consistent based on the comparisons between the proposed models (MATT-BiE-LSTM and WATT-BiE-LSTM) with bi-sense emoji embedding and the baseline model ATT-E-LSTM with word-emoji embedding and attention. Attention mechanism aligns and performs well with bi-sense embedding. MATT-BiE-LSTM and WATT-BiE-LSTM obtain similar performances when tested on both Vader and human annotated samples, though their ways of computing the attention (weights and vectors) are different that WATT computes attention weights and the senti-emoji embeddings guided by each word, and MATT obtains the senti-emoji embedding based on the LSTM encoder on the whole contexts and computes the attention weights of the senti-emoji embedding across all words. Both models outperforms the state-of-the-art baseline models including ATT-E-LSTM. The proposed attention-based LSTM can be further extended to handle tasks involving multi-sense embedding as inputs, such as the word-sense embedding in NLP, by using context-guide attention to self-select how much to attend on each sense of the embeddings each of which correspond to a distinct sense of semantics or sentiments. In this way we are able to take advantage of the more robust and fine-grained embeddings. Emojis and Sentiment Analysis With the overwhelming development of Internet of Things (IOT), the growing accessibility and popularity of subjective contents have provided new opportunities and challenges for sentiment analysis BIBREF33 . For example, social medias such as Twitter and Instagram have been explored because the massive user-generated contents with rich user sentiments BIBREF25 , BIBREF34 , BIBREF35 where emojis (and emoticons) are extensively used. Non-verbal cues of sentiment, such as emoticon which is considered as the previous generation of emoji, has been studied for their sentiment effect before emojis take over BIBREF36 , BIBREF37 , BIBREF38 . For instance, BIBREF36 , BIBREF38 pre-define sentiment labels to emoticons and construct a emoticon-sentiment dictionary. BIBREF37 applies emoticons for smoothing noisy sentiment labels. Similar work from BIBREF39 first considers emoji as a component in extracting the lexical feature for further sentiment analysis. BIBREF40 constructs an emoji sentiment ranking based on the occurrences of emojis and the human-annotated sentiments of the corresponding tweets where each emoji is assigned with a sentiment score from negative to positive , similar to the SentiWordNet BIBREF41 . However, the relatively intuitive use of emojis by lexical- and dictionary-based approaches lacks insightful understanding of the complexed semantics of emojis. 
Therefore, inspired by the success of word semantic embedding algorithms such as BIBREF28 , BIBREF16 , BIBREF7 obtains semantic embeddings of each emoji by averaging the words from its descriptions and shows it is effective to take advantage of the emoji embedding for the task of Twitter sentiment analysis. BIBREF8 proposes a convoluntional neural network to predict the emoji occurrence and jointly learns the emoji embedding via a matching layer based on cosine similarities. Despite the growing popularity of Twitter sentiment analysis, there is a limited number of emoji datasets with sentiment labels available because previous studies usually filter out urls, emojis and sometimes emoticons. However, BIBREF9 shows that it is effective to extract sentiment information from emojis for emotion classification and sarcasm detection tasks in the absence of learning vector-based emoji representations by pre-training a deep neural network to predict the emoji occurrence. Methodology We propose two mechanisms, namely Word-guide Attention-based LSTM and Multi-level Attention-based LSTM, to take advantage of bi-sense emoji embedding for the sentiment analysis task. The frameworks of these two methods are shown in Figure FIGREF10 and Figure FIGREF19 , respectively. Our workflow includes the following steps: initialization of bi-sense emoji embedding, generating senti-emoji embedding based on self-selected attention, and sentiment classification via the proposed attention-based LSTM networks. Bi-sense Embedding Recent research shows great success in word embedding task such as word2vec and fasttext BIBREF27 , BIBREF16 . We use fasttext to initialize emoji embeddings by considering each emoji as a special word, together with word embeddings. The catch is, different from conventional approaches where each emoji responds to one embedding vector (as we call word-emoji embedding), we embed each emoji into two distinct vectors (bi-sense emoji embedding): we first assign two distinct tokens to each emoji, of which one is for the particular emoji used in positive sentimental contexts and the other one is for this emoji used in negative sentimental contexts (text sentiment initialized by Vader BIBREF17 , details will be discussed in Section SECREF23 ), respectively; the same fasttext training process is used to embed each token into a distinct vector, and we thus obtain the positive-sense and negative-sense embeddings for each emoji. The word2vec is based on the skip-gram model whose objective is to maximize the log likelihood calculated by summing the probabilities of current word occurrences given a set of the surrounding words. The fasttext model is different by formatting the problem as a binary classification task to predict the occurrence of each context word, with negative samples being randomly selected from the absent context words. Given an input word sequence INLINEFORM0 , and the context word set INLINEFORM1 and the set of negative word samples INLINEFORM2 of the current word INLINEFORM3 , the objective function is obtained based on binary logistic loss as in Equation EQREF12 : DISPLAYFORM0 where INLINEFORM0 denotes the logistic loss of the score function INLINEFORM1 which is computed by summing up the scalar products between the n-gram embeddings of the current word and the context word embedding which is different from word2vec where the score is the scalar product between the current word and the context word embedding. We select fasttext over word2vec mainly because its computational efficiency. 
In general, the two models yield competitive performances and the comparison between word embeddings is beyond our discussion. Therefore we only show the results derived by the fasttext initialization within the scope of this work. Word-guide Attention-based LSTM Long short-term memory (LSTM) units have been extensively used to encode textual contents. The basic encoder model consists of a text embedding layer, LSTMs layer, and fully-connected layers for further tasks such as text classifications based on the encoded feature. The operations in an LSTM unit for time step INLINEFORM0 is formulated in Equation EQREF14 : DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 represent the current and previous hidden states, INLINEFORM2 denotes the current LSTM input and here we use the embedding INLINEFORM3 of the current word INLINEFORM4 , and INLINEFORM5 and INLINEFORM6 denote the weight matrices BIBREF42 . In order to take advantage of the bi-sense emoji embedding, we modify the input layer into the LSTM units. We first obtain the senti-emoji embedding as an weighted average of the bi-sense emoji embedding based on the self-selected attention mechanism. Let INLINEFORM7 represent the INLINEFORM8 -th sense embedding of emoji INLINEFORM9 ( INLINEFORM10 in our bi-sense embedding), and INLINEFORM11 denote the attention function conditioned on the current word embedding, the attention weight INLINEFORM12 and senti-emoji embedding vector INLINEFORM13 is formulated as follows: DISPLAYFORM0 We choose a fully-connected layer with ReLU activation as the attention function, and the attention vector INLINEFORM0 is concatenated with the word embedding as the new input of the LSTM. Thus the input vector INLINEFORM1 in Equation EQREF14 becomes INLINEFORM2 . The output of the final LSTM unit is then fed into a fully-connected layer with INLINEFORM3 activation to output the tweet sentiment and binary cross-entropy loss is used as the objection function (Equation EQREF16 ) where INLINEFORM4 is the total number of samples. The motivation behind this model is that each context word guides the attention weights in order to enforce the model to self-select which embedding sense it should attend on. Therefore we denote this model as the Word-guide Attention-based LSTM with Bi-sense emoji embedding (WATT-BiE-LSTM). DISPLAYFORM0 Multi-level Attention-based LSTM There is another way of formulating the attention mechanism where the attention weights indicate how the image information (which is emoji in our case) is distributed through the context words as proposed in BIBREF43 , BIBREF44 . The modified senti-emoji embedding vector INLINEFORM0 is thus at the tweet-level instead of the word-level in Equation EQREF15 by replacing the INLINEFORM1 with the final state vector INLINEFORM2 outputted from the last LSTM unit, as shown in Equation EQREF18 : DISPLAYFORM0 The derived senti-emoji embedding INLINEFORM0 is then used to calculate an additional layer of attention following BIBREF43 , BIBREF44 . Given the input tweet sequence INLINEFORM1 , the attention weight INLINEFORM2 conditioned on the senti-emoji embedding is formulated as follows: DISPLAYFORM0 Therefore, we construct the new input INLINEFORM0 to each LSTM unit by concatenating the original word embedding and the attention vector in Equation EQREF21 to distribute the senti-emoji information to each step. This model is called Multi-level Attention-based LSTM with Bi-sense Emoji Embedding (MATT-BiE-LSTM). 
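Both attention variants described above share the same core step: a guide vector (the current word embedding for WATT-BiE-LSTM, the final LSTM state for MATT-BiE-LSTM) scores the two sense vectors of the emoji, and the senti-emoji embedding is their weighted average. The PyTorch sketch below illustrates this shared step under assumed dimensions and with a single fully-connected ReLU layer as the attention function; it is one reading of Equations EQREF15 and EQREF18, not the authors' implementation.

```python
# Sketch of the self-selected attention over bi-sense emoji embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentiEmojiAttention(nn.Module):
    def __init__(self, guide_dim=300, emoji_dim=300):
        super().__init__()
        self.att = nn.Linear(guide_dim + emoji_dim, 1)   # f(guide, sense_j)

    def forward(self, guide, sense_embs):
        # guide:      (batch, guide_dim)      word embedding or final LSTM state
        # sense_embs: (batch, 2, emoji_dim)   [positive-sense, negative-sense]
        g = guide.unsqueeze(1).expand(-1, sense_embs.size(1), -1)
        scores = F.relu(self.att(torch.cat([g, sense_embs], dim=-1))).squeeze(-1)
        alpha = torch.softmax(scores, dim=-1)            # attention weights
        senti_emoji = (alpha.unsqueeze(-1) * sense_embs).sum(dim=1)
        return senti_emoji, alpha

att = SentiEmojiAttention()
senti, alpha = att(torch.randn(4, 300), torch.randn(4, 2, 300))
```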
We choose the same binary cross-entropy as the loss function with the same network configuration with WATT-BiE-LSTM. DISPLAYFORM0 Data Collection and Annotation Data Collection We construct our own Twitter sentiment dataset by crawling tweets through the REST API which consists of 350,000 users and is magnitude larger comparing to previous work. We collect up to 3,200 tweets from each user and follow the standard tweet preprocessing procedures to remove the tweets without emojis and tweets containing less than ten words, and contents including the urls, mentions, and emails. Data Annotation For acquiring the sentiment annotations, we first use Vader which is a rule-based sentiment analysis algorithm BIBREF17 for text tweets only to generate weak sentiment labels. The algorithm outputs sentiment scores ranging from -1 (negative) to 1 (positive) with neutral in the middle. We consider the sentiment analysis as a binary classification problem (positive sentiment and negative sentiment), we filter out samples with weak prediction scores within INLINEFORM0 and keep the tweets with strong sentiment signals. Emoji occurrences are calculated separately for positive tweets and negative tweets, and threshold is set to 2,000 to further filter out emojis which are less frequently used in at least one type of sentimental text. In the end, we have constructed a dataset with 1,492,065 tweets and 55 frequently used emojis in total. For the tweets with an absolute sentiment score over 0.70, we keep the auto-generated sentiment label as ground truth because the automatic annotation is reliable with high sentiment scores. On the other hand, we select a subset of the tweets with absolute sentiment scores between INLINEFORM1 for manual labeling by randomly sampling, following the distribution of emoji occurrences where each tweet is labeled by two graduate students. Tweets are discarded if the two annotations disagree with each other or they are labeled as neutral. In the end, we have obtained 4,183 manually labeled tweets among which 60% are used for fine-tuning and 40% are used for testing purposes. The remainder of the tweets with automatic annotations are divided into three sets: 60% are used for pre-training the bi-sense and conventional emoji embedding, 10% for validation and 30% are for testing. We do not include a “neutral” class because it is difficult to obtain valid neutral samples. For auto-generated labels, the neutrals are the samples with low absolute confidence scores and their sentiments are more likely to be model failures other than “true neutrals”. Moreover, based on the human annotations, most of the tweets with emojis convey non-neutral sentiment and only few neutral samples are observed during the manual labeling which are excluded from the manually labeled subset. In order to valid our motivation that emojis are also extensively used in tweets that contain contradictory information to the emoji sentiments, we calculate the emoji usage in Table TABREF22 according to the sentiment labels where Pos-Ratio means the percentage of each emoji occurs in the positive tweets over its total number of occurrences, AA and HA indicate automatic-annotation and human-annotation, respectively. We present the top-10 most frequently used emojis in our dataset and observe a slight difference in the Pos-Ratios between AA and HA dataset because of the randomness involved in the sampling process. Results from both of the datasets show a fair amount of emoji use in both positive and negative tweets. 
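The Pos-Ratio statistic and the 2,000-occurrence filter described above can be computed in a few lines of Python; the sketch below assumes a hypothetical list of (emoji_list, label) pairs and counts each emoji once per tweet, which is one possible counting convention rather than necessarily the one behind Table TABREF22.

```python
# Sketch of the Pos-Ratio computation and frequency filter (illustrative).
from collections import Counter

def pos_ratios(tweets, min_count=2000):
    """tweets: iterable of (emojis, label) pairs with label in {'pos', 'neg'}."""
    pos, neg = Counter(), Counter()
    for emojis, label in tweets:
        for e in set(emojis):                  # one count per tweet (assumption)
            (pos if label == "pos" else neg)[e] += 1
    kept = [e for e in pos.keys() | neg.keys()
            if pos[e] >= min_count and neg[e] >= min_count]
    return {e: pos[e] / (pos[e] + neg[e]) for e in kept}
```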
For example, it is interesting to notice that emoji () occurs more in the positive tweets in with the automatic annotations, while emojis with strong positive sentiment have also been used in negative tweets with about 5% occurrences, such as (, , and ). Given the averaged positive ratio among all emojis in the whole dataset is about 74% and that most emojis have been extensively used in tweets containing both positive and negative sentiments, it suggests that distinguishing the emoji occurrences in both sentiments via bi-sense embedding is worth investigating. Additionally, we observe the Pos-Ratios of the AA-sentiment and HA-sentiment have little differences which are due to two main reasons: 1) Some tweets we sampled to construct the HA-sentiment are discarded because the annotators have disagreements and we only keep the samples that we are confident about; 2) Tweets with absolute sentiment scores between (0.60,0.70) are selected for manual labeling as discussed in Section SECREF23 , which are lower than the tweets used to construct the AA-sentiment (0.7 and above). The lower sentiment scores indicate that Vader is less reliable on the samples of HA-sentiment dataset and the sentiments of these tweets are more likely to be affected by emojis. Qualitative Analysis In order to obtain insights about why the more fine-grained bi-sense emoji embedding helps in understanding the complexed sentiments behind tweets, we visualize the attention weights for ATT-E-LSTM and MATT-BiE-LSTM for comparison. The example tweets with corresponding attention weights calculated by word-emoji embedding and senti-emoji embedding are shown in Figure FIGREF27 , where the contexts are presented in the captions. The emojis used are , , and , respectively. In Figure FIGREF27 (a), the ATT-E-LSTM model (baseline) assigns relatively more weights on the word “no” and “pressure”, while MATT-BiE-LSTM attends mostly on the word “happy” and “lovely”. The different attention distributions suggest that the proposed senti-emoji embedding is capable of recognizing words with strong sentiments that are closely related to the true sentiment even with the presence of words with conflicting sentiments, such as “pressure” and “happy”. while ATT-E-LSTM tends to pick up all sentimental words which could raise confusions. The senti-emoji embedding is capable of extracting representations of complexed semantics and sentiments which help guide the attentions even in cases when the word sentiment and emoji sentiment are somewhat contradictory to each other. From Figure FIGREF27 (b) and (c) we can observe that the ATT-E-LSTM assigns more weights on the sentiment-irrelevant words than the MATT-BiE-LSTM such as “hoodies”, “wait” and “after”, indicating that the proposed model is more robust to irrelevant words and concentrates better on important words. Because of the senti-emoji embedding obtained through bi-sense emoji embedding and the sentence-level LSTM encoding on the text input (described in Section SECREF13 ), we are able to construct a more robust embedding based on the semantic and sentiment information from the whole context compared to the word-emoji embedding used in ATT-E-LSTM which takes only word-level information into account. Bi-sense Emoji Embedding Visualization To gain further insights on the bi-sense emoji embedding, we use t-SNE BIBREF47 to project high-dimensional bi-sense embedding vectors into a two-dimensional space and preserving the relative distances between the embedding vectors at the same time. 
In Figure FIGREF28 we visualize the bi-sense emoji embedding, positive-sense embedding, negative-sense embedding and the subtraction between positive and negative sense embeddings of each emoji, respectively. The subtraction of an emoji between its two sense embeddings indicates the semantic differences between emoji usages in positive and negative sentimental contexts, similarly to the objective of word embeddings BIBREF28 . The positive-sense of emoji ( and ), and the negative-sense of emoji (, and ) are embedded far from the two main clusters as observed in Figure FIGREF28 (a), suggesting that the semantics of these emojis are different from the other popular emojis. The positive-sense embedding and negative-sense embeddings are clustered well with no intersection with each other. Such observation supports our objective of applying bi-sense emoji embedding because there exist such significant differences in the semantics of each emoji when appears in positive and negative sentimental contexts, and it is well-motivated to consider the emoji usages individually according to the sentiment of the contexts to extract the more fine-grained bi-sense embedding. Additionally, we observe consistent patterns in the Figure FIGREF28 (b), (c) and (d) where the sentiments conveyed in the emojis become an important factor. For example, emojis with positive sentiments such as (, and ), and emojis with negative sentiment such as (, and ) are embedded into one clusters in both positive-sense and negative-sense space. The embedding subtractions of emojis in Figure FIGREF28 (d) shows the different usages of emojis across sentiments are similar between emojis and preserve the cluster patterns observed in Figure FIGREF28 (b) and (c). Conclusions In this paper, we present a novel approach to the task of sentiment analysis and achieve the state-of-the-art performance. Different from the previous work, our method combines a more robust and fine-grained bi-sense emoji embedding that effectively represents complex semantic and sentiment information, with attention-based LSTM networks that selectively attend on the correlated sense of the emoji embeddings, and seamlessly fuse the obtained senti-emoji embeddings with the word embeddings for a better understanding of the rich semantics and sentiments involved. In the future, we plan to further extend our attention-based LSTM with bi-embedding work frame to tackle tasks involving multi-sense embedding such as the learning and applications of word-sense embedding. Acknowledgement We would like to thank the support of New York State through the Goergen Institute for Data Science, and NSF Award #1704309.
The different attention distributions suggest that the proposed senti-emoji embedding is capable of recognizing words with strong sentiments that are closely related to the true sentiment even with the presence of words with conflicting sentiments
89ce18ee52c52a78b38c49b14574407b7ea2fb02
89ce18ee52c52a78b38c49b14574407b7ea2fb02_0
Q: Which SOTA models are outperformed? Text: Introduction The rapid growth of social media platforms such as Twitter provides rich multimedia data in large scales for various research opportunities, such as sentiment analysis which focuses on automatically sentiment (positive and negative) prediction on given contents. Sentiment analysis has been widely used in real world applications by analyzing the online user-generated data, such as election prediction, opinion mining and business-related activity analysis. Emojis, which consist of various symbols ranging from cartoon facial expressions to figures such as flags and sports, are widely used in daily communications to express people's feelings . Since their first release in 2010, emojis have taken the place of emoticons (such as “:- INLINEFORM0 ” and “:-P”) BIBREF0 to create a new form of language for social media users BIBREF1 . According to recent science reports, there are 2,823 emojis in unicode standard in Emoji 11.0 , with over 50% of the Instagram posts containing one or more emojis BIBREF2 and 92% of the online population using emojis BIBREF3 . The extensive use of emojis has drawn a growing attention from researchers BIBREF4 , BIBREF5 because the emojis convey fruitful semantical and sentimental information to visually complement the textual information which is significantly useful in understanding the embedded emotional signals in texts BIBREF6 . For example, emoji embeddings have been proposed to understand the semantics behind the emojis BIBREF7 , BIBREF8 , and the embedding vectors can be used to visualize and predict emoji usages given their corresponding contexts. Previous work also shows that, it is useful to pre-train a deep neural network on an emoji prediction task with pre-trained emoji embeddings to learn the emotional signals of emojis for other tasks including sentiment, emotion and sarcasm prediction BIBREF9 . However, the previous literatures lack in considerations of the linguistic complexities and diversity of emoji. Therefore, previous emoji embedding methods fail to handle the situation when the semantics or sentiments of the learned emoji embeddings contradict the information from the corresponding contexts BIBREF5 , or when the emojis convey multiple senses of semantics and sentiments such as ( and ). In practice, emojis can either summarize and emphasis the original tune of their contexts, or express more complex semantics such as irony and sarcasm by being combined with contexts of contradictory semantics or sentiments. For the examples shown in Table TABREF3 , the emoji () is of consistent sentiment with text to emphasis the sentiment, but is of the opposite sentiment (positive) to the text sentiment (negative) example 3 and 4 to deliver a sense of sarcasm. Conventional emoji analysis can only extract single embedding of each emoji, and such embeddings will confuse the following sentiment analysis model by inconsistent sentiment signals from the input texts and emojis. Moreover, we consider the emoji effect modeling different from the conventional multimodal sentiment analysis which usually includes images and texts in that, image sentiment and text sentiment are usually assumed to be consistent BIBREF10 while it carries no such assumption for texts and emojis. To tackle such limitations, we propose a novel scheme that consists of an attention-based recurrent neural network (RNN) with robust bi-sense emoji embeddings. 
Inspired by the word sense embedding task in natural language processing (NLP) BIBREF11 , BIBREF12 , BIBREF13 where each sense of an ambiguous word responds to one unique embedding vector, the proposed bi-sense embedding is a more robust and fine-grained representation of the complicated semantics for emojis where each emoji is embedded into two distinct vectors, namely positive-sense and negative-sense vector, respectively. For our specific task which is Twitter sentiment analysis BIBREF14 , BIBREF15 , we initialize the bi-sense embedding vectors together with word embedding vectors using word embedding algorithm fasttext BIBREF16 by extracting two distinct embeddings for each emoji according to the sentiment of its corresponding textual contexts, namely bi-sense embedding. A long short-term memory (LSTM) based recurrent neural network is then used for predicting sentiments which is integrated with the pre-trained emoji embedding features by a context-guide and self-selected attention mechanism. Because most of the previous Twitter sentiment datasets exclude emojis and there exists little resource that contains sufficient emoji-tweets with sentiment labels, we construct our own emoji-tweets dataset by automatically generating weak labels using a rule-based sentiment analysis algorithm Vader BIBREF17 for pre-traning the networks, and manually labeling a subset of tweets for fine tuning and testing purposes. The experimental results demonstrate that the bi-sense emoji embedding is capable of extracting more distinguished information from emojis and outperforms the state-of-the-art sentiment analysis models with the proposed attention-based LSTM networks. We further visualize the bi-sense emoji embedding to obtain the sentiments and semantics learned by the proposed approach. The main contributions of this paper are summarized as follows. Sentiment Analysis Sentiment analysis is to extract and quantify subjective information including the status of attitudes, emotions and opinions from a variety of contents such as texts, images and audios BIBREF18 . Sentiment analysis has been drawing great attentions because of its wide applications in business and government intelligence, political science, sociology and psychology BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . From a technical perspective, textual sentiment analysis is first explored by researchers as an NLP task. Methods range from lexical-based approaches using features including keywords BIBREF23 , BIBREF24 where each word corresponds to a sentiment vector with entries representing the possibility of the word and each sentiment and phase-level features (n-grams and unigrams) BIBREF25 , BIBREF26 , to deep neural network based embedding approaches including skip-grams, continuous bag-of-words (CBoW) and skip-thoughts BIBREF27 , BIBREF28 , BIBREF16 , BIBREF29 . It was until recent years when researchers start focusing on image and multimodal sentiments BIBREF30 , BIBREF31 and analyzing how to take advantage of the cross-modality resources BIBREF10 , BIBREF32 . For multimodal sentiment analysis, an underlying assumption is that both modalities express similar sentiment and such similarity is enforced in order to train a robust sentiment inference model BIBREF10 . However, the same assumption does not stand in modeling textual tweets and emojis because the complexities of natural language exist extensively, such as the use of irony, jokes, sarcasm, etc. BIBREF9 . 
Models We set up the baselines and proposed models as follows: LSTM with text embedding: CNNs and LSTMs are widely used to encode textual contents for sentiment analysis in BIBREF45 , BIBREF46 and many online tutorials. Here we select the standard LSTM with pre-trained word embedding as input, and add one fully-connected layer with sigmoid activation top of the LSTM encoder (same as all other models), denoted as T-LSTM. LSTM with emoji embedding: We consider the emoji as one special word and input both pre-trained text and emoji embeddings into the same LSTM network, namely E-LSTM. Similarly, we concatenate the pre-trained bi-sense emoji embedding as one special word to feed into the LSTM network. This model is called BiE-LSTM. Attention-based LSTM with emojis:We also use the word-emoji embedding to calculate the emoji-word attention following Equation EQREF20 and EQREF21 , and the only difference is that we replace the attention-derived senti-emoji embedding with the pre-trained word-emoji embedding by fasttext, denoted as ATT-E-LSTM. LSTM with bi-sense emoji embedding (proposed): As we have introduced in Section SECREF13 , we propose two attention-based LSTM networks based on bi-sense emoji embedding, denoted as MATT-BiE-LSTM and WATT-BiE-LSTM. Evaluation We evaluate the baseline and proposed models on sentiment analysis by F1 scores and accuracies based on the auto-annotated testing set (AA-Sentiment) and human-annotated testing set (HA-Sentiment), as shown in Table TABREF25 . We only test the models after fine-tuning with a subset of the samples with human annotations because training exclusively on the samples with auto-generated weak labels results in relatively poor performances when tested with human annotated data indicating the models after fine-tuning are more robust. The F1 scores and accuracies are overall higher with the AA-Sentiment than the results with HA-sentiment, indicating that the HA-Sentiment is a more challenging task and the sentiments involved are more difficult to identify supported by their relatively lower sentiment scores returned from Vader. We still, however, observe competitive results from HA-Sentiment showing that the models are well-trained and robust to noisy labels with the help of fine-tuning with human annotated data. The T-LSTM baseline achieves decent performance in both experiments with accuracies of 86.6% and 70.7% showing that LSTM is an effective encoder for sentiment analysis as suggested by the references. The models with proposed bi-sense emoji embedding obtain accuracies over 82.4% and we observe improvements on the performance with the attention-based LSTM from our proposed model MATT-BiE-LSTM and WATT-BiE-LSTM, which is consistent with that ATT-E-LSTM ([email protected]%, [email protected]% on HA-Sentiment) outperforms significantly T-LSTM and E-LSTM. Emoji information is useful in sentiment analysis. Most models outperforms the baseline T-LSTM in both dataset suggesting that the emoji information is useful for sentiment analysis as a complement to the textual contents, even with the naive use of emoji embeddings (E-LSTM) when tested with HA-Sentiment. We observe that E-LSTM obtains similar performance to T-LSTM with AA-Sentiment but a significant gain over the T-LSTM when tested with HA-Sentiment indicating that sentiment information is helpful and necessary when the hidden sentiment is relatively subtle and the task is more challenging. Bi-sense emoji embedding helps. 
All the models using bi-sense emoji embedding perform significantly better than the baseline models without emoji feature or with word-emoji embedding. BiE-LSTM outperforms T-LSTM and E-LSTM significantly with the same utilization of emoji embedding indicates that the proposed bi-sense emoji embedding is capable of extracting more informative and distinguishable vectors over the use of conventional word embedding algorithms, which is consistent based on the comparisons between the proposed models (MATT-BiE-LSTM and WATT-BiE-LSTM) with bi-sense emoji embedding and the baseline model ATT-E-LSTM with word-emoji embedding and attention. Attention mechanism aligns and performs well with bi-sense embedding. MATT-BiE-LSTM and WATT-BiE-LSTM obtain similar performances when tested on both Vader and human annotated samples, though their ways of computing the attention (weights and vectors) are different that WATT computes attention weights and the senti-emoji embeddings guided by each word, and MATT obtains the senti-emoji embedding based on the LSTM encoder on the whole contexts and computes the attention weights of the senti-emoji embedding across all words. Both models outperforms the state-of-the-art baseline models including ATT-E-LSTM. The proposed attention-based LSTM can be further extended to handle tasks involving multi-sense embedding as inputs, such as the word-sense embedding in NLP, by using context-guide attention to self-select how much to attend on each sense of the embeddings each of which correspond to a distinct sense of semantics or sentiments. In this way we are able to take advantage of the more robust and fine-grained embeddings. Emojis and Sentiment Analysis With the overwhelming development of Internet of Things (IOT), the growing accessibility and popularity of subjective contents have provided new opportunities and challenges for sentiment analysis BIBREF33 . For example, social medias such as Twitter and Instagram have been explored because the massive user-generated contents with rich user sentiments BIBREF25 , BIBREF34 , BIBREF35 where emojis (and emoticons) are extensively used. Non-verbal cues of sentiment, such as emoticon which is considered as the previous generation of emoji, has been studied for their sentiment effect before emojis take over BIBREF36 , BIBREF37 , BIBREF38 . For instance, BIBREF36 , BIBREF38 pre-define sentiment labels to emoticons and construct a emoticon-sentiment dictionary. BIBREF37 applies emoticons for smoothing noisy sentiment labels. Similar work from BIBREF39 first considers emoji as a component in extracting the lexical feature for further sentiment analysis. BIBREF40 constructs an emoji sentiment ranking based on the occurrences of emojis and the human-annotated sentiments of the corresponding tweets where each emoji is assigned with a sentiment score from negative to positive , similar to the SentiWordNet BIBREF41 . However, the relatively intuitive use of emojis by lexical- and dictionary-based approaches lacks insightful understanding of the complexed semantics of emojis. Therefore, inspired by the success of word semantic embedding algorithms such as BIBREF28 , BIBREF16 , BIBREF7 obtains semantic embeddings of each emoji by averaging the words from its descriptions and shows it is effective to take advantage of the emoji embedding for the task of Twitter sentiment analysis. 
BIBREF8 proposes a convoluntional neural network to predict the emoji occurrence and jointly learns the emoji embedding via a matching layer based on cosine similarities. Despite the growing popularity of Twitter sentiment analysis, there is a limited number of emoji datasets with sentiment labels available because previous studies usually filter out urls, emojis and sometimes emoticons. However, BIBREF9 shows that it is effective to extract sentiment information from emojis for emotion classification and sarcasm detection tasks in the absence of learning vector-based emoji representations by pre-training a deep neural network to predict the emoji occurrence. Methodology We propose two mechanisms, namely Word-guide Attention-based LSTM and Multi-level Attention-based LSTM, to take advantage of bi-sense emoji embedding for the sentiment analysis task. The frameworks of these two methods are shown in Figure FIGREF10 and Figure FIGREF19 , respectively. Our workflow includes the following steps: initialization of bi-sense emoji embedding, generating senti-emoji embedding based on self-selected attention, and sentiment classification via the proposed attention-based LSTM networks. Bi-sense Embedding Recent research shows great success in word embedding task such as word2vec and fasttext BIBREF27 , BIBREF16 . We use fasttext to initialize emoji embeddings by considering each emoji as a special word, together with word embeddings. The catch is, different from conventional approaches where each emoji responds to one embedding vector (as we call word-emoji embedding), we embed each emoji into two distinct vectors (bi-sense emoji embedding): we first assign two distinct tokens to each emoji, of which one is for the particular emoji used in positive sentimental contexts and the other one is for this emoji used in negative sentimental contexts (text sentiment initialized by Vader BIBREF17 , details will be discussed in Section SECREF23 ), respectively; the same fasttext training process is used to embed each token into a distinct vector, and we thus obtain the positive-sense and negative-sense embeddings for each emoji. The word2vec is based on the skip-gram model whose objective is to maximize the log likelihood calculated by summing the probabilities of current word occurrences given a set of the surrounding words. The fasttext model is different by formatting the problem as a binary classification task to predict the occurrence of each context word, with negative samples being randomly selected from the absent context words. Given an input word sequence INLINEFORM0 , and the context word set INLINEFORM1 and the set of negative word samples INLINEFORM2 of the current word INLINEFORM3 , the objective function is obtained based on binary logistic loss as in Equation EQREF12 : DISPLAYFORM0 where INLINEFORM0 denotes the logistic loss of the score function INLINEFORM1 which is computed by summing up the scalar products between the n-gram embeddings of the current word and the context word embedding which is different from word2vec where the score is the scalar product between the current word and the context word embedding. We select fasttext over word2vec mainly because its computational efficiency. In general, the two models yield competitive performances and the comparison between word embeddings is beyond our discussion. Therefore we only show the results derived by the fasttext initialization within the scope of this work. 
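As a rough illustration of the bi-sense initialization described above, the sketch below tags each emoji with a sentiment-specific token and trains a fasttext-style model; it uses gensim's FastText as a stand-in for the fasttext toolkit, the toy corpus and the is_emoji check are hypothetical, and the hyperparameters are illustrative rather than those used in the paper.

```python
# Sketch: bi-sense token preparation + fasttext-style embedding training.
from gensim.models import FastText

EMOJI_SET = {"😂"}                               # tiny illustrative emoji set
def is_emoji(token):
    return token in EMOJI_SET

def tag_emojis(tokens, label):
    """Give each emoji a sentiment-specific token, e.g. '😂' -> '😂_POS'."""
    suffix = "_POS" if label == "pos" else "_NEG"
    return [t + suffix if is_emoji(t) else t for t in tokens]

# toy weakly-labeled corpus, just to make the sketch runnable
weak_tweets = [(["what", "a", "great", "day", "😂"], "pos"),
               (["this", "is", "awful", "😂"], "neg")] * 100
corpus = [tag_emojis(toks, lab) for toks, lab in weak_tweets]

model = FastText(sentences=corpus, vector_size=100, window=5,
                 min_count=5, sg=1, negative=5)

pos_vec = model.wv["😂_POS"]    # positive-sense embedding of the emoji
neg_vec = model.wv["😂_NEG"]    # negative-sense embedding of the same emoji
```

After training, each emoji contributes two rows to the embedding table, which is exactly the form consumed by the attention modules described in the next subsections.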
Word-guide Attention-based LSTM Long short-term memory (LSTM) units have been extensively used to encode textual contents. The basic encoder model consists of a text embedding layer, LSTMs layer, and fully-connected layers for further tasks such as text classifications based on the encoded feature. The operations in an LSTM unit for time step INLINEFORM0 is formulated in Equation EQREF14 : DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 represent the current and previous hidden states, INLINEFORM2 denotes the current LSTM input and here we use the embedding INLINEFORM3 of the current word INLINEFORM4 , and INLINEFORM5 and INLINEFORM6 denote the weight matrices BIBREF42 . In order to take advantage of the bi-sense emoji embedding, we modify the input layer into the LSTM units. We first obtain the senti-emoji embedding as an weighted average of the bi-sense emoji embedding based on the self-selected attention mechanism. Let INLINEFORM7 represent the INLINEFORM8 -th sense embedding of emoji INLINEFORM9 ( INLINEFORM10 in our bi-sense embedding), and INLINEFORM11 denote the attention function conditioned on the current word embedding, the attention weight INLINEFORM12 and senti-emoji embedding vector INLINEFORM13 is formulated as follows: DISPLAYFORM0 We choose a fully-connected layer with ReLU activation as the attention function, and the attention vector INLINEFORM0 is concatenated with the word embedding as the new input of the LSTM. Thus the input vector INLINEFORM1 in Equation EQREF14 becomes INLINEFORM2 . The output of the final LSTM unit is then fed into a fully-connected layer with INLINEFORM3 activation to output the tweet sentiment and binary cross-entropy loss is used as the objection function (Equation EQREF16 ) where INLINEFORM4 is the total number of samples. The motivation behind this model is that each context word guides the attention weights in order to enforce the model to self-select which embedding sense it should attend on. Therefore we denote this model as the Word-guide Attention-based LSTM with Bi-sense emoji embedding (WATT-BiE-LSTM). DISPLAYFORM0 Multi-level Attention-based LSTM There is another way of formulating the attention mechanism where the attention weights indicate how the image information (which is emoji in our case) is distributed through the context words as proposed in BIBREF43 , BIBREF44 . The modified senti-emoji embedding vector INLINEFORM0 is thus at the tweet-level instead of the word-level in Equation EQREF15 by replacing the INLINEFORM1 with the final state vector INLINEFORM2 outputted from the last LSTM unit, as shown in Equation EQREF18 : DISPLAYFORM0 The derived senti-emoji embedding INLINEFORM0 is then used to calculate an additional layer of attention following BIBREF43 , BIBREF44 . Given the input tweet sequence INLINEFORM1 , the attention weight INLINEFORM2 conditioned on the senti-emoji embedding is formulated as follows: DISPLAYFORM0 Therefore, we construct the new input INLINEFORM0 to each LSTM unit by concatenating the original word embedding and the attention vector in Equation EQREF21 to distribute the senti-emoji information to each step. This model is called Multi-level Attention-based LSTM with Bi-sense Emoji Embedding (MATT-BiE-LSTM). We choose the same binary cross-entropy as the loss function with the same network configuration with WATT-BiE-LSTM. 
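Building on the shared attention step sketched earlier, the following rough PyTorch sketch shows how the tweet-level senti-emoji embedding could be distributed over the words and fed to an LSTM classifier trained with binary cross-entropy, in the spirit of MATT-BiE-LSTM; the dimensions, the form of the per-word attention vector, and the single-layer attention function are assumptions drawn from the description above, not the authors' code.

```python
# Rough sketch of a multi-level attention classifier (MATT-BiE-LSTM-like).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelAttentionClassifier(nn.Module):
    def __init__(self, word_dim=300, emoji_dim=300, hidden=256):
        super().__init__()
        self.att = nn.Linear(word_dim + emoji_dim, 1)
        self.lstm = nn.LSTM(word_dim + emoji_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, word_embs, senti_emoji):
        # word_embs:   (batch, seq_len, word_dim)
        # senti_emoji: (batch, emoji_dim), tweet-level senti-emoji embedding
        e = senti_emoji.unsqueeze(1).expand(-1, word_embs.size(1), -1)
        scores = F.relu(self.att(torch.cat([word_embs, e], dim=-1))).squeeze(-1)
        alpha = torch.softmax(scores, dim=1)               # per-word weights
        att_vec = alpha.unsqueeze(-1) * e                  # distribute emoji info
        h, _ = self.lstm(torch.cat([word_embs, att_vec], dim=-1))
        return torch.sigmoid(self.out(h[:, -1])).squeeze(-1)

model = MultiLevelAttentionClassifier()
pred = model(torch.randn(4, 12, 300), torch.randn(4, 300))
loss = F.binary_cross_entropy(pred, torch.randint(0, 2, (4,)).float())
```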
DISPLAYFORM0 Data Collection and Annotation Data Collection We construct our own Twitter sentiment dataset by crawling tweets through the REST API which consists of 350,000 users and is magnitude larger comparing to previous work. We collect up to 3,200 tweets from each user and follow the standard tweet preprocessing procedures to remove the tweets without emojis and tweets containing less than ten words, and contents including the urls, mentions, and emails. Data Annotation For acquiring the sentiment annotations, we first use Vader which is a rule-based sentiment analysis algorithm BIBREF17 for text tweets only to generate weak sentiment labels. The algorithm outputs sentiment scores ranging from -1 (negative) to 1 (positive) with neutral in the middle. We consider the sentiment analysis as a binary classification problem (positive sentiment and negative sentiment), we filter out samples with weak prediction scores within INLINEFORM0 and keep the tweets with strong sentiment signals. Emoji occurrences are calculated separately for positive tweets and negative tweets, and threshold is set to 2,000 to further filter out emojis which are less frequently used in at least one type of sentimental text. In the end, we have constructed a dataset with 1,492,065 tweets and 55 frequently used emojis in total. For the tweets with an absolute sentiment score over 0.70, we keep the auto-generated sentiment label as ground truth because the automatic annotation is reliable with high sentiment scores. On the other hand, we select a subset of the tweets with absolute sentiment scores between INLINEFORM1 for manual labeling by randomly sampling, following the distribution of emoji occurrences where each tweet is labeled by two graduate students. Tweets are discarded if the two annotations disagree with each other or they are labeled as neutral. In the end, we have obtained 4,183 manually labeled tweets among which 60% are used for fine-tuning and 40% are used for testing purposes. The remainder of the tweets with automatic annotations are divided into three sets: 60% are used for pre-training the bi-sense and conventional emoji embedding, 10% for validation and 30% are for testing. We do not include a “neutral” class because it is difficult to obtain valid neutral samples. For auto-generated labels, the neutrals are the samples with low absolute confidence scores and their sentiments are more likely to be model failures other than “true neutrals”. Moreover, based on the human annotations, most of the tweets with emojis convey non-neutral sentiment and only few neutral samples are observed during the manual labeling which are excluded from the manually labeled subset. In order to valid our motivation that emojis are also extensively used in tweets that contain contradictory information to the emoji sentiments, we calculate the emoji usage in Table TABREF22 according to the sentiment labels where Pos-Ratio means the percentage of each emoji occurs in the positive tweets over its total number of occurrences, AA and HA indicate automatic-annotation and human-annotation, respectively. We present the top-10 most frequently used emojis in our dataset and observe a slight difference in the Pos-Ratios between AA and HA dataset because of the randomness involved in the sampling process. Results from both of the datasets show a fair amount of emoji use in both positive and negative tweets. 
For example, it is interesting to notice that emoji () occurs more often in the positive tweets with the automatic annotations, while emojis with strong positive sentiment have also been used in negative tweets in about 5% of their occurrences, such as (, , and ). Given that the averaged positive ratio among all emojis in the whole dataset is about 74% and that most emojis have been used extensively in tweets containing both positive and negative sentiments, distinguishing the emoji occurrences in the two sentiments via bi-sense embedding is worth investigating. Additionally, we observe that the Pos-Ratios of the AA-Sentiment and HA-Sentiment sets differ slightly, which is due to two main reasons: 1) some tweets sampled to construct the HA-Sentiment set are discarded because the annotators disagree, and we only keep the samples that we are confident about; 2) tweets with absolute sentiment scores between (0.60,0.70) are selected for manual labeling as discussed in Section SECREF23 , which are lower than the scores of the tweets used to construct the AA-Sentiment set (0.70 and above). The lower sentiment scores indicate that Vader is less reliable on the samples of the HA-Sentiment dataset and that the sentiments of these tweets are more likely to be affected by emojis. Qualitative Analysis In order to obtain insights into why the more fine-grained bi-sense emoji embedding helps in understanding the complex sentiments behind tweets, we visualize the attention weights of ATT-E-LSTM and MATT-BiE-LSTM for comparison. The example tweets with the corresponding attention weights calculated by the word-emoji embedding and the senti-emoji embedding are shown in Figure FIGREF27 , where the contexts are presented in the captions. The emojis used are , , and , respectively. In Figure FIGREF27 (a), the ATT-E-LSTM model (baseline) assigns relatively more weight to the words “no” and “pressure”, while MATT-BiE-LSTM attends mostly to the words “happy” and “lovely”. The different attention distributions suggest that the proposed senti-emoji embedding is capable of recognizing words with strong sentiments that are closely related to the true sentiment even in the presence of words with conflicting sentiments, such as “pressure” and “happy”, whereas ATT-E-LSTM tends to pick up all sentimental words, which could cause confusion. The senti-emoji embedding is capable of extracting representations of complex semantics and sentiments which help guide the attention even in cases where the word sentiment and emoji sentiment are somewhat contradictory to each other. From Figure FIGREF27 (b) and (c) we can observe that ATT-E-LSTM assigns more weight to sentiment-irrelevant words such as “hoodies”, “wait” and “after” than MATT-BiE-LSTM does, indicating that the proposed model is more robust to irrelevant words and concentrates better on important words. Because of the senti-emoji embedding obtained through the bi-sense emoji embedding and the sentence-level LSTM encoding of the text input (described in Section SECREF13 ), we are able to construct a more robust embedding based on the semantic and sentiment information from the whole context, compared to the word-emoji embedding used in ATT-E-LSTM which takes only word-level information into account. Bi-sense Emoji Embedding Visualization To gain further insights into the bi-sense emoji embedding, we use t-SNE BIBREF47 to project the high-dimensional bi-sense embedding vectors into a two-dimensional space while preserving the relative distances between the embedding vectors.
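A possible way to reproduce this kind of projection with scikit-learn and matplotlib is sketched below; the token naming scheme for the positive- and negative-sense vectors is purely illustrative.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_bisense_tsne(sense_vectors, perplexity=15, seed=0):
    """sense_vectors: dict mapping sense tokens (e.g. 'emoji_1f602_pos' /
    'emoji_1f602_neg') to their embedding vectors; the naming is illustrative."""
    tokens = sorted(sense_vectors)
    X = np.stack([sense_vectors[t] for t in tokens])
    xy = TSNE(n_components=2, perplexity=perplexity,
              random_state=seed, init="pca").fit_transform(X)
    for (x, y), tok in zip(xy, tokens):
        color = "tab:green" if tok.endswith("_pos") else "tab:red"
        plt.scatter(x, y, c=color, s=18)
        plt.annotate(tok, (x, y), fontsize=6)
    plt.title("t-SNE projection of bi-sense emoji embeddings")
    plt.tight_layout()
    plt.show()
```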
In Figure FIGREF28 we visualize the bi-sense emoji embedding, the positive-sense embedding, the negative-sense embedding, and the subtraction between the positive- and negative-sense embeddings of each emoji, respectively. The subtraction between the two sense embeddings of an emoji indicates the semantic difference between its usages in positive and negative sentimental contexts, similar to the objective of word embeddings BIBREF28 . The positive-sense embeddings of emojis ( and ) and the negative-sense embeddings of emojis (, and ) lie far from the two main clusters, as observed in Figure FIGREF28 (a), suggesting that the semantics of these emojis are different from those of the other popular emojis. The positive-sense and negative-sense embeddings form well-separated clusters with no intersection. This observation supports our objective of applying bi-sense emoji embedding: there exist significant differences in the semantics of each emoji when it appears in positive and negative sentimental contexts, and it is therefore well-motivated to model the emoji usages separately according to the sentiment of the contexts in order to extract the more fine-grained bi-sense embedding. Additionally, we observe consistent patterns in Figure FIGREF28 (b), (c) and (d), where the sentiments conveyed by the emojis become an important factor. For example, emojis with positive sentiments such as (, and ) and emojis with negative sentiments such as (, and ) are embedded into one cluster in both the positive-sense and negative-sense spaces. The embedding subtractions in Figure FIGREF28 (d) show that the differences between emoji usages across sentiments are similar among emojis and preserve the cluster patterns observed in Figure FIGREF28 (b) and (c). Conclusions In this paper, we present a novel approach to the task of sentiment analysis and achieve state-of-the-art performance. Different from previous work, our method combines a more robust and fine-grained bi-sense emoji embedding that effectively represents complex semantic and sentiment information with attention-based LSTM networks that selectively attend on the correlated sense of the emoji embeddings, and seamlessly fuses the obtained senti-emoji embeddings with the word embeddings for a better understanding of the rich semantics and sentiments involved. In the future, we plan to further extend our attention-based LSTM with bi-sense embedding framework to tackle tasks involving multi-sense embeddings, such as the learning and application of word-sense embeddings. Acknowledgement We would like to thank the support of New York State through the Goergen Institute for Data Science, and NSF Award #1704309.
Attention-based LSTM with emojis
d3092cd32cd581a57fa4844f80fe18d6b920e903
d3092cd32cd581a57fa4844f80fe18d6b920e903_0
Q: What is the baseline for experiments? Text: Introduction The rapid growth of social media platforms such as Twitter provides rich multimedia data in large scales for various research opportunities, such as sentiment analysis which focuses on automatically sentiment (positive and negative) prediction on given contents. Sentiment analysis has been widely used in real world applications by analyzing the online user-generated data, such as election prediction, opinion mining and business-related activity analysis. Emojis, which consist of various symbols ranging from cartoon facial expressions to figures such as flags and sports, are widely used in daily communications to express people's feelings . Since their first release in 2010, emojis have taken the place of emoticons (such as “:- INLINEFORM0 ” and “:-P”) BIBREF0 to create a new form of language for social media users BIBREF1 . According to recent science reports, there are 2,823 emojis in unicode standard in Emoji 11.0 , with over 50% of the Instagram posts containing one or more emojis BIBREF2 and 92% of the online population using emojis BIBREF3 . The extensive use of emojis has drawn a growing attention from researchers BIBREF4 , BIBREF5 because the emojis convey fruitful semantical and sentimental information to visually complement the textual information which is significantly useful in understanding the embedded emotional signals in texts BIBREF6 . For example, emoji embeddings have been proposed to understand the semantics behind the emojis BIBREF7 , BIBREF8 , and the embedding vectors can be used to visualize and predict emoji usages given their corresponding contexts. Previous work also shows that, it is useful to pre-train a deep neural network on an emoji prediction task with pre-trained emoji embeddings to learn the emotional signals of emojis for other tasks including sentiment, emotion and sarcasm prediction BIBREF9 . However, the previous literatures lack in considerations of the linguistic complexities and diversity of emoji. Therefore, previous emoji embedding methods fail to handle the situation when the semantics or sentiments of the learned emoji embeddings contradict the information from the corresponding contexts BIBREF5 , or when the emojis convey multiple senses of semantics and sentiments such as ( and ). In practice, emojis can either summarize and emphasis the original tune of their contexts, or express more complex semantics such as irony and sarcasm by being combined with contexts of contradictory semantics or sentiments. For the examples shown in Table TABREF3 , the emoji () is of consistent sentiment with text to emphasis the sentiment, but is of the opposite sentiment (positive) to the text sentiment (negative) example 3 and 4 to deliver a sense of sarcasm. Conventional emoji analysis can only extract single embedding of each emoji, and such embeddings will confuse the following sentiment analysis model by inconsistent sentiment signals from the input texts and emojis. Moreover, we consider the emoji effect modeling different from the conventional multimodal sentiment analysis which usually includes images and texts in that, image sentiment and text sentiment are usually assumed to be consistent BIBREF10 while it carries no such assumption for texts and emojis. To tackle such limitations, we propose a novel scheme that consists of an attention-based recurrent neural network (RNN) with robust bi-sense emoji embeddings. 
Inspired by the word sense embedding task in natural language processing (NLP) BIBREF11 , BIBREF12 , BIBREF13 where each sense of an ambiguous word responds to one unique embedding vector, the proposed bi-sense embedding is a more robust and fine-grained representation of the complicated semantics for emojis where each emoji is embedded into two distinct vectors, namely positive-sense and negative-sense vector, respectively. For our specific task which is Twitter sentiment analysis BIBREF14 , BIBREF15 , we initialize the bi-sense embedding vectors together with word embedding vectors using word embedding algorithm fasttext BIBREF16 by extracting two distinct embeddings for each emoji according to the sentiment of its corresponding textual contexts, namely bi-sense embedding. A long short-term memory (LSTM) based recurrent neural network is then used for predicting sentiments which is integrated with the pre-trained emoji embedding features by a context-guide and self-selected attention mechanism. Because most of the previous Twitter sentiment datasets exclude emojis and there exists little resource that contains sufficient emoji-tweets with sentiment labels, we construct our own emoji-tweets dataset by automatically generating weak labels using a rule-based sentiment analysis algorithm Vader BIBREF17 for pre-traning the networks, and manually labeling a subset of tweets for fine tuning and testing purposes. The experimental results demonstrate that the bi-sense emoji embedding is capable of extracting more distinguished information from emojis and outperforms the state-of-the-art sentiment analysis models with the proposed attention-based LSTM networks. We further visualize the bi-sense emoji embedding to obtain the sentiments and semantics learned by the proposed approach. The main contributions of this paper are summarized as follows. Sentiment Analysis Sentiment analysis is to extract and quantify subjective information including the status of attitudes, emotions and opinions from a variety of contents such as texts, images and audios BIBREF18 . Sentiment analysis has been drawing great attentions because of its wide applications in business and government intelligence, political science, sociology and psychology BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . From a technical perspective, textual sentiment analysis is first explored by researchers as an NLP task. Methods range from lexical-based approaches using features including keywords BIBREF23 , BIBREF24 where each word corresponds to a sentiment vector with entries representing the possibility of the word and each sentiment and phase-level features (n-grams and unigrams) BIBREF25 , BIBREF26 , to deep neural network based embedding approaches including skip-grams, continuous bag-of-words (CBoW) and skip-thoughts BIBREF27 , BIBREF28 , BIBREF16 , BIBREF29 . It was until recent years when researchers start focusing on image and multimodal sentiments BIBREF30 , BIBREF31 and analyzing how to take advantage of the cross-modality resources BIBREF10 , BIBREF32 . For multimodal sentiment analysis, an underlying assumption is that both modalities express similar sentiment and such similarity is enforced in order to train a robust sentiment inference model BIBREF10 . However, the same assumption does not stand in modeling textual tweets and emojis because the complexities of natural language exist extensively, such as the use of irony, jokes, sarcasm, etc. BIBREF9 . 
Models We set up the baselines and proposed models as follows: LSTM with text embedding: CNNs and LSTMs are widely used to encode textual contents for sentiment analysis in BIBREF45 , BIBREF46 and many online tutorials. Here we select the standard LSTM with pre-trained word embedding as input, and add one fully-connected layer with sigmoid activation top of the LSTM encoder (same as all other models), denoted as T-LSTM. LSTM with emoji embedding: We consider the emoji as one special word and input both pre-trained text and emoji embeddings into the same LSTM network, namely E-LSTM. Similarly, we concatenate the pre-trained bi-sense emoji embedding as one special word to feed into the LSTM network. This model is called BiE-LSTM. Attention-based LSTM with emojis:We also use the word-emoji embedding to calculate the emoji-word attention following Equation EQREF20 and EQREF21 , and the only difference is that we replace the attention-derived senti-emoji embedding with the pre-trained word-emoji embedding by fasttext, denoted as ATT-E-LSTM. LSTM with bi-sense emoji embedding (proposed): As we have introduced in Section SECREF13 , we propose two attention-based LSTM networks based on bi-sense emoji embedding, denoted as MATT-BiE-LSTM and WATT-BiE-LSTM. Evaluation We evaluate the baseline and proposed models on sentiment analysis by F1 scores and accuracies based on the auto-annotated testing set (AA-Sentiment) and human-annotated testing set (HA-Sentiment), as shown in Table TABREF25 . We only test the models after fine-tuning with a subset of the samples with human annotations because training exclusively on the samples with auto-generated weak labels results in relatively poor performances when tested with human annotated data indicating the models after fine-tuning are more robust. The F1 scores and accuracies are overall higher with the AA-Sentiment than the results with HA-sentiment, indicating that the HA-Sentiment is a more challenging task and the sentiments involved are more difficult to identify supported by their relatively lower sentiment scores returned from Vader. We still, however, observe competitive results from HA-Sentiment showing that the models are well-trained and robust to noisy labels with the help of fine-tuning with human annotated data. The T-LSTM baseline achieves decent performance in both experiments with accuracies of 86.6% and 70.7% showing that LSTM is an effective encoder for sentiment analysis as suggested by the references. The models with proposed bi-sense emoji embedding obtain accuracies over 82.4% and we observe improvements on the performance with the attention-based LSTM from our proposed model MATT-BiE-LSTM and WATT-BiE-LSTM, which is consistent with that ATT-E-LSTM ([email protected]%, [email protected]% on HA-Sentiment) outperforms significantly T-LSTM and E-LSTM. Emoji information is useful in sentiment analysis. Most models outperforms the baseline T-LSTM in both dataset suggesting that the emoji information is useful for sentiment analysis as a complement to the textual contents, even with the naive use of emoji embeddings (E-LSTM) when tested with HA-Sentiment. We observe that E-LSTM obtains similar performance to T-LSTM with AA-Sentiment but a significant gain over the T-LSTM when tested with HA-Sentiment indicating that sentiment information is helpful and necessary when the hidden sentiment is relatively subtle and the task is more challenging. Bi-sense emoji embedding helps. 
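For reference, a minimal PyTorch sketch of the T-LSTM baseline described above follows; the hidden size and embedding dimension are assumptions, and the E-LSTM/BiE-LSTM variants differ only in how the vocabulary and pre-trained embedding matrix are built.

```python
import torch
import torch.nn as nn

class TLSTM(nn.Module):
    """Text-only baseline: embedding -> LSTM -> fully-connected layer with sigmoid."""

    def __init__(self, vocab_size, emb_dim=300, hidden=256, pretrained=None):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        if pretrained is not None:                        # (vocab_size, emb_dim) fasttext matrix
            self.emb.weight.data.copy_(torch.as_tensor(pretrained))
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, token_ids):                         # token_ids: (batch, seq_len)
        x = self.emb(token_ids)
        _, (h, _) = self.lstm(x)
        return torch.sigmoid(self.fc(h[-1])).squeeze(-1)  # positive-sentiment probability

# E-LSTM and BiE-LSTM reuse the same network; only the vocabulary changes:
# one extra token per emoji for E-LSTM, two sense tokens per emoji for BiE-LSTM.
model = TLSTM(vocab_size=50_000)
loss_fn = nn.BCELoss()
```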
All the models using bi-sense emoji embedding perform significantly better than the baseline models without emoji feature or with word-emoji embedding. BiE-LSTM outperforms T-LSTM and E-LSTM significantly with the same utilization of emoji embedding indicates that the proposed bi-sense emoji embedding is capable of extracting more informative and distinguishable vectors over the use of conventional word embedding algorithms, which is consistent based on the comparisons between the proposed models (MATT-BiE-LSTM and WATT-BiE-LSTM) with bi-sense emoji embedding and the baseline model ATT-E-LSTM with word-emoji embedding and attention. Attention mechanism aligns and performs well with bi-sense embedding. MATT-BiE-LSTM and WATT-BiE-LSTM obtain similar performances when tested on both Vader and human annotated samples, though their ways of computing the attention (weights and vectors) are different that WATT computes attention weights and the senti-emoji embeddings guided by each word, and MATT obtains the senti-emoji embedding based on the LSTM encoder on the whole contexts and computes the attention weights of the senti-emoji embedding across all words. Both models outperforms the state-of-the-art baseline models including ATT-E-LSTM. The proposed attention-based LSTM can be further extended to handle tasks involving multi-sense embedding as inputs, such as the word-sense embedding in NLP, by using context-guide attention to self-select how much to attend on each sense of the embeddings each of which correspond to a distinct sense of semantics or sentiments. In this way we are able to take advantage of the more robust and fine-grained embeddings. Emojis and Sentiment Analysis With the overwhelming development of Internet of Things (IOT), the growing accessibility and popularity of subjective contents have provided new opportunities and challenges for sentiment analysis BIBREF33 . For example, social medias such as Twitter and Instagram have been explored because the massive user-generated contents with rich user sentiments BIBREF25 , BIBREF34 , BIBREF35 where emojis (and emoticons) are extensively used. Non-verbal cues of sentiment, such as emoticon which is considered as the previous generation of emoji, has been studied for their sentiment effect before emojis take over BIBREF36 , BIBREF37 , BIBREF38 . For instance, BIBREF36 , BIBREF38 pre-define sentiment labels to emoticons and construct a emoticon-sentiment dictionary. BIBREF37 applies emoticons for smoothing noisy sentiment labels. Similar work from BIBREF39 first considers emoji as a component in extracting the lexical feature for further sentiment analysis. BIBREF40 constructs an emoji sentiment ranking based on the occurrences of emojis and the human-annotated sentiments of the corresponding tweets where each emoji is assigned with a sentiment score from negative to positive , similar to the SentiWordNet BIBREF41 . However, the relatively intuitive use of emojis by lexical- and dictionary-based approaches lacks insightful understanding of the complexed semantics of emojis. Therefore, inspired by the success of word semantic embedding algorithms such as BIBREF28 , BIBREF16 , BIBREF7 obtains semantic embeddings of each emoji by averaging the words from its descriptions and shows it is effective to take advantage of the emoji embedding for the task of Twitter sentiment analysis. 
BIBREF8 proposes a convoluntional neural network to predict the emoji occurrence and jointly learns the emoji embedding via a matching layer based on cosine similarities. Despite the growing popularity of Twitter sentiment analysis, there is a limited number of emoji datasets with sentiment labels available because previous studies usually filter out urls, emojis and sometimes emoticons. However, BIBREF9 shows that it is effective to extract sentiment information from emojis for emotion classification and sarcasm detection tasks in the absence of learning vector-based emoji representations by pre-training a deep neural network to predict the emoji occurrence. Methodology We propose two mechanisms, namely Word-guide Attention-based LSTM and Multi-level Attention-based LSTM, to take advantage of bi-sense emoji embedding for the sentiment analysis task. The frameworks of these two methods are shown in Figure FIGREF10 and Figure FIGREF19 , respectively. Our workflow includes the following steps: initialization of bi-sense emoji embedding, generating senti-emoji embedding based on self-selected attention, and sentiment classification via the proposed attention-based LSTM networks. Bi-sense Embedding Recent research shows great success in word embedding task such as word2vec and fasttext BIBREF27 , BIBREF16 . We use fasttext to initialize emoji embeddings by considering each emoji as a special word, together with word embeddings. The catch is, different from conventional approaches where each emoji responds to one embedding vector (as we call word-emoji embedding), we embed each emoji into two distinct vectors (bi-sense emoji embedding): we first assign two distinct tokens to each emoji, of which one is for the particular emoji used in positive sentimental contexts and the other one is for this emoji used in negative sentimental contexts (text sentiment initialized by Vader BIBREF17 , details will be discussed in Section SECREF23 ), respectively; the same fasttext training process is used to embed each token into a distinct vector, and we thus obtain the positive-sense and negative-sense embeddings for each emoji. The word2vec is based on the skip-gram model whose objective is to maximize the log likelihood calculated by summing the probabilities of current word occurrences given a set of the surrounding words. The fasttext model is different by formatting the problem as a binary classification task to predict the occurrence of each context word, with negative samples being randomly selected from the absent context words. Given an input word sequence INLINEFORM0 , and the context word set INLINEFORM1 and the set of negative word samples INLINEFORM2 of the current word INLINEFORM3 , the objective function is obtained based on binary logistic loss as in Equation EQREF12 : DISPLAYFORM0 where INLINEFORM0 denotes the logistic loss of the score function INLINEFORM1 which is computed by summing up the scalar products between the n-gram embeddings of the current word and the context word embedding which is different from word2vec where the score is the scalar product between the current word and the context word embedding. We select fasttext over word2vec mainly because its computational efficiency. In general, the two models yield competitive performances and the comparison between word embeddings is beyond our discussion. Therefore we only show the results derived by the fasttext initialization within the scope of this work. 
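The bi-sense initialization described above can be sketched as follows, with gensim's FastText standing in for the original fasttext tool; the emoji-detection regex, the token naming, and the tiny toy corpus are illustrative assumptions, and the real pipeline would run over the full tweet corpus with a larger min_count.

```python
import re
from gensim.models import FastText
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

EMOJI_RE = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")  # rough single-codepoint emoji range
analyzer = SentimentIntensityAnalyzer()

def to_bisense_tokens(tweet_text):
    """Replace each emoji with a sense-specific token chosen by the text sentiment."""
    text_only = EMOJI_RE.sub(" ", tweet_text)
    sense = "pos" if analyzer.polarity_scores(text_only)["compound"] >= 0 else "neg"
    tokens = []
    for tok in tweet_text.split():
        if EMOJI_RE.fullmatch(tok):
            tokens.append(f"emoji_{ord(tok):x}_{sense}")   # e.g. emoji_1f602_pos
        else:
            tokens.append(tok.lower())
    return tokens

raw_tweets = [                                   # toy stand-in for the crawled corpus
    "had the best day ever 😂",
    "so proud of you 😂",
    "everything is going wrong today 😂",
]
corpus = [to_bisense_tokens(t) for t in raw_tweets]
# min_count=1 only fits this toy corpus; the full corpus would use a larger cutoff.
model = FastText(sentences=corpus, vector_size=50, window=5,
                 min_count=1, sg=1, epochs=10)
pos_vec = model.wv["emoji_1f602_pos"]            # positive-sense embedding
neg_vec = model.wv["emoji_1f602_neg"]            # negative-sense embedding
```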
Word-guide Attention-based LSTM Long short-term memory (LSTM) units have been extensively used to encode textual contents. The basic encoder model consists of a text embedding layer, LSTMs layer, and fully-connected layers for further tasks such as text classifications based on the encoded feature. The operations in an LSTM unit for time step INLINEFORM0 is formulated in Equation EQREF14 : DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 represent the current and previous hidden states, INLINEFORM2 denotes the current LSTM input and here we use the embedding INLINEFORM3 of the current word INLINEFORM4 , and INLINEFORM5 and INLINEFORM6 denote the weight matrices BIBREF42 . In order to take advantage of the bi-sense emoji embedding, we modify the input layer into the LSTM units. We first obtain the senti-emoji embedding as an weighted average of the bi-sense emoji embedding based on the self-selected attention mechanism. Let INLINEFORM7 represent the INLINEFORM8 -th sense embedding of emoji INLINEFORM9 ( INLINEFORM10 in our bi-sense embedding), and INLINEFORM11 denote the attention function conditioned on the current word embedding, the attention weight INLINEFORM12 and senti-emoji embedding vector INLINEFORM13 is formulated as follows: DISPLAYFORM0 We choose a fully-connected layer with ReLU activation as the attention function, and the attention vector INLINEFORM0 is concatenated with the word embedding as the new input of the LSTM. Thus the input vector INLINEFORM1 in Equation EQREF14 becomes INLINEFORM2 . The output of the final LSTM unit is then fed into a fully-connected layer with INLINEFORM3 activation to output the tweet sentiment and binary cross-entropy loss is used as the objection function (Equation EQREF16 ) where INLINEFORM4 is the total number of samples. The motivation behind this model is that each context word guides the attention weights in order to enforce the model to self-select which embedding sense it should attend on. Therefore we denote this model as the Word-guide Attention-based LSTM with Bi-sense emoji embedding (WATT-BiE-LSTM). DISPLAYFORM0 Multi-level Attention-based LSTM There is another way of formulating the attention mechanism where the attention weights indicate how the image information (which is emoji in our case) is distributed through the context words as proposed in BIBREF43 , BIBREF44 . The modified senti-emoji embedding vector INLINEFORM0 is thus at the tweet-level instead of the word-level in Equation EQREF15 by replacing the INLINEFORM1 with the final state vector INLINEFORM2 outputted from the last LSTM unit, as shown in Equation EQREF18 : DISPLAYFORM0 The derived senti-emoji embedding INLINEFORM0 is then used to calculate an additional layer of attention following BIBREF43 , BIBREF44 . Given the input tweet sequence INLINEFORM1 , the attention weight INLINEFORM2 conditioned on the senti-emoji embedding is formulated as follows: DISPLAYFORM0 Therefore, we construct the new input INLINEFORM0 to each LSTM unit by concatenating the original word embedding and the attention vector in Equation EQREF21 to distribute the senti-emoji information to each step. This model is called Multi-level Attention-based LSTM with Bi-sense Emoji Embedding (MATT-BiE-LSTM). We choose the same binary cross-entropy as the loss function with the same network configuration with WATT-BiE-LSTM. 
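A corresponding sketch of the multi-level attention variant (MATT-BiE-LSTM) is given below; as with the earlier sketch, the attention networks and dimensions are assumptions that follow the two-stage description above (sense-level attention conditioned on the final LSTM state, then word-level attention conditioned on the senti-emoji vector).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MATTBiELSTM(nn.Module):
    """The tweet-level senti-emoji embedding is built from the final hidden state of
    a text LSTM, then used to weight how the emoji information is distributed over
    the context words before the classification LSTM."""

    def __init__(self, word_dim=300, emoji_dim=300, hidden=256):
        super().__init__()
        self.text_lstm = nn.LSTM(word_dim, hidden, batch_first=True)
        self.sense_att = nn.Sequential(nn.Linear(hidden + emoji_dim, 64),
                                       nn.ReLU(), nn.Linear(64, 1))
        self.word_att = nn.Sequential(nn.Linear(word_dim + emoji_dim, 64),
                                      nn.ReLU(), nn.Linear(64, 1))
        self.lstm = nn.LSTM(word_dim + emoji_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, word_emb, sense_embs):
        # word_emb: (b, t, word_dim); sense_embs: (b, 2, emoji_dim)
        _, (h, _) = self.text_lstm(word_emb)
        h = h[-1]                                                     # (b, hidden)
        # 1) attention over the two senses, conditioned on the whole tweet
        pair = torch.cat([h.unsqueeze(1).expand(-1, 2, -1), sense_embs], dim=-1)
        alpha = F.softmax(self.sense_att(pair).squeeze(-1), dim=-1)   # (b, 2)
        senti = (alpha.unsqueeze(-1) * sense_embs).sum(dim=1)         # (b, emoji_dim)
        # 2) attention weights over words, conditioned on the senti-emoji vector
        t = word_emb.size(1)
        pair_w = torch.cat([word_emb, senti.unsqueeze(1).expand(-1, t, -1)], dim=-1)
        beta = F.softmax(self.word_att(pair_w).squeeze(-1), dim=-1)   # (b, t)
        x = torch.cat([word_emb, beta.unsqueeze(-1) * senti.unsqueeze(1)], dim=-1)
        _, (h2, _) = self.lstm(x)
        return torch.sigmoid(self.out(h2[-1])).squeeze(-1)
```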
DISPLAYFORM0 Data Collection and Annotation Data Collection We construct our own Twitter sentiment dataset by crawling tweets through the REST API which consists of 350,000 users and is magnitude larger comparing to previous work. We collect up to 3,200 tweets from each user and follow the standard tweet preprocessing procedures to remove the tweets without emojis and tweets containing less than ten words, and contents including the urls, mentions, and emails. Data Annotation For acquiring the sentiment annotations, we first use Vader which is a rule-based sentiment analysis algorithm BIBREF17 for text tweets only to generate weak sentiment labels. The algorithm outputs sentiment scores ranging from -1 (negative) to 1 (positive) with neutral in the middle. We consider the sentiment analysis as a binary classification problem (positive sentiment and negative sentiment), we filter out samples with weak prediction scores within INLINEFORM0 and keep the tweets with strong sentiment signals. Emoji occurrences are calculated separately for positive tweets and negative tweets, and threshold is set to 2,000 to further filter out emojis which are less frequently used in at least one type of sentimental text. In the end, we have constructed a dataset with 1,492,065 tweets and 55 frequently used emojis in total. For the tweets with an absolute sentiment score over 0.70, we keep the auto-generated sentiment label as ground truth because the automatic annotation is reliable with high sentiment scores. On the other hand, we select a subset of the tweets with absolute sentiment scores between INLINEFORM1 for manual labeling by randomly sampling, following the distribution of emoji occurrences where each tweet is labeled by two graduate students. Tweets are discarded if the two annotations disagree with each other or they are labeled as neutral. In the end, we have obtained 4,183 manually labeled tweets among which 60% are used for fine-tuning and 40% are used for testing purposes. The remainder of the tweets with automatic annotations are divided into three sets: 60% are used for pre-training the bi-sense and conventional emoji embedding, 10% for validation and 30% are for testing. We do not include a “neutral” class because it is difficult to obtain valid neutral samples. For auto-generated labels, the neutrals are the samples with low absolute confidence scores and their sentiments are more likely to be model failures other than “true neutrals”. Moreover, based on the human annotations, most of the tweets with emojis convey non-neutral sentiment and only few neutral samples are observed during the manual labeling which are excluded from the manually labeled subset. In order to valid our motivation that emojis are also extensively used in tweets that contain contradictory information to the emoji sentiments, we calculate the emoji usage in Table TABREF22 according to the sentiment labels where Pos-Ratio means the percentage of each emoji occurs in the positive tweets over its total number of occurrences, AA and HA indicate automatic-annotation and human-annotation, respectively. We present the top-10 most frequently used emojis in our dataset and observe a slight difference in the Pos-Ratios between AA and HA dataset because of the randomness involved in the sampling process. Results from both of the datasets show a fair amount of emoji use in both positive and negative tweets. 
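The Pos-Ratio statistics in Table TABREF22 could be computed along the following lines; reading the 2,000 threshold as a per-class minimum count is an assumption based on the wording above.

```python
from collections import Counter

def emoji_usage_table(labeled_tweets, emoji_set, min_count=2000):
    """labeled_tweets: iterable of (tokens, label) with label 1=positive, 0=negative.
    Returns {emoji: (total_count, pos_ratio)} for emojis frequent enough in both classes."""
    pos, neg = Counter(), Counter()
    for tokens, label in labeled_tweets:
        counter = pos if label == 1 else neg
        for tok in tokens:
            if tok in emoji_set:
                counter[tok] += 1
    table = {}
    for e in emoji_set:
        if pos[e] >= min_count and neg[e] >= min_count:   # assumed reading of the threshold
            total = pos[e] + neg[e]
            table[e] = (total, pos[e] / total)
    return dict(sorted(table.items(), key=lambda kv: kv[1][0], reverse=True))
```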
For example, it is interesting to notice that emoji () occurs more in the positive tweets in with the automatic annotations, while emojis with strong positive sentiment have also been used in negative tweets with about 5% occurrences, such as (, , and ). Given the averaged positive ratio among all emojis in the whole dataset is about 74% and that most emojis have been extensively used in tweets containing both positive and negative sentiments, it suggests that distinguishing the emoji occurrences in both sentiments via bi-sense embedding is worth investigating. Additionally, we observe the Pos-Ratios of the AA-sentiment and HA-sentiment have little differences which are due to two main reasons: 1) Some tweets we sampled to construct the HA-sentiment are discarded because the annotators have disagreements and we only keep the samples that we are confident about; 2) Tweets with absolute sentiment scores between (0.60,0.70) are selected for manual labeling as discussed in Section SECREF23 , which are lower than the tweets used to construct the AA-sentiment (0.7 and above). The lower sentiment scores indicate that Vader is less reliable on the samples of HA-sentiment dataset and the sentiments of these tweets are more likely to be affected by emojis. Qualitative Analysis In order to obtain insights about why the more fine-grained bi-sense emoji embedding helps in understanding the complexed sentiments behind tweets, we visualize the attention weights for ATT-E-LSTM and MATT-BiE-LSTM for comparison. The example tweets with corresponding attention weights calculated by word-emoji embedding and senti-emoji embedding are shown in Figure FIGREF27 , where the contexts are presented in the captions. The emojis used are , , and , respectively. In Figure FIGREF27 (a), the ATT-E-LSTM model (baseline) assigns relatively more weights on the word “no” and “pressure”, while MATT-BiE-LSTM attends mostly on the word “happy” and “lovely”. The different attention distributions suggest that the proposed senti-emoji embedding is capable of recognizing words with strong sentiments that are closely related to the true sentiment even with the presence of words with conflicting sentiments, such as “pressure” and “happy”. while ATT-E-LSTM tends to pick up all sentimental words which could raise confusions. The senti-emoji embedding is capable of extracting representations of complexed semantics and sentiments which help guide the attentions even in cases when the word sentiment and emoji sentiment are somewhat contradictory to each other. From Figure FIGREF27 (b) and (c) we can observe that the ATT-E-LSTM assigns more weights on the sentiment-irrelevant words than the MATT-BiE-LSTM such as “hoodies”, “wait” and “after”, indicating that the proposed model is more robust to irrelevant words and concentrates better on important words. Because of the senti-emoji embedding obtained through bi-sense emoji embedding and the sentence-level LSTM encoding on the text input (described in Section SECREF13 ), we are able to construct a more robust embedding based on the semantic and sentiment information from the whole context compared to the word-emoji embedding used in ATT-E-LSTM which takes only word-level information into account. Bi-sense Emoji Embedding Visualization To gain further insights on the bi-sense emoji embedding, we use t-SNE BIBREF47 to project high-dimensional bi-sense embedding vectors into a two-dimensional space and preserving the relative distances between the embedding vectors at the same time. 
In Figure FIGREF28 we visualize the bi-sense emoji embedding, positive-sense embedding, negative-sense embedding and the subtraction between positive and negative sense embeddings of each emoji, respectively. The subtraction of an emoji between its two sense embeddings indicates the semantic differences between emoji usages in positive and negative sentimental contexts, similarly to the objective of word embeddings BIBREF28 . The positive-sense of emoji ( and ), and the negative-sense of emoji (, and ) are embedded far from the two main clusters as observed in Figure FIGREF28 (a), suggesting that the semantics of these emojis are different from the other popular emojis. The positive-sense embedding and negative-sense embeddings are clustered well with no intersection with each other. Such observation supports our objective of applying bi-sense emoji embedding because there exist such significant differences in the semantics of each emoji when appears in positive and negative sentimental contexts, and it is well-motivated to consider the emoji usages individually according to the sentiment of the contexts to extract the more fine-grained bi-sense embedding. Additionally, we observe consistent patterns in the Figure FIGREF28 (b), (c) and (d) where the sentiments conveyed in the emojis become an important factor. For example, emojis with positive sentiments such as (, and ), and emojis with negative sentiment such as (, and ) are embedded into one clusters in both positive-sense and negative-sense space. The embedding subtractions of emojis in Figure FIGREF28 (d) shows the different usages of emojis across sentiments are similar between emojis and preserve the cluster patterns observed in Figure FIGREF28 (b) and (c). Conclusions In this paper, we present a novel approach to the task of sentiment analysis and achieve the state-of-the-art performance. Different from the previous work, our method combines a more robust and fine-grained bi-sense emoji embedding that effectively represents complex semantic and sentiment information, with attention-based LSTM networks that selectively attend on the correlated sense of the emoji embeddings, and seamlessly fuse the obtained senti-emoji embeddings with the word embeddings for a better understanding of the rich semantics and sentiments involved. In the future, we plan to further extend our attention-based LSTM with bi-embedding work frame to tackle tasks involving multi-sense embedding such as the learning and applications of word-sense embedding. Acknowledgement We would like to thank the support of New York State through the Goergen Institute for Data Science, and NSF Award #1704309.
LSTM with text embedding, LSTM with emoji embedding, Attention-based LSTM with emojis
0b39c20db6e60ce07bf5465bd3c08fedc0587780
0b39c20db6e60ce07bf5465bd3c08fedc0587780_0
Q: What is the motivation for training bi-sense embeddings? Text: Introduction The rapid growth of social media platforms such as Twitter provides rich multimedia data in large scales for various research opportunities, such as sentiment analysis which focuses on automatically sentiment (positive and negative) prediction on given contents. Sentiment analysis has been widely used in real world applications by analyzing the online user-generated data, such as election prediction, opinion mining and business-related activity analysis. Emojis, which consist of various symbols ranging from cartoon facial expressions to figures such as flags and sports, are widely used in daily communications to express people's feelings . Since their first release in 2010, emojis have taken the place of emoticons (such as “:- INLINEFORM0 ” and “:-P”) BIBREF0 to create a new form of language for social media users BIBREF1 . According to recent science reports, there are 2,823 emojis in unicode standard in Emoji 11.0 , with over 50% of the Instagram posts containing one or more emojis BIBREF2 and 92% of the online population using emojis BIBREF3 . The extensive use of emojis has drawn a growing attention from researchers BIBREF4 , BIBREF5 because the emojis convey fruitful semantical and sentimental information to visually complement the textual information which is significantly useful in understanding the embedded emotional signals in texts BIBREF6 . For example, emoji embeddings have been proposed to understand the semantics behind the emojis BIBREF7 , BIBREF8 , and the embedding vectors can be used to visualize and predict emoji usages given their corresponding contexts. Previous work also shows that, it is useful to pre-train a deep neural network on an emoji prediction task with pre-trained emoji embeddings to learn the emotional signals of emojis for other tasks including sentiment, emotion and sarcasm prediction BIBREF9 . However, the previous literatures lack in considerations of the linguistic complexities and diversity of emoji. Therefore, previous emoji embedding methods fail to handle the situation when the semantics or sentiments of the learned emoji embeddings contradict the information from the corresponding contexts BIBREF5 , or when the emojis convey multiple senses of semantics and sentiments such as ( and ). In practice, emojis can either summarize and emphasis the original tune of their contexts, or express more complex semantics such as irony and sarcasm by being combined with contexts of contradictory semantics or sentiments. For the examples shown in Table TABREF3 , the emoji () is of consistent sentiment with text to emphasis the sentiment, but is of the opposite sentiment (positive) to the text sentiment (negative) example 3 and 4 to deliver a sense of sarcasm. Conventional emoji analysis can only extract single embedding of each emoji, and such embeddings will confuse the following sentiment analysis model by inconsistent sentiment signals from the input texts and emojis. Moreover, we consider the emoji effect modeling different from the conventional multimodal sentiment analysis which usually includes images and texts in that, image sentiment and text sentiment are usually assumed to be consistent BIBREF10 while it carries no such assumption for texts and emojis. To tackle such limitations, we propose a novel scheme that consists of an attention-based recurrent neural network (RNN) with robust bi-sense emoji embeddings. 
Inspired by the word sense embedding task in natural language processing (NLP) BIBREF11 , BIBREF12 , BIBREF13 where each sense of an ambiguous word responds to one unique embedding vector, the proposed bi-sense embedding is a more robust and fine-grained representation of the complicated semantics for emojis where each emoji is embedded into two distinct vectors, namely positive-sense and negative-sense vector, respectively. For our specific task which is Twitter sentiment analysis BIBREF14 , BIBREF15 , we initialize the bi-sense embedding vectors together with word embedding vectors using word embedding algorithm fasttext BIBREF16 by extracting two distinct embeddings for each emoji according to the sentiment of its corresponding textual contexts, namely bi-sense embedding. A long short-term memory (LSTM) based recurrent neural network is then used for predicting sentiments which is integrated with the pre-trained emoji embedding features by a context-guide and self-selected attention mechanism. Because most of the previous Twitter sentiment datasets exclude emojis and there exists little resource that contains sufficient emoji-tweets with sentiment labels, we construct our own emoji-tweets dataset by automatically generating weak labels using a rule-based sentiment analysis algorithm Vader BIBREF17 for pre-traning the networks, and manually labeling a subset of tweets for fine tuning and testing purposes. The experimental results demonstrate that the bi-sense emoji embedding is capable of extracting more distinguished information from emojis and outperforms the state-of-the-art sentiment analysis models with the proposed attention-based LSTM networks. We further visualize the bi-sense emoji embedding to obtain the sentiments and semantics learned by the proposed approach. The main contributions of this paper are summarized as follows. Sentiment Analysis Sentiment analysis is to extract and quantify subjective information including the status of attitudes, emotions and opinions from a variety of contents such as texts, images and audios BIBREF18 . Sentiment analysis has been drawing great attentions because of its wide applications in business and government intelligence, political science, sociology and psychology BIBREF19 , BIBREF20 , BIBREF21 , BIBREF22 . From a technical perspective, textual sentiment analysis is first explored by researchers as an NLP task. Methods range from lexical-based approaches using features including keywords BIBREF23 , BIBREF24 where each word corresponds to a sentiment vector with entries representing the possibility of the word and each sentiment and phase-level features (n-grams and unigrams) BIBREF25 , BIBREF26 , to deep neural network based embedding approaches including skip-grams, continuous bag-of-words (CBoW) and skip-thoughts BIBREF27 , BIBREF28 , BIBREF16 , BIBREF29 . It was until recent years when researchers start focusing on image and multimodal sentiments BIBREF30 , BIBREF31 and analyzing how to take advantage of the cross-modality resources BIBREF10 , BIBREF32 . For multimodal sentiment analysis, an underlying assumption is that both modalities express similar sentiment and such similarity is enforced in order to train a robust sentiment inference model BIBREF10 . However, the same assumption does not stand in modeling textual tweets and emojis because the complexities of natural language exist extensively, such as the use of irony, jokes, sarcasm, etc. BIBREF9 . 
Models We set up the baselines and proposed models as follows: LSTM with text embedding: CNNs and LSTMs are widely used to encode textual contents for sentiment analysis in BIBREF45 , BIBREF46 and many online tutorials. Here we select the standard LSTM with pre-trained word embedding as input, and add one fully-connected layer with sigmoid activation top of the LSTM encoder (same as all other models), denoted as T-LSTM. LSTM with emoji embedding: We consider the emoji as one special word and input both pre-trained text and emoji embeddings into the same LSTM network, namely E-LSTM. Similarly, we concatenate the pre-trained bi-sense emoji embedding as one special word to feed into the LSTM network. This model is called BiE-LSTM. Attention-based LSTM with emojis:We also use the word-emoji embedding to calculate the emoji-word attention following Equation EQREF20 and EQREF21 , and the only difference is that we replace the attention-derived senti-emoji embedding with the pre-trained word-emoji embedding by fasttext, denoted as ATT-E-LSTM. LSTM with bi-sense emoji embedding (proposed): As we have introduced in Section SECREF13 , we propose two attention-based LSTM networks based on bi-sense emoji embedding, denoted as MATT-BiE-LSTM and WATT-BiE-LSTM. Evaluation We evaluate the baseline and proposed models on sentiment analysis by F1 scores and accuracies based on the auto-annotated testing set (AA-Sentiment) and human-annotated testing set (HA-Sentiment), as shown in Table TABREF25 . We only test the models after fine-tuning with a subset of the samples with human annotations because training exclusively on the samples with auto-generated weak labels results in relatively poor performances when tested with human annotated data indicating the models after fine-tuning are more robust. The F1 scores and accuracies are overall higher with the AA-Sentiment than the results with HA-sentiment, indicating that the HA-Sentiment is a more challenging task and the sentiments involved are more difficult to identify supported by their relatively lower sentiment scores returned from Vader. We still, however, observe competitive results from HA-Sentiment showing that the models are well-trained and robust to noisy labels with the help of fine-tuning with human annotated data. The T-LSTM baseline achieves decent performance in both experiments with accuracies of 86.6% and 70.7% showing that LSTM is an effective encoder for sentiment analysis as suggested by the references. The models with proposed bi-sense emoji embedding obtain accuracies over 82.4% and we observe improvements on the performance with the attention-based LSTM from our proposed model MATT-BiE-LSTM and WATT-BiE-LSTM, which is consistent with that ATT-E-LSTM ([email protected]%, [email protected]% on HA-Sentiment) outperforms significantly T-LSTM and E-LSTM. Emoji information is useful in sentiment analysis. Most models outperforms the baseline T-LSTM in both dataset suggesting that the emoji information is useful for sentiment analysis as a complement to the textual contents, even with the naive use of emoji embeddings (E-LSTM) when tested with HA-Sentiment. We observe that E-LSTM obtains similar performance to T-LSTM with AA-Sentiment but a significant gain over the T-LSTM when tested with HA-Sentiment indicating that sentiment information is helpful and necessary when the hidden sentiment is relatively subtle and the task is more challenging. Bi-sense emoji embedding helps. 
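The reported F1 scores and accuracies on the AA-Sentiment and HA-Sentiment test sets can be computed with scikit-learn as in the short sketch below; a 0.5 decision threshold on the predicted probability is assumed.

```python
from sklearn.metrics import accuracy_score, f1_score

def evaluate(predict_proba, test_sets, threshold=0.5):
    """predict_proba maps a tweet to a positive-sentiment probability;
    test_sets is e.g. {"AA-Sentiment": (tweets, labels), "HA-Sentiment": (tweets, labels)}."""
    results = {}
    for name, (tweets, labels) in test_sets.items():
        preds = [int(predict_proba(t) >= threshold) for t in tweets]
        results[name] = {"accuracy": accuracy_score(labels, preds),
                         "f1": f1_score(labels, preds)}
    return results
```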
All the models using bi-sense emoji embedding perform significantly better than the baseline models without emoji feature or with word-emoji embedding. BiE-LSTM outperforms T-LSTM and E-LSTM significantly with the same utilization of emoji embedding indicates that the proposed bi-sense emoji embedding is capable of extracting more informative and distinguishable vectors over the use of conventional word embedding algorithms, which is consistent based on the comparisons between the proposed models (MATT-BiE-LSTM and WATT-BiE-LSTM) with bi-sense emoji embedding and the baseline model ATT-E-LSTM with word-emoji embedding and attention. Attention mechanism aligns and performs well with bi-sense embedding. MATT-BiE-LSTM and WATT-BiE-LSTM obtain similar performances when tested on both Vader and human annotated samples, though their ways of computing the attention (weights and vectors) are different that WATT computes attention weights and the senti-emoji embeddings guided by each word, and MATT obtains the senti-emoji embedding based on the LSTM encoder on the whole contexts and computes the attention weights of the senti-emoji embedding across all words. Both models outperforms the state-of-the-art baseline models including ATT-E-LSTM. The proposed attention-based LSTM can be further extended to handle tasks involving multi-sense embedding as inputs, such as the word-sense embedding in NLP, by using context-guide attention to self-select how much to attend on each sense of the embeddings each of which correspond to a distinct sense of semantics or sentiments. In this way we are able to take advantage of the more robust and fine-grained embeddings. Emojis and Sentiment Analysis With the overwhelming development of Internet of Things (IOT), the growing accessibility and popularity of subjective contents have provided new opportunities and challenges for sentiment analysis BIBREF33 . For example, social medias such as Twitter and Instagram have been explored because the massive user-generated contents with rich user sentiments BIBREF25 , BIBREF34 , BIBREF35 where emojis (and emoticons) are extensively used. Non-verbal cues of sentiment, such as emoticon which is considered as the previous generation of emoji, has been studied for their sentiment effect before emojis take over BIBREF36 , BIBREF37 , BIBREF38 . For instance, BIBREF36 , BIBREF38 pre-define sentiment labels to emoticons and construct a emoticon-sentiment dictionary. BIBREF37 applies emoticons for smoothing noisy sentiment labels. Similar work from BIBREF39 first considers emoji as a component in extracting the lexical feature for further sentiment analysis. BIBREF40 constructs an emoji sentiment ranking based on the occurrences of emojis and the human-annotated sentiments of the corresponding tweets where each emoji is assigned with a sentiment score from negative to positive , similar to the SentiWordNet BIBREF41 . However, the relatively intuitive use of emojis by lexical- and dictionary-based approaches lacks insightful understanding of the complexed semantics of emojis. Therefore, inspired by the success of word semantic embedding algorithms such as BIBREF28 , BIBREF16 , BIBREF7 obtains semantic embeddings of each emoji by averaging the words from its descriptions and shows it is effective to take advantage of the emoji embedding for the task of Twitter sentiment analysis. 
BIBREF8 proposes a convoluntional neural network to predict the emoji occurrence and jointly learns the emoji embedding via a matching layer based on cosine similarities. Despite the growing popularity of Twitter sentiment analysis, there is a limited number of emoji datasets with sentiment labels available because previous studies usually filter out urls, emojis and sometimes emoticons. However, BIBREF9 shows that it is effective to extract sentiment information from emojis for emotion classification and sarcasm detection tasks in the absence of learning vector-based emoji representations by pre-training a deep neural network to predict the emoji occurrence. Methodology We propose two mechanisms, namely Word-guide Attention-based LSTM and Multi-level Attention-based LSTM, to take advantage of bi-sense emoji embedding for the sentiment analysis task. The frameworks of these two methods are shown in Figure FIGREF10 and Figure FIGREF19 , respectively. Our workflow includes the following steps: initialization of bi-sense emoji embedding, generating senti-emoji embedding based on self-selected attention, and sentiment classification via the proposed attention-based LSTM networks. Bi-sense Embedding Recent research shows great success in word embedding task such as word2vec and fasttext BIBREF27 , BIBREF16 . We use fasttext to initialize emoji embeddings by considering each emoji as a special word, together with word embeddings. The catch is, different from conventional approaches where each emoji responds to one embedding vector (as we call word-emoji embedding), we embed each emoji into two distinct vectors (bi-sense emoji embedding): we first assign two distinct tokens to each emoji, of which one is for the particular emoji used in positive sentimental contexts and the other one is for this emoji used in negative sentimental contexts (text sentiment initialized by Vader BIBREF17 , details will be discussed in Section SECREF23 ), respectively; the same fasttext training process is used to embed each token into a distinct vector, and we thus obtain the positive-sense and negative-sense embeddings for each emoji. The word2vec is based on the skip-gram model whose objective is to maximize the log likelihood calculated by summing the probabilities of current word occurrences given a set of the surrounding words. The fasttext model is different by formatting the problem as a binary classification task to predict the occurrence of each context word, with negative samples being randomly selected from the absent context words. Given an input word sequence INLINEFORM0 , and the context word set INLINEFORM1 and the set of negative word samples INLINEFORM2 of the current word INLINEFORM3 , the objective function is obtained based on binary logistic loss as in Equation EQREF12 : DISPLAYFORM0 where INLINEFORM0 denotes the logistic loss of the score function INLINEFORM1 which is computed by summing up the scalar products between the n-gram embeddings of the current word and the context word embedding which is different from word2vec where the score is the scalar product between the current word and the context word embedding. We select fasttext over word2vec mainly because its computational efficiency. In general, the two models yield competitive performances and the comparison between word embeddings is beyond our discussion. Therefore we only show the results derived by the fasttext initialization within the scope of this work. 
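To make the binary logistic objective in Equation EQREF12 concrete, the following NumPy sketch computes the fasttext-style loss for one (current word, context word) pair with negative samples; the vector dimensions and the random inputs are arbitrary placeholders.

```python
import numpy as np

def logistic_loss(x):
    return np.log1p(np.exp(-x))             # log(1 + exp(-x))

def fasttext_pair_loss(ngram_vecs, context_vec, negative_vecs):
    """Loss for one (current word, context word) pair with negative sampling.
    ngram_vecs:    (n_grams, dim) embeddings of the current word's n-grams
    context_vec:   (dim,) embedding of the true context word
    negative_vecs: (k, dim) embeddings of k sampled negative words"""
    w = ngram_vecs.sum(axis=0)               # score = sum of scalar products
    loss = logistic_loss(w @ context_vec)    # attract the true context word
    for neg in negative_vecs:
        loss += logistic_loss(-(w @ neg))    # repel each negative sample
    return loss

rng = np.random.default_rng(0)
print(fasttext_pair_loss(rng.normal(size=(4, 50)),
                         rng.normal(size=50),
                         rng.normal(size=(5, 50))))
```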
Word-guide Attention-based LSTM Long short-term memory (LSTM) units have been extensively used to encode textual contents. The basic encoder model consists of a text embedding layer, LSTMs layer, and fully-connected layers for further tasks such as text classifications based on the encoded feature. The operations in an LSTM unit for time step INLINEFORM0 is formulated in Equation EQREF14 : DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 represent the current and previous hidden states, INLINEFORM2 denotes the current LSTM input and here we use the embedding INLINEFORM3 of the current word INLINEFORM4 , and INLINEFORM5 and INLINEFORM6 denote the weight matrices BIBREF42 . In order to take advantage of the bi-sense emoji embedding, we modify the input layer into the LSTM units. We first obtain the senti-emoji embedding as an weighted average of the bi-sense emoji embedding based on the self-selected attention mechanism. Let INLINEFORM7 represent the INLINEFORM8 -th sense embedding of emoji INLINEFORM9 ( INLINEFORM10 in our bi-sense embedding), and INLINEFORM11 denote the attention function conditioned on the current word embedding, the attention weight INLINEFORM12 and senti-emoji embedding vector INLINEFORM13 is formulated as follows: DISPLAYFORM0 We choose a fully-connected layer with ReLU activation as the attention function, and the attention vector INLINEFORM0 is concatenated with the word embedding as the new input of the LSTM. Thus the input vector INLINEFORM1 in Equation EQREF14 becomes INLINEFORM2 . The output of the final LSTM unit is then fed into a fully-connected layer with INLINEFORM3 activation to output the tweet sentiment and binary cross-entropy loss is used as the objection function (Equation EQREF16 ) where INLINEFORM4 is the total number of samples. The motivation behind this model is that each context word guides the attention weights in order to enforce the model to self-select which embedding sense it should attend on. Therefore we denote this model as the Word-guide Attention-based LSTM with Bi-sense emoji embedding (WATT-BiE-LSTM). DISPLAYFORM0 Multi-level Attention-based LSTM There is another way of formulating the attention mechanism where the attention weights indicate how the image information (which is emoji in our case) is distributed through the context words as proposed in BIBREF43 , BIBREF44 . The modified senti-emoji embedding vector INLINEFORM0 is thus at the tweet-level instead of the word-level in Equation EQREF15 by replacing the INLINEFORM1 with the final state vector INLINEFORM2 outputted from the last LSTM unit, as shown in Equation EQREF18 : DISPLAYFORM0 The derived senti-emoji embedding INLINEFORM0 is then used to calculate an additional layer of attention following BIBREF43 , BIBREF44 . Given the input tweet sequence INLINEFORM1 , the attention weight INLINEFORM2 conditioned on the senti-emoji embedding is formulated as follows: DISPLAYFORM0 Therefore, we construct the new input INLINEFORM0 to each LSTM unit by concatenating the original word embedding and the attention vector in Equation EQREF21 to distribute the senti-emoji information to each step. This model is called Multi-level Attention-based LSTM with Bi-sense Emoji Embedding (MATT-BiE-LSTM). We choose the same binary cross-entropy as the loss function with the same network configuration with WATT-BiE-LSTM. 
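Putting the pieces together, a plausible training loop under the protocol described here (pre-training on the auto-annotated tweets, then fine-tuning on the 60% human-annotated split) might look as follows; the optimizer, learning rate, batch size, epoch counts, and the stand-in datasets are all assumptions, and MATTBiELSTM refers to the earlier sketch.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def toy_dataset(n, seq_len=20, word_dim=300, emoji_dim=300):
    # stand-in for the real tweet datasets: (word embeddings, sense embeddings, label)
    return TensorDataset(torch.randn(n, seq_len, word_dim),
                         torch.randn(n, 2, emoji_dim),
                         torch.randint(0, 2, (n,)))

def run_epochs(model, loader, optimizer, loss_fn, epochs):
    model.train()
    for _ in range(epochs):
        for word_emb, sense_embs, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(word_emb, sense_embs), labels.float())
            loss.backward()
            optimizer.step()

model = MATTBiELSTM()                               # from the earlier sketch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.BCELoss()

# pre-train on weakly (auto-)annotated tweets, then fine-tune on the human-annotated split
run_epochs(model, DataLoader(toy_dataset(256), batch_size=64, shuffle=True),
           optimizer, loss_fn, epochs=5)
run_epochs(model, DataLoader(toy_dataset(64), batch_size=64, shuffle=True),
           optimizer, loss_fn, epochs=3)
```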
DISPLAYFORM0 Data Collection and Annotation Data Collection We construct our own Twitter sentiment dataset by crawling tweets through the REST API which consists of 350,000 users and is magnitude larger comparing to previous work. We collect up to 3,200 tweets from each user and follow the standard tweet preprocessing procedures to remove the tweets without emojis and tweets containing less than ten words, and contents including the urls, mentions, and emails. Data Annotation For acquiring the sentiment annotations, we first use Vader which is a rule-based sentiment analysis algorithm BIBREF17 for text tweets only to generate weak sentiment labels. The algorithm outputs sentiment scores ranging from -1 (negative) to 1 (positive) with neutral in the middle. We consider the sentiment analysis as a binary classification problem (positive sentiment and negative sentiment), we filter out samples with weak prediction scores within INLINEFORM0 and keep the tweets with strong sentiment signals. Emoji occurrences are calculated separately for positive tweets and negative tweets, and threshold is set to 2,000 to further filter out emojis which are less frequently used in at least one type of sentimental text. In the end, we have constructed a dataset with 1,492,065 tweets and 55 frequently used emojis in total. For the tweets with an absolute sentiment score over 0.70, we keep the auto-generated sentiment label as ground truth because the automatic annotation is reliable with high sentiment scores. On the other hand, we select a subset of the tweets with absolute sentiment scores between INLINEFORM1 for manual labeling by randomly sampling, following the distribution of emoji occurrences where each tweet is labeled by two graduate students. Tweets are discarded if the two annotations disagree with each other or they are labeled as neutral. In the end, we have obtained 4,183 manually labeled tweets among which 60% are used for fine-tuning and 40% are used for testing purposes. The remainder of the tweets with automatic annotations are divided into three sets: 60% are used for pre-training the bi-sense and conventional emoji embedding, 10% for validation and 30% are for testing. We do not include a “neutral” class because it is difficult to obtain valid neutral samples. For auto-generated labels, the neutrals are the samples with low absolute confidence scores and their sentiments are more likely to be model failures other than “true neutrals”. Moreover, based on the human annotations, most of the tweets with emojis convey non-neutral sentiment and only few neutral samples are observed during the manual labeling which are excluded from the manually labeled subset. In order to valid our motivation that emojis are also extensively used in tweets that contain contradictory information to the emoji sentiments, we calculate the emoji usage in Table TABREF22 according to the sentiment labels where Pos-Ratio means the percentage of each emoji occurs in the positive tweets over its total number of occurrences, AA and HA indicate automatic-annotation and human-annotation, respectively. We present the top-10 most frequently used emojis in our dataset and observe a slight difference in the Pos-Ratios between AA and HA dataset because of the randomness involved in the sampling process. Results from both of the datasets show a fair amount of emoji use in both positive and negative tweets. 
For example, it is interesting to notice that emoji () occurs more in the positive tweets with the automatic annotations, while emojis with strong positive sentiment have also been used in negative tweets with about 5% occurrences, such as (, , and ). Given that the averaged positive ratio among all emojis in the whole dataset is about 74% and that most emojis have been extensively used in tweets containing both positive and negative sentiments, it suggests that distinguishing the emoji occurrences in both sentiments via bi-sense embedding is worth investigating. Additionally, we observe that the Pos-Ratios of the AA-sentiment and HA-sentiment differ slightly, which is due to two main reasons: 1) Some tweets we sampled to construct the HA-sentiment are discarded because the annotators have disagreements and we only keep the samples that we are confident about; 2) Tweets with absolute sentiment scores between (0.60,0.70) are selected for manual labeling as discussed in Section SECREF23 , which are lower than the tweets used to construct the AA-sentiment (0.7 and above). The lower sentiment scores indicate that Vader is less reliable on the samples of the HA-sentiment dataset and the sentiments of these tweets are more likely to be affected by emojis. Qualitative Analysis In order to obtain insights about why the more fine-grained bi-sense emoji embedding helps in understanding the complex sentiments behind tweets, we visualize the attention weights for ATT-E-LSTM and MATT-BiE-LSTM for comparison. The example tweets with corresponding attention weights calculated by word-emoji embedding and senti-emoji embedding are shown in Figure FIGREF27 , where the contexts are presented in the captions. The emojis used are , , and , respectively. In Figure FIGREF27 (a), the ATT-E-LSTM model (baseline) assigns relatively more weight to the words “no” and “pressure”, while MATT-BiE-LSTM attends mostly to the words “happy” and “lovely”. The different attention distributions suggest that the proposed senti-emoji embedding is capable of recognizing words with strong sentiments that are closely related to the true sentiment even in the presence of words with conflicting sentiments, such as “pressure” and “happy”, while ATT-E-LSTM tends to pick up all sentimental words, which could raise confusion. The senti-emoji embedding is capable of extracting representations of complex semantics and sentiments which help guide the attention even in cases when the word sentiment and emoji sentiment are somewhat contradictory to each other. From Figure FIGREF27 (b) and (c) we can observe that ATT-E-LSTM assigns more weight than MATT-BiE-LSTM to sentiment-irrelevant words such as “hoodies”, “wait” and “after”, indicating that the proposed model is more robust to irrelevant words and concentrates better on important words. Because of the senti-emoji embedding obtained through bi-sense emoji embedding and the sentence-level LSTM encoding of the text input (described in Section SECREF13 ), we are able to construct a more robust embedding based on the semantic and sentiment information from the whole context, compared to the word-emoji embedding used in ATT-E-LSTM which takes only word-level information into account. Bi-sense Emoji Embedding Visualization To gain further insights into the bi-sense emoji embedding, we use t-SNE BIBREF47 to project the high-dimensional bi-sense embedding vectors into a two-dimensional space while preserving the relative distances between the embedding vectors.
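A small sketch of that projection step, assuming scikit-learn's t-SNE and the bi-sense fasttext model from the initialization sketch earlier; the perplexity value and token naming are arbitrary choices.

import numpy as np
from sklearn.manifold import TSNE

def project_bi_sense(model, emojis, perplexity=10):
    """Project pos-sense, neg-sense, and their difference vectors into 2-D for plotting."""
    pos = np.stack([model.wv[e + "_pos"] for e in emojis])
    neg = np.stack([model.wv[e + "_neg"] for e in emojis])
    diff = pos - neg                       # semantic shift between the two usages
    stacked = np.vstack([pos, neg, diff])
    coords = TSNE(n_components=2, perplexity=perplexity,
                  init="pca", random_state=0).fit_transform(stacked)
    n = len(emojis)
    return coords[:n], coords[n:2 * n], coords[2 * n:]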
In Figure FIGREF28 we visualize the bi-sense emoji embedding, the positive-sense embedding, the negative-sense embedding and the subtraction between the positive- and negative-sense embeddings of each emoji, respectively. The difference between an emoji's two sense embeddings indicates the semantic difference between its usages in positive and negative sentimental contexts, similar to the objective of word embeddings BIBREF28 . The positive senses of emojis ( and ), and the negative senses of emojis (, and ), are embedded far from the two main clusters as observed in Figure FIGREF28 (a), suggesting that the semantics of these emojis are different from the other popular emojis. The positive-sense and negative-sense embeddings are clustered well, with no intersection with each other. This observation supports our objective of applying bi-sense emoji embedding because there exist significant differences in the semantics of each emoji when it appears in positive and negative sentimental contexts, and it is well-motivated to consider the emoji usages individually according to the sentiment of the contexts in order to extract the more fine-grained bi-sense embedding. Additionally, we observe consistent patterns in Figure FIGREF28 (b), (c) and (d), where the sentiments conveyed by the emojis become an important factor. For example, emojis with positive sentiment such as (, and ), and emojis with negative sentiment such as (, and ), are embedded into one cluster in both the positive-sense and the negative-sense space. The embedding subtractions of emojis in Figure FIGREF28 (d) show that the differences in emoji usage across sentiments are similar between emojis and preserve the cluster patterns observed in Figure FIGREF28 (b) and (c). Conclusions In this paper, we present a novel approach to the task of sentiment analysis and achieve state-of-the-art performance. Different from previous work, our method combines a more robust and fine-grained bi-sense emoji embedding that effectively represents complex semantic and sentiment information with attention-based LSTM networks that selectively attend to the correlated sense of the emoji embeddings and seamlessly fuse the obtained senti-emoji embeddings with the word embeddings for a better understanding of the rich semantics and sentiments involved. In the future, we plan to further extend our attention-based LSTM with bi-embedding framework to tackle tasks involving multi-sense embedding, such as the learning and applications of word-sense embedding. Acknowledgement We gratefully acknowledge the support of New York State through the Goergen Institute for Data Science, and NSF Award #1704309.
previous emoji embedding methods fail to handle the situation when the semantics or sentiments of the learned emoji embeddings contradict the information from the corresponding contexts BIBREF5 , or when the emojis convey multiple senses of semantics and sentiments
fb427239c8d44f524a6c1bf1ce5c3383d5c33e52
fb427239c8d44f524a6c1bf1ce5c3383d5c33e52_0
Q: How many parameters does the model have? Text: Introduction There has been a recent surge of improvements in language modeling, powered by the introduction of the transformer architecture BIBREF0. These gains stem from the ability of the transformer self-attention mechanism to better model long context (as compared to RNN networks), spanning hundreds of characters BIBREF1 or words BIBREF2, BIBREF3. These approaches consider language modeling as a classification problem with the aim of predicting the next token given a fixed-size preceding context. To support variable-length context, BIBREF4 adds recurrence to a transformer model, improving the state-of-the-art further. Current word-based language models (LMs) depend on a series of preprocessing steps that include lowercasing, tokenization, normalization, and out-of-vocabulary handling. This preprocessing stage is language dependent and can add significant complexity to real applications. As such, it is appealing to shift to more general LMs that process raw text at the character level. Processing language at the character level allows us to model morphological variants of a word, assign reasonable likelihood to out-of-vocabulary words, and learn subword-level language abstractions. This open vocabulary modeling is quite important for languages with complex morphology such as Arabic, Turkish, or Finnish BIBREF5, BIBREF6, BIBREF7. While character- and word-based LMs have both improved in their performance over time, purely character-based LMs have continued to lag in performance compared to models that leverage a tokenizer. BIBREF1 report inferior performance from character-level modeling on a large scale word-level benchmark, lm1b BIBREF8. Similarly, BIBREF3 observe that a character-level LM is harder to train to competitive performance on their huge WebText corpus, as compared with subword segmentation using byte pair encoding (BPE) BIBREF9, BIBREF10. Sub-word tokenization approaches like BPE represent a middle ground for text segmentation. On one hand, they can help with better modeling open vocabulary. On the other hand, they still depend on a tokenizer, adding complexity to the final system. Moreover, the preprocessing stage is not jointly optimized with learning the task objective. This last point is especially relevant given that LMs are increasingly used for their ability to produce pretrained representations that will be fine-tuned for a downstream task BIBREF11, BIBREF12, BIBREF13, BIBREF14. Since word-based LMs use closed vocabulary and sub-word models adopt a segmentation that targets the pretraining corpus, there is little space to adapt the vocabulary or optimize the segmentation to fit the final task data distribution. The rest of this paper is organized as follows. In Section SECREF2, we describe our model architecture, which is a vanilla deep transformer byte-level LM. Section SECREF3 describes the lm1b dataset and our evaluation methodology. Section SECREF4 presents our results and how our model compares to the previous work. In Section SECREF5 we analyze the representations learned by the network at different depths using word-similarity benchmarks. For this analysis to be feasible we propose a strategy to extract word representations from a character model. To summarize our contributions: We develop a competitive tokenizer-free language model on a large scalable dataset. We probe the performance of our model's learned intermediate representations on word similarity tasks. 
Modeling Language models (LMs) assign a probability distribution over a sequence $x_{0:t}$ by factoring out the joint probability from left to right as follows Instead of reading in the tokenized input text, our model reads raw utf-8 bytes. For English text in the ASCII range, this is equivalent to processing characters as individual tokens. Non-ASCII characters (e.g. accented characters, or non-Latin scripts) are typically two or three utf-8 bytes. We use a standard “transformer decoder” (a stack of transformer layers with a causal attention mask) to process the sequence $x_{0:i-1}$ and predict the following byte $x_i$. The model's prediction is an estimate of the probability distribution over all possible 256 byte values. Our input byte embedding matrix has dimensionality 256. Our byte-level transformer model has 40 standard transformer layers with hidden size 1024, filter size 8192, and 16 heads. The model has around 836M parameters, of which only 66K are byte embeddings. Modeling ::: Training We sample random byte sequences of length 512. This sampling process does not respect the sentence boundary. Therefore, one example might span complete and partial sentences. We dropout both timesteps of self-attention layers and features of relu activations across timesteps with a probability of 0.3. We use the Adam optimizer BIBREF15 with initial learning rate $10^{-4}$ and batch size 1024. The training runs for two million steps, and at every 10,000 steps we decay the learning rate geometrically by 0.99. Modeling ::: Windowed Prediction To score each byte prediction, we need to process an entire 512-byte context from scratch, which is computationally intensive. To speed up development, for each window of context size $c$, we score $(\text{stride}=c/2)$ characters in parallel (the second half of the window). This leads to a tractable running time for our development evaluation process. While this setup is sub-optimal for our model, we did not observe any significant regression in our metrics. For example, the final bits/byte value of 0.874055 ($\text{stride}=1$) only grows to 0.87413 with $\text{stride}=256$. Our final test evaluation is reported with $\text{stride} = 1$. Experimental Setup There are no large scale datasets that are heavily studied for both word and character language modeling. Typically, a specific dataset will be considered under just one level of segmentation. For our efforts to be comparable with the literature, we use a word LM dataset. This puts our model at a disadvantage; the dataset is tokenized and our model will not utilize the given word boundary information. Our approach is able to model rare words and estimate their appropriate likelihoods, however, they have been replaced with a special token to produce closed vocabulary text that is appropriate for word-level modeling. Hence, the metrics we report are meant to provide a lower bound on the utility of our approach in realistic settings. Experimental Setup ::: LM1B We use the One Billion Word benchmark BIBREF8 to compare LM performance. The dataset consists of shuffled short sentences, and doesn't require modeling long contexts (95% of the training sentences are under 256 bytes and over 99.78% are under 512 bytes). The corpus is tokenized, and a small percentage of rare words are replaced with UNK tokens. The data gets split into 100 shards, and the first one (00) is held out while the rest (01-99) are used for training. 
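As a side illustration of the Windowed Prediction scheme described above, the following sketch scores a long byte sequence in overlapping 512-byte windows while keeping only the predictions for the newest 256 bytes of each window; window_log2_probs is a hypothetical stand-in for the trained byte-level transformer's scoring routine, not a real API.

def bits_per_byte(byte_ids, window_log2_probs, window=512, stride=256):
    """Development-time windowed scoring with stride = window / 2, as described above.
    window_log2_probs(win) is assumed to return one log2 p(byte_i | earlier bytes of win)
    per position of the list win."""
    total_bits, scored = 0.0, 0
    for start in range(0, len(byte_ids), stride):
        new = byte_ids[start:start + stride]                      # bytes scored in this step
        ctx = byte_ids[max(0, start + len(new) - window):start]   # preceding context
        log_probs = window_log2_probs(ctx + new)
        for lp in log_probs[-len(new):]:                          # keep only the new positions
            total_bits -= lp
            scored += 1
    return total_bits / scored                                    # bits per byte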
The holdout set is split again into 50 shards, and historically shard 00 of the holdout has been used as the test set. There is no standard dev set, so we use shard 01 of the holdout as dev. See the corpus statistics in Table TABREF6 for details. Experimental Setup ::: Metrics Word LMs typically report their results in terms of perplexity per word $(ppl)$ while byte LMs report their results in bits per byte $(bpb)$. We report both metrics to make our results more accessible. Conversion between those metrics are based on the following observation: The amount of information in the test dataset is the same independent of segmentation. where $I(x)$ is the information contained in $x$, which is $- \log _2 P(x; model)$. Equation DISPLAY_FORM10 allows us to convert bits/word to bits/byte. Then straightforwardly, using Equation DISPLAY_FORM11 we can convert $bpb$ to $ppl$: We train our model to minimize $bpb$ over the training set and convert $bpb$ on the test set to $ppl$ for comparison. For the test dataset, we use the $|words|$ and $|bytes|$ values reported in Table TABREF6. Results and Discussion Table TABREF7 shows the perplexity of several models on lm1b. We observe that tokenizer-free LM performance improves significantly (40.6 to 23.0) when the model capacity is increased from 0.2B to 0.8B parameters. With sufficient capacity our byte-level LM is competitive with word based models (ranging from 21.8 to 28.0). Note, our model is able to achieve comparable performance without any explicit signal of word boundaries. Because of the large symbol space that word-based LMs address, they rely on sparse operations running on heterogeneous devices to run efficiently (e.g. running sparse embedding lookups on CPU as opposed to GPU/TPU). By contrast, byte LMs are dense, and all operations can be executed on specialized accelerators efficiently. We expect that with advances in accelerated hardware, byte-level text processing will become a popular choice. Of all the baseline models we reference, only BIBREF4 uses recurrence to model arbitrary length history. This technique could be added to tokenizer-free models as well. Indeed, we expect this approach to be particularly well-suited to byte and character models where text gets mapped onto longer token sequences, as BIBREF4 show that adding recurrence increases the length of context their model can effectively use. Extracting Word Representations In this section, we test our model's ability to produce meaningful word-level representations. We investigate this by feeding the model single words, and evaluating its intermediate activations on word similarity tasks. Since our model is trained to predict each individual character, activations within a word only have partial information about that word. To get a word representation, we append an empty space character at the end of the input word. The activation at the space position from the transformer's feed-forward layer takes all characters into account, given the causal attention. To predict what follows the space, the model must have a good understanding of the preceding word, so this activation can be used as a proxy for a word representation. To evaluate our extracted word representations, we use the word similarity tasks described in Swivel BIBREF16. Following their evaluation methodology, we score word pairs using cosine similarity, and then measure the correlation with human ratings using Spearman's $\rho $. 
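A short sketch of that scoring step, assuming word vectors have already been extracted from a chosen layer of the model as described above; scipy's Spearman implementation is used for the correlation.

import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def word_similarity_rho(pairs, human_ratings, word_vectors):
    """pairs: list of (w1, w2); human_ratings: parallel gold similarity scores;
    word_vectors: dict mapping a word to its extracted representation."""
    model_scores = [cosine(word_vectors[w1], word_vectors[w2]) for w1, w2 in pairs]
    rho, _ = spearmanr(model_scores, human_ratings)
    return rho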
We do not expect these results to be competitive, given that our model is never trained to represent words. Moreover, the Swivel model is trained on a combination of Wikipedia and the Gigaword5 corpus BIBREF17 which is composed of 3.3 billion lowercased words with discarded punctuation. They discard out-of-vocabulary words for evaluation, while we use all word pairs in the benchmark. Nevertheless, this evaluation is valuable for comparing the relative quality of representation across different layers. Figure FIGREF12 shows Spearman's $\rho $ across different layers of the model. We observe two main phases of performance. In the first phrase (layers 1-10), all task metrics improve with depth. In the second phase (layers 11-40), performance either plateaus or degrades slightly with depth. We suspect that the earlier layers learn general-purpose features which are linguistically relevant, while the final layers fine-tune specifically to the task of next character prediction. Interestingly, the Rare Word and SimLex999 datasets do not follow this paradigm. Their performance drops between layers 4-6, but picks up again and improves with depth (layers 6-40). We hypothesize that the model may be storing words at different depths according to their frequency. It would be interesting to investigate to what degree the improved performance of deeper LMs is due to better modeling of rare words/phrases. Table TABREF13 shows the best performance of our model across all layers compared to the state-of-the-art model on word similarity. The gap here is a reminder that work remains to be done on improving methods for extracting word representations from character models. Conclusion We show that a tokenizer-free language model with sufficient capacity can achieve results that are competitive with word-based LMs. Our model reads raw byte-level input without the use of any text preprocessing. As such, the model has no direct access to word boundary information. Finally, we show that our model's intermediate representations capture word-level semantic similarity and relatedness across layers.
model has around 836M parameters
7c45c6e5db6cfca2d6de8751e28403b35420ae38
7c45c6e5db6cfca2d6de8751e28403b35420ae38_0
Q: How many characters are accepted as input of the language model? Text: Introduction There has been a recent surge of improvements in language modeling, powered by the introduction of the transformer architecture BIBREF0. These gains stem from the ability of the transformer self-attention mechanism to better model long context (as compared to RNN networks), spanning hundreds of characters BIBREF1 or words BIBREF2, BIBREF3. These approaches consider language modeling as a classification problem with the aim of predicting the next token given a fixed-size preceding context. To support variable-length context, BIBREF4 adds recurrence to a transformer model, improving the state-of-the-art further. Current word-based language models (LMs) depend on a series of preprocessing steps that include lowercasing, tokenization, normalization, and out-of-vocabulary handling. This preprocessing stage is language dependent and can add significant complexity to real applications. As such, it is appealing to shift to more general LMs that process raw text at the character level. Processing language at the character level allows us to model morphological variants of a word, assign reasonable likelihood to out-of-vocabulary words, and learn subword-level language abstractions. This open vocabulary modeling is quite important for languages with complex morphology such as Arabic, Turkish, or Finnish BIBREF5, BIBREF6, BIBREF7. While character- and word-based LMs have both improved in their performance over time, purely character-based LMs have continued to lag in performance compared to models that leverage a tokenizer. BIBREF1 report inferior performance from character-level modeling on a large scale word-level benchmark, lm1b BIBREF8. Similarly, BIBREF3 observe that a character-level LM is harder to train to competitive performance on their huge WebText corpus, as compared with subword segmentation using byte pair encoding (BPE) BIBREF9, BIBREF10. Sub-word tokenization approaches like BPE represent a middle ground for text segmentation. On one hand, they can help with better modeling open vocabulary. On the other hand, they still depend on a tokenizer, adding complexity to the final system. Moreover, the preprocessing stage is not jointly optimized with learning the task objective. This last point is especially relevant given that LMs are increasingly used for their ability to produce pretrained representations that will be fine-tuned for a downstream task BIBREF11, BIBREF12, BIBREF13, BIBREF14. Since word-based LMs use closed vocabulary and sub-word models adopt a segmentation that targets the pretraining corpus, there is little space to adapt the vocabulary or optimize the segmentation to fit the final task data distribution. The rest of this paper is organized as follows. In Section SECREF2, we describe our model architecture, which is a vanilla deep transformer byte-level LM. Section SECREF3 describes the lm1b dataset and our evaluation methodology. Section SECREF4 presents our results and how our model compares to the previous work. In Section SECREF5 we analyze the representations learned by the network at different depths using word-similarity benchmarks. For this analysis to be feasible we propose a strategy to extract word representations from a character model. To summarize our contributions: We develop a competitive tokenizer-free language model on a large scalable dataset. We probe the performance of our model's learned intermediate representations on word similarity tasks. 
Modeling Language models (LMs) assign a probability distribution over a sequence $x_{0:t}$ by factoring out the joint probability from left to right as follows Instead of reading in the tokenized input text, our model reads raw utf-8 bytes. For English text in the ASCII range, this is equivalent to processing characters as individual tokens. Non-ASCII characters (e.g. accented characters, or non-Latin scripts) are typically two or three utf-8 bytes. We use a standard “transformer decoder” (a stack of transformer layers with a causal attention mask) to process the sequence $x_{0:i-1}$ and predict the following byte $x_i$. The model's prediction is an estimate of the probability distribution over all possible 256 byte values. Our input byte embedding matrix has dimensionality 256. Our byte-level transformer model has 40 standard transformer layers with hidden size 1024, filter size 8192, and 16 heads. The model has around 836M parameters, of which only 66K are byte embeddings. Modeling ::: Training We sample random byte sequences of length 512. This sampling process does not respect the sentence boundary. Therefore, one example might span complete and partial sentences. We dropout both timesteps of self-attention layers and features of relu activations across timesteps with a probability of 0.3. We use the Adam optimizer BIBREF15 with initial learning rate $10^{-4}$ and batch size 1024. The training runs for two million steps, and at every 10,000 steps we decay the learning rate geometrically by 0.99. Modeling ::: Windowed Prediction To score each byte prediction, we need to process an entire 512-byte context from scratch, which is computationally intensive. To speed up development, for each window of context size $c$, we score $(\text{stride}=c/2)$ characters in parallel (the second half of the window). This leads to a tractable running time for our development evaluation process. While this setup is sub-optimal for our model, we did not observe any significant regression in our metrics. For example, the final bits/byte value of 0.874055 ($\text{stride}=1$) only grows to 0.87413 with $\text{stride}=256$. Our final test evaluation is reported with $\text{stride} = 1$. Experimental Setup There are no large scale datasets that are heavily studied for both word and character language modeling. Typically, a specific dataset will be considered under just one level of segmentation. For our efforts to be comparable with the literature, we use a word LM dataset. This puts our model at a disadvantage; the dataset is tokenized and our model will not utilize the given word boundary information. Our approach is able to model rare words and estimate their appropriate likelihoods, however, they have been replaced with a special token to produce closed vocabulary text that is appropriate for word-level modeling. Hence, the metrics we report are meant to provide a lower bound on the utility of our approach in realistic settings. Experimental Setup ::: LM1B We use the One Billion Word benchmark BIBREF8 to compare LM performance. The dataset consists of shuffled short sentences, and doesn't require modeling long contexts (95% of the training sentences are under 256 bytes and over 99.78% are under 512 bytes). The corpus is tokenized, and a small percentage of rare words are replaced with UNK tokens. The data gets split into 100 shards, and the first one (00) is held out while the rest (01-99) are used for training. 
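To make the byte-level input concrete, here is a minimal sketch of the pipeline implied above: text is mapped to raw utf-8 byte values (a vocabulary of exactly 256 IDs) and random 512-byte training examples are cut without respecting sentence boundaries. This is an illustration of the setup, not the authors' data loader.

import random

VOCAB_SIZE = 256   # every utf-8 byte value is its own token
SEQ_LEN = 512      # training examples are random 512-byte spans

def to_byte_ids(text):
    """ASCII characters map to single bytes; other characters expand to 2-3 bytes."""
    return list(text.encode("utf-8"))

def sample_training_example(corpus_bytes):
    """Cut a random 512-byte span; it may start or end mid-sentence."""
    start = random.randrange(0, max(1, len(corpus_bytes) - SEQ_LEN))
    return corpus_bytes[start:start + SEQ_LEN]

assert max(to_byte_ids("résumé")) < VOCAB_SIZE   # all IDs fit the 256-way softmax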
The holdout set is split again into 50 shards, and historically shard 00 of the holdout has been used as the test set. There is no standard dev set, so we use shard 01 of the holdout as dev. See the corpus statistics in Table TABREF6 for details. Experimental Setup ::: Metrics Word LMs typically report their results in terms of perplexity per word $(ppl)$ while byte LMs report their results in bits per byte $(bpb)$. We report both metrics to make our results more accessible. Conversion between those metrics are based on the following observation: The amount of information in the test dataset is the same independent of segmentation. where $I(x)$ is the information contained in $x$, which is $- \log _2 P(x; model)$. Equation DISPLAY_FORM10 allows us to convert bits/word to bits/byte. Then straightforwardly, using Equation DISPLAY_FORM11 we can convert $bpb$ to $ppl$: We train our model to minimize $bpb$ over the training set and convert $bpb$ on the test set to $ppl$ for comparison. For the test dataset, we use the $|words|$ and $|bytes|$ values reported in Table TABREF6. Results and Discussion Table TABREF7 shows the perplexity of several models on lm1b. We observe that tokenizer-free LM performance improves significantly (40.6 to 23.0) when the model capacity is increased from 0.2B to 0.8B parameters. With sufficient capacity our byte-level LM is competitive with word based models (ranging from 21.8 to 28.0). Note, our model is able to achieve comparable performance without any explicit signal of word boundaries. Because of the large symbol space that word-based LMs address, they rely on sparse operations running on heterogeneous devices to run efficiently (e.g. running sparse embedding lookups on CPU as opposed to GPU/TPU). By contrast, byte LMs are dense, and all operations can be executed on specialized accelerators efficiently. We expect that with advances in accelerated hardware, byte-level text processing will become a popular choice. Of all the baseline models we reference, only BIBREF4 uses recurrence to model arbitrary length history. This technique could be added to tokenizer-free models as well. Indeed, we expect this approach to be particularly well-suited to byte and character models where text gets mapped onto longer token sequences, as BIBREF4 show that adding recurrence increases the length of context their model can effectively use. Extracting Word Representations In this section, we test our model's ability to produce meaningful word-level representations. We investigate this by feeding the model single words, and evaluating its intermediate activations on word similarity tasks. Since our model is trained to predict each individual character, activations within a word only have partial information about that word. To get a word representation, we append an empty space character at the end of the input word. The activation at the space position from the transformer's feed-forward layer takes all characters into account, given the causal attention. To predict what follows the space, the model must have a good understanding of the preceding word, so this activation can be used as a proxy for a word representation. To evaluate our extracted word representations, we use the word similarity tasks described in Swivel BIBREF16. Following their evaluation methodology, we score word pairs using cosine similarity, and then measure the correlation with human ratings using Spearman's $\rho $. 
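Since Equations DISPLAY_FORM10 and DISPLAY_FORM11 referenced earlier in this section do not render in this copy, one plausible way to write the metric conversion they describe, starting from the equal-information observation, is:

% I(test) is the same under any segmentation:
% bpb \cdot |\text{bytes}| \;=\; bpw \cdot |\text{words}|
bpw = bpb \cdot \frac{|\text{bytes}|}{|\text{words}|},
\qquad
ppl = 2^{\,bpw} = 2^{\, bpb \cdot |\text{bytes}| / |\text{words}|}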
We do not expect these results to be competitive, given that our model is never trained to represent words. Moreover, the Swivel model is trained on a combination of Wikipedia and the Gigaword5 corpus BIBREF17 which is composed of 3.3 billion lowercased words with discarded punctuation. They discard out-of-vocabulary words for evaluation, while we use all word pairs in the benchmark. Nevertheless, this evaluation is valuable for comparing the relative quality of representation across different layers. Figure FIGREF12 shows Spearman's $\rho $ across different layers of the model. We observe two main phases of performance. In the first phrase (layers 1-10), all task metrics improve with depth. In the second phase (layers 11-40), performance either plateaus or degrades slightly with depth. We suspect that the earlier layers learn general-purpose features which are linguistically relevant, while the final layers fine-tune specifically to the task of next character prediction. Interestingly, the Rare Word and SimLex999 datasets do not follow this paradigm. Their performance drops between layers 4-6, but picks up again and improves with depth (layers 6-40). We hypothesize that the model may be storing words at different depths according to their frequency. It would be interesting to investigate to what degree the improved performance of deeper LMs is due to better modeling of rare words/phrases. Table TABREF13 shows the best performance of our model across all layers compared to the state-of-the-art model on word similarity. The gap here is a reminder that work remains to be done on improving methods for extracting word representations from character models. Conclusion We show that a tokenizer-free language model with sufficient capacity can achieve results that are competitive with word-based LMs. Our model reads raw byte-level input without the use of any text preprocessing. As such, the model has no direct access to word boundary information. Finally, we show that our model's intermediate representations capture word-level semantic similarity and relatedness across layers.
input byte embedding matrix has dimensionality 256
49ea25af6f75e2e96318bad5ecf784ce84e4f76b
49ea25af6f75e2e96318bad5ecf784ce84e4f76b_0
Q: What dataset is used for this task? Text: Introduction From the early days of artificial intelligence, automatically summarizing a text was an interesting task for many researchers. Followed by the advance of the World Wide Web and the advent of concepts such as Social networks, Big Data, and Cloud computing among others, text summarization became a crucial task in many applications BIBREF0, BIBREF1, BIBREF2. For example, it is essential, in many search engines and text retrieval systems to display a portion of each result entry which is representative of the whole text BIBREF3, BIBREF4. It is also becoming essential for managers and the general public to gain the gist of news and articles immediately, in order to save time, while being inundated with information on social media BIBREF5. Researchers have approached this challenge from various perspectives and have obtained some promising results BIBREF6, BIBREF7. However, this area continues to present more research challenges and has a long path to maturity. One method of investigating this challenge, is (supervised) extractive summarization. Extractive implementations use a ranking mechanism and select top-n-ranked sentences as the summary BIBREF8. Sentences of a document are represented as vectors of features. Using summarization corpora, a rank will be assigned to each sentence, based on its presence in several human-written summaries (golden summaries). The system should then learn how to use those features to predict the rank of sentences in any given text. Various machine learning approaches such as regression and classification algorithms are used to perform the ranking task BIBREF9, BIBREF10. As far as our knowledge goes, in all current implementations, sets of sentence vectors of every document are merged together to compose a larger set, which is then passed to the learning model as a matrix. In this approach, the locality of ranks is disregarded. In other words, the rank of sentences is highly relative to the context and document. A sentence might be ranked high in one document while being ranked lower in another. As a result, merging sentences of a whole dataset into a matrix removes document boundaries and a main source of information will be lost. We addressed this issue by taking certain features of documents into account, such as its length, topical category and so on in addition to some new sentence features that also reflect document properties. Thus, more information will be provided to the model, and ranking could be done with respect to local features of the document. Our experiments show that this rectification leads to improvement in both the performance of the learned model and the quality of produced summaries. We also represent a new baseline for the evaluation of extractive text summarizers which can be used to measure the performance of any summarizing method more accurately. The remainder of this paper is organized as follows. (Section SECREF2) reviews related works. (Section SECREF3) presents the proposed method and evaluation measures. (Section SECREF5) discusses how the experiments are set up. The results are discussed in (Section SECREF5), and finally (Section SECREF6) concludes the paper. Related works Text summarization has been widely studied by both academic and enterprise disciplines. Text summarization methods may be classified into different types. Based on input type, there are single-document BIBREF11, BIBREF12 vs multi-document summarization methods BIBREF13, BIBREF14, BIBREF15. 
Based on language, there are mono-lingual, bilingual and multi-lingual methods BIBREF16. There are also “query focused” methods in which a summary relevant to a given query is produced BIBREF17. From the perspective of procedure, however, there are two main approaches: abstractive vs extractive BIBREF18. Abstractive approaches try to generate a new short text based on the concepts understood from the original text BIBREF19. This usually requires a full pass through NLP pipeline and is faced with many complexities and challenges BIBREF20. The abstractive approach relies on linguistic methods to examine and interpret the text in order to find new concepts and expressions. The output is a new shorter text which consists of the most important information from the original text document BIBREF8. Extractive approaches, on the other hand, select a few sentences from the document based on some measures in order to place them in a summary BIBREF8. A broad range of methods has been examined in this approach, including graph-based BIBREF8, BIBREF21, unsupervised BIBREF21, BIBREF22 and supervised (corpus-based) methods BIBREF9, BIBREF23, BIBREF24. In supervised methods, training data is generally needed to select important content from the documents. In these methods, usually, the problem is reduced to a classification or regression problem, and machine learning techniques applied to the dataset of documents and their gold summaries represented by some features. Support Vector Machines (SVM) BIBREF25 and neural networks BIBREF26 are more popular sentence classification algorithms. The key step in extractive summarization is to determine the importance of sentences in the document BIBREF27. Previous studies examine the ordinal position of sentences BIBREF28, BIBREF29, length of sentences BIBREF9, the ratio of nouns, the Ratio of Verbs, Ratio of Adjectives, Ratio of Adverbs BIBREF30, the Ratio of Numerical entities BIBREF31, BIBREF32 and Cue Words BIBREF28. Gupta and Lehal in their survey of text summarization techniques list the following groups of features: content-based, title-based, location-based, length-based, proper noun and upper-case word-based, font-based, specific phrase-based, and features based on sentence similarity to other sentences in a text BIBREF8. Previous studies use different sentence features such as terms from keywords/key phrases, terms from user queries, frequency of words, and position of words/sentences for text summarization BIBREF33. However, in most cases, selection and weighting of features are an important matter of debate. Some works have been carried out with respect to this BIBREF34, but none, to the best of our knowledge, has shown that target attribute is highly related to the scope of the document. It is occasionally mentioned but not included in practice. For instance, Ferreira et al studied various combinations of sentence scoring methods on three types of documents in BIBREF6 and BIBREF31 and concluded that the weight of features varies, dependent on the properties of context: “the effectiveness of sentence scoring methods for automatic extractive text summarization algorithms depends on the kind of text one wants to summarize, the length of documents, the kind of language used, and their structure.”. JY Yeh et al in BIBREF35 utilized a Genetic Algorithm (GA) to find the weight of features for calculating sentence scores. 
However, their following statement implies that performance of weights is generally dependent to genre, that could be seen as a feature of context: “It cannot be guaranteed that the score function whose feature weights are obtained by GA definitely performs well for the test corpus; nevertheless, if the genre of the test corpus is close to that of the training corpus, we can make a prediction that the score function will work well.” BIBREF35. Berenjkoub et al studied the effectiveness of various subsets of features in summarization of distinct sections of scientific papers BIBREF36. They showed that some features work well only in some specific portion of text, for example, on the abstract section, while others perform better on the methodology section. This could be considered to be a consequence of differences in the structure and context of each section. All the above studies imply the significance of document context in ranking. Nevertheless, it has not been given enough attention in the NLP community, and even sometimes is neglected. For instance, authors in BIBREF30 suggest the use of a wide range of various features. Among these, seventeen part-of-speech based sentences features have been introduced, all of which are sentence-normalized, but not document-normalized, i.e. they count the ratio of a syntactic unit e.g. verbs, divided by the number of words in a sentence. Such features do not consider the total number of those units, e.g. verbs, in the whole document. Our work contributes to this line of research and includes document features in the learning and ranking processes. Incorporating Document Features As a way to investigate the need for document features in sentence ranking (as explained in the introduction and related works), we introduced several document-level features and incorporated them in the summarization process. These features are listed under subsection (SECREF4). Although stages of our method do not differ from general supervised extractive summarization, the whole process is explained in order to clarify the method of investigation. Every supervised summarization has two phases. The first is the “Learning Phase”, a corpus of ideal summaries is used to train the system how to rank sentences. The second is the “Summarization Phase”, where the system applies its learning gained from the first phase, in order to rank the sentences of a new given text. A process of selection is then performed to form a summary. Each of these phases has several intricacies which are briefly described in the following sections. Incorporating Document Features ::: Learning Phase The input to this phase is a dataset of documents, each of which is associated with several human-written summaries. The output is a learned model with a good level of accuracy that is able to reliably predict the rank of sentences, in almost the same way that a human may rank them. To accomplish this, it is necessary to first perform normalization and transform various forms of phrases into their canonical form. Then, every text should be tokenized to sentences, and further tokenized to words. Another prerequisite is to remove stop words. The following subtasks should be carried out next. Incorporating Document Features ::: Learning Phase ::: Feature Extraction Foremost, it is necessary to represent each sentence with those features that have the most distinguishing effect on the prediction of the rank. Many features have been examined in the literature. 
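As a sketch of the preprocessing just described (normalization, sentence and word tokenization, stop-word removal), one might use the Hazm toolkit that the experiments section later mentions; the exact function names below are assumed from Hazm's public API and are not necessarily the configuration used in this work.

# Sketch of the preprocessing steps described above, assuming Hazm's API for Persian text.
from hazm import Normalizer, sent_tokenize, word_tokenize, stopwords_list

normalizer = Normalizer()
STOPWORDS = set(stopwords_list())

def preprocess(document_text):
    """Return a list of sentences, each a list of non-stop-word tokens."""
    normalized = normalizer.normalize(document_text)
    sentences = []
    for sentence in sent_tokenize(normalized):
        tokens = [w for w in word_tokenize(sentence) if w not in STOPWORDS]
        sentences.append(tokens)
    return sentences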
We entitle some as “document-aware” because they do implicitly represent some information about a document. However, other features have been used, that say nothing about the document in which they appeared. We call them “document-unaware”. In the previous sections, we argued that this lack of information might be misleading for the system, especially when we train it with sample sentences from different documents. Thus, we modified some document-unaware features and derived new features that cover document properties. We also examined the effect of incorporating explicit features of a document into vectors of its sentences. The following sub-sections describe the features mentioned above in more detail. Incorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Document-unaware Features Ordinal position: It is shown that inclusion of sentence, in summary, is relevant to its position in the document or even in a paragraph. Intuitively, sentences at the beginning or the end of a text are more likely to be included in the summary. Depending on how it is defined, this feature might be document-unaware or not. For example, in BIBREF29 and BIBREF37 it is defined as $\frac{5}{5}$ for the first sentence, $\frac{4}{5}$ for the second, and so on to $\frac{1}{5}$ for fifth and zero for remaining sentences. In another research conducted by Wong et al. BIBREF9, it is defined as $\frac{1}{sentence\ number}$. With such a definition, we may have several sentences, for example, with position=$\frac{1}{5}$ in the training set, these may not have the same sense of position. While a sentence position=$\frac{1}{5}$ means “among the firsts” in a document with 40 sentences, it has a totally different meaning of “in the middle”, in another document containing 10 sentences. Thus, a useful feature formula should involve differences of documents which may change the meaning of information within it. In our experiments, we used the definition of BIBREF9. A document-aware version of position will be introduced in (SECREF6). Length of sentence: the intuition behind this feature is that sentences of too long or too short length are less likely to be included in the summary. Like sentence position, this feature is also subject to the wrong definition that makes it document-unaware. For example, in BIBREF9 it is defined as a number of words in a sentence. Such a definition does not take into account that a sentence with, say 15 words may be considered long if all other sentences of document have fewer words. Another sentence with the same number of words may be regarded as short, because other sentences in that document have more than 15 words. This might occur due to different writing styles. However, we included this in our experiments to compare its effect with that of its document-aware counterpart, which will be listed in (SECREF6). The Ratio of Nouns: is defined in BIBREF30 as the number of nouns divided by total number of words in the sentence, after stop-words are removed. Three other features, Ratio of Verbs, Ratio of Adjectives, and Ratio of Adverbs are defined in the same manner and proved to have a positive effect on ranking performance. From our perspective, however, a sentence with a ratio of nouns =0.5, for example, in a document containing many nouns, must be discriminated in the training set from another sentence with the same ratio of nouns, that appeared in another document having fewer nouns. 
This feature does not represent how many nouns are there in the document, which is important in sentence ranking. The same discussion goes on to justify the need to consider the number of verbs, adjectives, and adverbs in the document. The impact of these features is examined in our experiments and compared to that of their document-aware counterparts. The Ratio of Numerical entities: assuming that sentences containing more numerical data are probably giving us more information, this feature may help us in ranking BIBREF31, BIBREF32. For calculation, we count the occurrences of numbers and digits proportional to the length of sentence. This feature must be less weighted if almost all sentences of a document have numerical data. However, it does not count numbers and digits in other sentences of the document. Cue Words: if a sentence contains special phrases such as “in conclusion”, “overall”, “to summarize”, “in a nutshell” and so forth, its selection as a part of the summary is more probable than others. The number of these phrases is counted for this feature. Incorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Document-aware Features Cosine position: As mentioned in (SECREF5) a good definition of position should take into account document length. A well-known formula used in the literature BIBREF38, BIBREF7 is in which index is an integer representing the order of sentences and T is the total number of sentences in document. This feature ranges from 0 to 1, the closer to the beginning or to the end, the higher value this feature will take. $\alpha $ is a tuning parameter. As it increases, the value of this feature will be distributed more equally over sentences. In this manner, equal values of this feature in the training set represent a uniform notion of position in a document, so it becomes document-aware. Relative Length: the intuition behind this feature is explained in (SECREF5). A discussion went there that a simple count of words does not take into account that a sentence with a certain number of words may be considered long or short, based on the other sentences appeared the document. Taking this into consideration, we divided the number of words in the sentence by the average length of sentences in the document. More formally, the formula is: in which n is number of sentences in the document and $s_i$ is the i’th sentence of it. Values greater than 1 could be interpreted as long and vice versa. TF-ISF: this feature counts the frequency of terms in a document and assigns higher values to sentences having more frequent terms. It also discounts terms which appear in more sentences. Since it is well explained in the literature, we have not included details and formula which are in references BIBREF34 and BIBREF39. Nonetheless, the aspect that matters in our discussion is that both frequency and inverse sentence frequency are terms which involve properties of context, and consequently are document-aware. POS features: Here we introduce another way to include the ratio of part of speech (POS) units in features and keep them document-normalized. To do this, the number of occurrences of each POS unit should be divided by the number of them in the document, instead of that occurring in a sentence. 
The formal definition of the new document-aware features are as follows: Incorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Explicit Document Features In order to further investigate how effective are document specific features in sentence ranking, we defined several features for documents. These features are then calculated for each document and repeated in the feature vector of every sentence of that document. Their formal definition is described below and their effect is examined in the result and discussion section (SECREF5): Document sentences: An important property of a document that affects summarization is the total number of sentences participating in sentence ranking. As this number grows, a summarizer should be more selective and precise. Also, some sentence features such as cue words, maybe more weighted for longer documents. In addition, the main contextual information is probably more distributed over sentences. In such a case even lower values of other features should be considered important. Document words: the number of words in the document is another notion of document length. Since the number of sentences alone is not enough to represent document length, this feature should also be considered. Topical category: different topics such as political, economic, etc. have different writing styles and this might affect sentence ranking. For instance, numerical entities may appear more in economic or sport reports than in religious or social news. Therefore the weight of this attribute should be more or less, based on a document’s category. So it needs to be included. An overview of our feature set is represented by example in figure FIGREF15. Column ID is just for enumeration and column Target is explained in the next section. Incorporating Document Features ::: Learning Phase ::: Target Assignment Every feature vector needs a target value from which the system should learn how to rank sentences. The value of target is usually determined based on golden summaries. If a sentence is included in a majority of human-written extracts, its target is near to 1. In contrast, it would be closer to 0 if the sentence could not be found in any human-made summaries. In some datasets, like the one we used, golden summaries are not absolutely extractive, and they are not composed of exact copies of sentences in the original text. In such cases, a measure of similarity between the sentence whose target we are looking for, and each ideal summaries’ sentence will be calculated. This results in real values between 0 and 1 for this attribute. Section (SECREF4) includes more details about target assignment. Incorporating Document Features ::: Learning Phase ::: Training Model Since target attribute values vary between zero and one, we opted to use regression methods for the learning task. To build a training and a test set, a global matrix is composed in which every row corresponds to a sentence in the corpus and each column corresponds to a feature. The last column is for target attribute which will be omitted in the test set. It might be required to perform scaling on certain columns, depending on its corresponding feature and range of values. In cases where the dataset is large, the total number of sentences which are not included in golden summaries, and consequently have lower targets, is many times larger than the number of included sentences. This might lead the regression bias toward lower target values. To avoid this, dataset balancing is needed. 
That is to leave aside a portion of not included sentences and not to feed them to learner model. Lastly, in this phase, the regression model should be fitted on training set and be evaluated on a test set as described in sections (SECREF4) and (SECREF5). Incorporating Document Features ::: Summarization Phase Having acquired a model that can precisely rank sentences, we can apply it to any new given text and use ranked sentences in order to create a summary. This summarization process could also be executed on dataset texts, in order to evaluate how precisely our method resembles human-written summaries. In this section, we briefly describe the summarization process. The evaluation process is explained in section (SECREF22). Incorporating Document Features ::: Summarization Phase ::: Feature Extraction Initially, sentence features need to be extracted. Again, normalization, sentence tokenization, word tokenization, and stop words removal are preliminary steps. The same features used in the learning phase should be calculated. Incorporating Document Features ::: Summarization Phase ::: Sentence Ranking In comparison with learning phase, in which a global matrix was used, this time a local matrix is composed whose rows correspond with the sentences of the input text. If during learning, any scaling was performed on features, they should be carried out here in the same manner. The matrix is then fed to the regressor obtained in the previous phase, and a rank value between zero and one will be predicted for each sentence. Incorporating Document Features ::: Summarization Phase ::: Sentence Selection By sorting sentences based on their ranks, the most appropriate sentences for being included in summary will be determined. To preserve readability, however, it is important to place them in the summary in the same order they appeared in the input document. Another consideration is the cut-off length. How many of the top sentences should we select for summary? The answer should be as simple as a constant number, a percentage of total sentences, or it could be determined by more advanced heuristics. We allowed cut-off length to be an input parameter. This allows us, in the evaluation phase, to produce summaries of dataset documents in the same length as golden summaries. This makes the comparison more equitable. Incorporating Document Features ::: Evaluation Measures In this section, some measures are described to evaluate the performance of both phases explained in the previous section: the learning phase and summarization phase. The former is evaluated using common regression metrics such as mean square error (MSE) and coefficient of determination (R2). The latter is carried out using ROUGE which is a well-known metric for evaluating summarization systems. Mean Square Error (MSE) is the average of squared errors in all estimated targets. An ideal regressor tends to make this measure as near as possible to zero. Though, an exact zero for MSE is not desirable, because it is suspected to be due to over fitting. The coefficient of determination is another metric for evaluating how well a regression model is fitted to data. It ranges from $-\infty $ to 1. As it approaches 1, “goodness-of-fit” is increased, while negative values show that the mean of data is a better estimator for target BIBREF40. ROUGE is proposed in BIBREF41 as an evaluation metric for summaries. 
It matches n-grams in both system produced summaries and reference summaries and returns the percentage of matching in terms of precision, recall and f-measure. There is a variety of ROUGE family metrics, namely ROUGE-1, ROUGE-2, and ROUGE-L. In ROUGE-1 the overlap of 1-grams, each word, is calculated. In ROUGE-2 the bigrams are considered as units of comparison. The ROUGE-L uses the Longest Common Subsequence (LCS) to measure resemblance. Nevertheless, we found that ROUGE assessments are always relatively high, even for a summary that is produced perfunctorily. Hence, we also designed a random summarizer that selects random sentences for the summary, and evaluated it by ROUGE. This could be used as a baseline for comparison. Experiments Two experiments were set up to verify our hypothesis: “sentence ranking is highly dependent to document, and features must also represent context”. The first experiment involves document-unaware features (listed in section SECREF5) alongside TF-ISF. In the second experiment, document-aware features were used instead of document-unaware ones. We also set up a random summarizer based on a random regressor that acts as a baseline for comparisons. More details are recorded in section (SECREF25). A good experimental study should be as reproducible as possible. Here we explain the technical details that are more specific to our dataset, to allow the interested user to set up the same experiments for further research. Experiments ::: Dataset We used the Pasokh dataset BIBREF42 that contains 100 Persian news documents each of which is associated with 5 summaries. Each summary consists of several sentences of the original text, selected by a human expert. Some sentences are slightly modified and are not, therefore, an exact copy of any original sentences. Documents are categorized into six categories such as political, economic and so on. The length of documents ranges from 4 to 156 sentences. Overall, it has about 2,500 sentences. Experiments ::: Extracting Features and Scaling All features introduced in section SECREF4 are calculated. Pre-processing, sentence and word tokenization, stop words removal, and part of speech tagging is performed using the Hazm library BIBREF43. The majority of features have a range between zero and one. Other features are passed to a min-max scaler to transform into the same range. For the category feature which is nominal, the one-hot-encoding method applied and six flag features used instead. Experiments ::: Target Assignment In assigning the target to a sentence, as mentioned in section (SECREF16), the goal is to assign a number between 0 and 1, with higher values as an indicator that the sentence is present in the majority of golden summaries. Because exact matching between sentences is not possible, to resolve the question of presence in a single golden summary such as $g$, we calculated the cosine similarity of the desired sentence with each sentence: $s_j\in g$ . Then the maximum value of these similarities is selected as an indicator of presence. This indicator is then calculated for other golden summaries and their average is assigned to the sentence as the target. in which G is set of summaries written for the document containing s. This is an additional explicit evidence that target (and subsequently, ranking) is related to the document. Experiments ::: Training Model A vast collection of scikit-learn tools were used for the learning phase. K-fold cross-validation is applied with k=4 and split size of 0.25. 
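A minimal sketch of this training and evaluation loop is given below; the feature matrix and targets are random placeholders standing in for the real sentence vectors, and the SVR configuration (rbf kernel, epsilon=0.01) is the one reported in the following passage:

import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
X = rng.random((200, 10))     # placeholder sentence-feature matrix (rows = sentences)
y = rng.random(200)           # placeholder targets in [0, 1]

model = make_pipeline(MinMaxScaler(), SVR(kernel="rbf", epsilon=0.01))
for train, test in KFold(n_splits=4, shuffle=True, random_state=0).split(X):
    model.fit(X[train], y[train])
    pred = model.predict(X[test])
    rand = rng.random(len(test))   # the untrained random regressor used as a baseline
    print("SVR    MSE=%.3f  R2=%.3f" % (mean_squared_error(y[test], pred), r2_score(y[test], pred)))
    print("random MSE=%.3f  R2=%.3f" % (mean_squared_error(y[test], rand), r2_score(y[test], rand)))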
Three different regression methods were applied: Linear Regression, Decision Tree Regression, and Epsilon-Support Vector Regression (SVR). Overall results were the same with minor differences; thus only the SVR result is reported. Various parameter values were examined, but the best results were achieved with epsilon=0.01, kernel=rbf, and default values for the other parameters. To evaluate summary quality, the fitted regressor of each run was used to rank the sentences of documents in the test set. To compare with each golden summary, a summary with the same number of sentences was produced and scored with ROUGE. Averaging these ROUGE scores first over each document and then over the dataset gives the overall quality of summaries produced by the model. The same process was repeated with a random regressor that needs no training and simply assigns a random number between zero and one to any given sample. Apart from measuring the performance of this regressor on the test set, the quality of the summaries it produces is evaluated and reported as a baseline. The juxtaposition of this baseline and our measured results demonstrates how effective our feature set was and how well the system as a whole worked. Results and Discussion In section (SECREF22), MSE, R2 and ROUGE scores were introduced as evaluation measures. The results of our experiments are reported below in terms of these measures. For better comparison, we also ran another experiment in which the random regressor was used for ranking sentences and producing summaries. Table TABREF28 compares the MSE and R2 scores reported from these experiments. The results show that in experiment 2, the mean squared error is reduced and the R2 score is increased. This means that using document-aware features leads to a more accurate learned model, supporting our hypothesis about the relationship between document features and target ranks. ROUGE scores are displayed separately in terms of f-measure, precision and recall in Figures FIGREF29 to FIGREF31, respectively. F-measure scores are displayed in Figure FIGREF29, comparing ROUGE-1, ROUGE-2 and ROUGE-L. Figures FIGREF30 and FIGREF31 allow comparison of precision and recall scores. The higher values gained in experiment 2 confirm that document-aware features perform better than document-unaware ones. These results can also be interpreted from the viewpoint of entropy-based decision tree methods. In the learning phase, the impurity of features is measured over the whole dataset, and features with higher information gain are placed in the upper levels of the tree. In the summarization phase, however, where decisions have to be made within a single document, the impurity of those features may be low, leading to less effective decisions and lower precision. By incorporating document features, we help the model use different features (and thus different trees) for different documents. Another insight gained from these charts is that the random summarizer scored above 50% on all measures, and that without document-aware features the model achieves only a small improvement over it. Conclusion This paper has argued that in supervised extractive summarization, we cannot learn to rank by treating dataset sentences as independent training examples. The ranks of sentences within a document depend on one another. To overcome this issue, we suggested incorporating document features explicitly into the feature vector of sentences.
We also suggested using features that take the properties of the document into account; we call this kind of feature document-aware. The conducted experiments demonstrated the benefit of adding explicit document features, as well as document-aware features, to both model precision and summary quality. For future work, more document-aware features can be examined. It is also possible to run the same experiments on an English (or any other language) dataset, if one is available. Another direction for study is measuring the degree of entropy difference between the whole dataset and single documents in a standard dataset. Our source code is hosted on GitHub and is published for later reference, further experiments, and reproducing results. A web interface and a Telegram bot are also implemented as demos.
the Pasokh dataset BIBREF42
aecd09a817c38cf7606e2888d0df7f14e5a74b95
aecd09a817c38cf7606e2888d0df7f14e5a74b95_0
Q: What features of the document are integrated into vectors of every sentence? Text: Introduction From the early days of artificial intelligence, automatically summarizing a text was an interesting task for many researchers. Followed by the advance of the World Wide Web and the advent of concepts such as Social networks, Big Data, and Cloud computing among others, text summarization became a crucial task in many applications BIBREF0, BIBREF1, BIBREF2. For example, it is essential, in many search engines and text retrieval systems to display a portion of each result entry which is representative of the whole text BIBREF3, BIBREF4. It is also becoming essential for managers and the general public to gain the gist of news and articles immediately, in order to save time, while being inundated with information on social media BIBREF5. Researchers have approached this challenge from various perspectives and have obtained some promising results BIBREF6, BIBREF7. However, this area continues to present more research challenges and has a long path to maturity. One method of investigating this challenge, is (supervised) extractive summarization. Extractive implementations use a ranking mechanism and select top-n-ranked sentences as the summary BIBREF8. Sentences of a document are represented as vectors of features. Using summarization corpora, a rank will be assigned to each sentence, based on its presence in several human-written summaries (golden summaries). The system should then learn how to use those features to predict the rank of sentences in any given text. Various machine learning approaches such as regression and classification algorithms are used to perform the ranking task BIBREF9, BIBREF10. As far as our knowledge goes, in all current implementations, sets of sentence vectors of every document are merged together to compose a larger set, which is then passed to the learning model as a matrix. In this approach, the locality of ranks is disregarded. In other words, the rank of sentences is highly relative to the context and document. A sentence might be ranked high in one document while being ranked lower in another. As a result, merging sentences of a whole dataset into a matrix removes document boundaries and a main source of information will be lost. We addressed this issue by taking certain features of documents into account, such as its length, topical category and so on in addition to some new sentence features that also reflect document properties. Thus, more information will be provided to the model, and ranking could be done with respect to local features of the document. Our experiments show that this rectification leads to improvement in both the performance of the learned model and the quality of produced summaries. We also represent a new baseline for the evaluation of extractive text summarizers which can be used to measure the performance of any summarizing method more accurately. The remainder of this paper is organized as follows. (Section SECREF2) reviews related works. (Section SECREF3) presents the proposed method and evaluation measures. (Section SECREF5) discusses how the experiments are set up. The results are discussed in (Section SECREF5), and finally (Section SECREF6) concludes the paper. Related works Text summarization has been widely studied by both academic and enterprise disciplines. Text summarization methods may be classified into different types. 
Based on input type, there are single-document BIBREF11, BIBREF12 vs multi-document summarization methods BIBREF13, BIBREF14, BIBREF15. Based on language, there are mono-lingual, bilingual and multi-lingual methods BIBREF16. There are also “query focused” methods in which a summary relevant to a given query is produced BIBREF17. From the perspective of procedure, however, there are two main approaches: abstractive vs extractive BIBREF18. Abstractive approaches try to generate a new short text based on the concepts understood from the original text BIBREF19. This usually requires a full pass through NLP pipeline and is faced with many complexities and challenges BIBREF20. The abstractive approach relies on linguistic methods to examine and interpret the text in order to find new concepts and expressions. The output is a new shorter text which consists of the most important information from the original text document BIBREF8. Extractive approaches, on the other hand, select a few sentences from the document based on some measures in order to place them in a summary BIBREF8. A broad range of methods has been examined in this approach, including graph-based BIBREF8, BIBREF21, unsupervised BIBREF21, BIBREF22 and supervised (corpus-based) methods BIBREF9, BIBREF23, BIBREF24. In supervised methods, training data is generally needed to select important content from the documents. In these methods, usually, the problem is reduced to a classification or regression problem, and machine learning techniques applied to the dataset of documents and their gold summaries represented by some features. Support Vector Machines (SVM) BIBREF25 and neural networks BIBREF26 are more popular sentence classification algorithms. The key step in extractive summarization is to determine the importance of sentences in the document BIBREF27. Previous studies examine the ordinal position of sentences BIBREF28, BIBREF29, length of sentences BIBREF9, the ratio of nouns, the Ratio of Verbs, Ratio of Adjectives, Ratio of Adverbs BIBREF30, the Ratio of Numerical entities BIBREF31, BIBREF32 and Cue Words BIBREF28. Gupta and Lehal in their survey of text summarization techniques list the following groups of features: content-based, title-based, location-based, length-based, proper noun and upper-case word-based, font-based, specific phrase-based, and features based on sentence similarity to other sentences in a text BIBREF8. Previous studies use different sentence features such as terms from keywords/key phrases, terms from user queries, frequency of words, and position of words/sentences for text summarization BIBREF33. However, in most cases, selection and weighting of features are an important matter of debate. Some works have been carried out with respect to this BIBREF34, but none, to the best of our knowledge, has shown that target attribute is highly related to the scope of the document. It is occasionally mentioned but not included in practice. For instance, Ferreira et al studied various combinations of sentence scoring methods on three types of documents in BIBREF6 and BIBREF31 and concluded that the weight of features varies, dependent on the properties of context: “the effectiveness of sentence scoring methods for automatic extractive text summarization algorithms depends on the kind of text one wants to summarize, the length of documents, the kind of language used, and their structure.”. JY Yeh et al in BIBREF35 utilized a Genetic Algorithm (GA) to find the weight of features for calculating sentence scores. 
However, their following statement implies that performance of weights is generally dependent to genre, that could be seen as a feature of context: “It cannot be guaranteed that the score function whose feature weights are obtained by GA definitely performs well for the test corpus; nevertheless, if the genre of the test corpus is close to that of the training corpus, we can make a prediction that the score function will work well.” BIBREF35. Berenjkoub et al studied the effectiveness of various subsets of features in summarization of distinct sections of scientific papers BIBREF36. They showed that some features work well only in some specific portion of text, for example, on the abstract section, while others perform better on the methodology section. This could be considered to be a consequence of differences in the structure and context of each section. All the above studies imply the significance of document context in ranking. Nevertheless, it has not been given enough attention in the NLP community, and even sometimes is neglected. For instance, authors in BIBREF30 suggest the use of a wide range of various features. Among these, seventeen part-of-speech based sentences features have been introduced, all of which are sentence-normalized, but not document-normalized, i.e. they count the ratio of a syntactic unit e.g. verbs, divided by the number of words in a sentence. Such features do not consider the total number of those units, e.g. verbs, in the whole document. Our work contributes to this line of research and includes document features in the learning and ranking processes. Incorporating Document Features As a way to investigate the need for document features in sentence ranking (as explained in the introduction and related works), we introduced several document-level features and incorporated them in the summarization process. These features are listed under subsection (SECREF4). Although stages of our method do not differ from general supervised extractive summarization, the whole process is explained in order to clarify the method of investigation. Every supervised summarization has two phases. The first is the “Learning Phase”, a corpus of ideal summaries is used to train the system how to rank sentences. The second is the “Summarization Phase”, where the system applies its learning gained from the first phase, in order to rank the sentences of a new given text. A process of selection is then performed to form a summary. Each of these phases has several intricacies which are briefly described in the following sections. Incorporating Document Features ::: Learning Phase The input to this phase is a dataset of documents, each of which is associated with several human-written summaries. The output is a learned model with a good level of accuracy that is able to reliably predict the rank of sentences, in almost the same way that a human may rank them. To accomplish this, it is necessary to first perform normalization and transform various forms of phrases into their canonical form. Then, every text should be tokenized to sentences, and further tokenized to words. Another prerequisite is to remove stop words. The following subtasks should be carried out next. Incorporating Document Features ::: Learning Phase ::: Feature Extraction Foremost, it is necessary to represent each sentence with those features that have the most distinguishing effect on the prediction of the rank. Many features have been examined in the literature. 
We entitle some as “document-aware” because they do implicitly represent some information about a document. However, other features have been used, that say nothing about the document in which they appeared. We call them “document-unaware”. In the previous sections, we argued that this lack of information might be misleading for the system, especially when we train it with sample sentences from different documents. Thus, we modified some document-unaware features and derived new features that cover document properties. We also examined the effect of incorporating explicit features of a document into vectors of its sentences. The following sub-sections describe the features mentioned above in more detail. Incorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Document-unaware Features Ordinal position: It is shown that inclusion of sentence, in summary, is relevant to its position in the document or even in a paragraph. Intuitively, sentences at the beginning or the end of a text are more likely to be included in the summary. Depending on how it is defined, this feature might be document-unaware or not. For example, in BIBREF29 and BIBREF37 it is defined as $\frac{5}{5}$ for the first sentence, $\frac{4}{5}$ for the second, and so on to $\frac{1}{5}$ for fifth and zero for remaining sentences. In another research conducted by Wong et al. BIBREF9, it is defined as $\frac{1}{sentence\ number}$. With such a definition, we may have several sentences, for example, with position=$\frac{1}{5}$ in the training set, these may not have the same sense of position. While a sentence position=$\frac{1}{5}$ means “among the firsts” in a document with 40 sentences, it has a totally different meaning of “in the middle”, in another document containing 10 sentences. Thus, a useful feature formula should involve differences of documents which may change the meaning of information within it. In our experiments, we used the definition of BIBREF9. A document-aware version of position will be introduced in (SECREF6). Length of sentence: the intuition behind this feature is that sentences of too long or too short length are less likely to be included in the summary. Like sentence position, this feature is also subject to the wrong definition that makes it document-unaware. For example, in BIBREF9 it is defined as a number of words in a sentence. Such a definition does not take into account that a sentence with, say 15 words may be considered long if all other sentences of document have fewer words. Another sentence with the same number of words may be regarded as short, because other sentences in that document have more than 15 words. This might occur due to different writing styles. However, we included this in our experiments to compare its effect with that of its document-aware counterpart, which will be listed in (SECREF6). The Ratio of Nouns: is defined in BIBREF30 as the number of nouns divided by total number of words in the sentence, after stop-words are removed. Three other features, Ratio of Verbs, Ratio of Adjectives, and Ratio of Adverbs are defined in the same manner and proved to have a positive effect on ranking performance. From our perspective, however, a sentence with a ratio of nouns =0.5, for example, in a document containing many nouns, must be discriminated in the training set from another sentence with the same ratio of nouns, that appeared in another document having fewer nouns. 
This feature does not represent how many nouns are there in the document, which is important in sentence ranking. The same discussion goes on to justify the need to consider the number of verbs, adjectives, and adverbs in the document. The impact of these features is examined in our experiments and compared to that of their document-aware counterparts. The Ratio of Numerical entities: assuming that sentences containing more numerical data are probably giving us more information, this feature may help us in ranking BIBREF31, BIBREF32. For calculation, we count the occurrences of numbers and digits proportional to the length of sentence. This feature must be less weighted if almost all sentences of a document have numerical data. However, it does not count numbers and digits in other sentences of the document. Cue Words: if a sentence contains special phrases such as “in conclusion”, “overall”, “to summarize”, “in a nutshell” and so forth, its selection as a part of the summary is more probable than others. The number of these phrases is counted for this feature. Incorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Document-aware Features Cosine position: As mentioned in (SECREF5) a good definition of position should take into account document length. A well-known formula used in the literature BIBREF38, BIBREF7 is in which index is an integer representing the order of sentences and T is the total number of sentences in document. This feature ranges from 0 to 1, the closer to the beginning or to the end, the higher value this feature will take. $\alpha $ is a tuning parameter. As it increases, the value of this feature will be distributed more equally over sentences. In this manner, equal values of this feature in the training set represent a uniform notion of position in a document, so it becomes document-aware. Relative Length: the intuition behind this feature is explained in (SECREF5). A discussion went there that a simple count of words does not take into account that a sentence with a certain number of words may be considered long or short, based on the other sentences appeared the document. Taking this into consideration, we divided the number of words in the sentence by the average length of sentences in the document. More formally, the formula is: in which n is number of sentences in the document and $s_i$ is the i’th sentence of it. Values greater than 1 could be interpreted as long and vice versa. TF-ISF: this feature counts the frequency of terms in a document and assigns higher values to sentences having more frequent terms. It also discounts terms which appear in more sentences. Since it is well explained in the literature, we have not included details and formula which are in references BIBREF34 and BIBREF39. Nonetheless, the aspect that matters in our discussion is that both frequency and inverse sentence frequency are terms which involve properties of context, and consequently are document-aware. POS features: Here we introduce another way to include the ratio of part of speech (POS) units in features and keep them document-normalized. To do this, the number of occurrences of each POS unit should be divided by the number of them in the document, instead of that occurring in a sentence. 
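A sketch of how these document-aware features can be computed follows. Since the exact cosine-position formula is omitted above, the version here is one common variant consistent with the stated behaviour (values near 1 at the beginning and end of the document, flattened as alpha grows); the helper names and data layout are illustrative:

import math

def cosine_position(index, total, alpha=2.0):
    """Document-aware position: highest near the beginning and the end of a
    document, spread more evenly as alpha grows. The exact formula is not
    given in the text; this is one common variant consistent with it."""
    if total < 2:
        return 1.0
    return (math.cos(2 * math.pi * index / (total - 1)) + alpha - 1) / alpha

def relative_length(sentence, document):
    """Words in the sentence divided by the average sentence length of the
    document (each sentence is a list of words); values above 1 read as long."""
    avg = sum(len(s) for s in document) / len(document)
    return len(sentence) / avg

def doc_normalized_pos_ratio(sentence_pos_counts, document_pos_counts, tag):
    """Occurrences of a POS tag in the sentence divided by its occurrences
    in the whole document, instead of by the sentence length."""
    return sentence_pos_counts.get(tag, 0) / max(document_pos_counts.get(tag, 0), 1)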
The formal definition of the new document-aware features are as follows: Incorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Explicit Document Features In order to further investigate how effective are document specific features in sentence ranking, we defined several features for documents. These features are then calculated for each document and repeated in the feature vector of every sentence of that document. Their formal definition is described below and their effect is examined in the result and discussion section (SECREF5): Document sentences: An important property of a document that affects summarization is the total number of sentences participating in sentence ranking. As this number grows, a summarizer should be more selective and precise. Also, some sentence features such as cue words, maybe more weighted for longer documents. In addition, the main contextual information is probably more distributed over sentences. In such a case even lower values of other features should be considered important. Document words: the number of words in the document is another notion of document length. Since the number of sentences alone is not enough to represent document length, this feature should also be considered. Topical category: different topics such as political, economic, etc. have different writing styles and this might affect sentence ranking. For instance, numerical entities may appear more in economic or sport reports than in religious or social news. Therefore the weight of this attribute should be more or less, based on a document’s category. So it needs to be included. An overview of our feature set is represented by example in figure FIGREF15. Column ID is just for enumeration and column Target is explained in the next section. Incorporating Document Features ::: Learning Phase ::: Target Assignment Every feature vector needs a target value from which the system should learn how to rank sentences. The value of target is usually determined based on golden summaries. If a sentence is included in a majority of human-written extracts, its target is near to 1. In contrast, it would be closer to 0 if the sentence could not be found in any human-made summaries. In some datasets, like the one we used, golden summaries are not absolutely extractive, and they are not composed of exact copies of sentences in the original text. In such cases, a measure of similarity between the sentence whose target we are looking for, and each ideal summaries’ sentence will be calculated. This results in real values between 0 and 1 for this attribute. Section (SECREF4) includes more details about target assignment. Incorporating Document Features ::: Learning Phase ::: Training Model Since target attribute values vary between zero and one, we opted to use regression methods for the learning task. To build a training and a test set, a global matrix is composed in which every row corresponds to a sentence in the corpus and each column corresponds to a feature. The last column is for target attribute which will be omitted in the test set. It might be required to perform scaling on certain columns, depending on its corresponding feature and range of values. In cases where the dataset is large, the total number of sentences which are not included in golden summaries, and consequently have lower targets, is many times larger than the number of included sentences. This might lead the regression bias toward lower target values. To avoid this, dataset balancing is needed. 
That is to leave aside a portion of not included sentences and not to feed them to learner model. Lastly, in this phase, the regression model should be fitted on training set and be evaluated on a test set as described in sections (SECREF4) and (SECREF5). Incorporating Document Features ::: Summarization Phase Having acquired a model that can precisely rank sentences, we can apply it to any new given text and use ranked sentences in order to create a summary. This summarization process could also be executed on dataset texts, in order to evaluate how precisely our method resembles human-written summaries. In this section, we briefly describe the summarization process. The evaluation process is explained in section (SECREF22). Incorporating Document Features ::: Summarization Phase ::: Feature Extraction Initially, sentence features need to be extracted. Again, normalization, sentence tokenization, word tokenization, and stop words removal are preliminary steps. The same features used in the learning phase should be calculated. Incorporating Document Features ::: Summarization Phase ::: Sentence Ranking In comparison with learning phase, in which a global matrix was used, this time a local matrix is composed whose rows correspond with the sentences of the input text. If during learning, any scaling was performed on features, they should be carried out here in the same manner. The matrix is then fed to the regressor obtained in the previous phase, and a rank value between zero and one will be predicted for each sentence. Incorporating Document Features ::: Summarization Phase ::: Sentence Selection By sorting sentences based on their ranks, the most appropriate sentences for being included in summary will be determined. To preserve readability, however, it is important to place them in the summary in the same order they appeared in the input document. Another consideration is the cut-off length. How many of the top sentences should we select for summary? The answer should be as simple as a constant number, a percentage of total sentences, or it could be determined by more advanced heuristics. We allowed cut-off length to be an input parameter. This allows us, in the evaluation phase, to produce summaries of dataset documents in the same length as golden summaries. This makes the comparison more equitable. Incorporating Document Features ::: Evaluation Measures In this section, some measures are described to evaluate the performance of both phases explained in the previous section: the learning phase and summarization phase. The former is evaluated using common regression metrics such as mean square error (MSE) and coefficient of determination (R2). The latter is carried out using ROUGE which is a well-known metric for evaluating summarization systems. Mean Square Error (MSE) is the average of squared errors in all estimated targets. An ideal regressor tends to make this measure as near as possible to zero. Though, an exact zero for MSE is not desirable, because it is suspected to be due to over fitting. The coefficient of determination is another metric for evaluating how well a regression model is fitted to data. It ranges from $-\infty $ to 1. As it approaches 1, “goodness-of-fit” is increased, while negative values show that the mean of data is a better estimator for target BIBREF40. ROUGE is proposed in BIBREF41 as an evaluation metric for summaries. 
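A simplified sketch of its ROUGE-1 variant is given below; this is an illustrative re-implementation of clipped unigram overlap, not the official ROUGE package:

from collections import Counter

def rouge_1(system_words, reference_words):
    """Simplified ROUGE-1: clipped unigram overlap between a system summary
    and one reference summary, reported as precision, recall and F-measure."""
    sys_counts, ref_counts = Counter(system_words), Counter(reference_words)
    overlap = sum(min(count, ref_counts[word]) for word, count in sys_counts.items())
    precision = overlap / max(len(system_words), 1)
    recall = overlap / max(len(reference_words), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f1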
It matches n-grams in both system produced summaries and reference summaries and returns the percentage of matching in terms of precision, recall and f-measure. There is a variety of ROUGE family metrics, namely ROUGE-1, ROUGE-2, and ROUGE-L. In ROUGE-1 the overlap of 1-grams, each word, is calculated. In ROUGE-2 the bigrams are considered as units of comparison. The ROUGE-L uses the Longest Common Subsequence (LCS) to measure resemblance. Nevertheless, we found that ROUGE assessments are always relatively high, even for a summary that is produced perfunctorily. Hence, we also designed a random summarizer that selects random sentences for the summary, and evaluated it by ROUGE. This could be used as a baseline for comparison. Experiments Two experiments were set up to verify our hypothesis: “sentence ranking is highly dependent to document, and features must also represent context”. The first experiment involves document-unaware features (listed in section SECREF5) alongside TF-ISF. In the second experiment, document-aware features were used instead of document-unaware ones. We also set up a random summarizer based on a random regressor that acts as a baseline for comparisons. More details are recorded in section (SECREF25). A good experimental study should be as reproducible as possible. Here we explain the technical details that are more specific to our dataset, to allow the interested user to set up the same experiments for further research. Experiments ::: Dataset We used the Pasokh dataset BIBREF42 that contains 100 Persian news documents each of which is associated with 5 summaries. Each summary consists of several sentences of the original text, selected by a human expert. Some sentences are slightly modified and are not, therefore, an exact copy of any original sentences. Documents are categorized into six categories such as political, economic and so on. The length of documents ranges from 4 to 156 sentences. Overall, it has about 2,500 sentences. Experiments ::: Extracting Features and Scaling All features introduced in section SECREF4 are calculated. Pre-processing, sentence and word tokenization, stop words removal, and part of speech tagging is performed using the Hazm library BIBREF43. The majority of features have a range between zero and one. Other features are passed to a min-max scaler to transform into the same range. For the category feature which is nominal, the one-hot-encoding method applied and six flag features used instead. Experiments ::: Target Assignment In assigning the target to a sentence, as mentioned in section (SECREF16), the goal is to assign a number between 0 and 1, with higher values as an indicator that the sentence is present in the majority of golden summaries. Because exact matching between sentences is not possible, to resolve the question of presence in a single golden summary such as $g$, we calculated the cosine similarity of the desired sentence with each sentence: $s_j\in g$ . Then the maximum value of these similarities is selected as an indicator of presence. This indicator is then calculated for other golden summaries and their average is assigned to the sentence as the target. in which G is set of summaries written for the document containing s. This is an additional explicit evidence that target (and subsequently, ranking) is related to the document. Experiments ::: Training Model A vast collection of scikit-learn tools were used for the learning phase. K-fold cross-validation is applied with k=4 and split size of 0.25. 
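A minimal sketch of this target assignment follows; the bag-of-words vectorization is an assumption, since the text does not state how sentences are vectorized before the cosine similarity is taken:

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def assign_target(sentence, golden_summaries):
    """Average, over the golden summaries of the document, of the maximum
    cosine similarity between this sentence and any sentence of that summary."""
    scores = []
    for g in golden_summaries:                      # g: list of summary sentences (strings)
        vectors = CountVectorizer().fit_transform([sentence] + g)
        sims = cosine_similarity(vectors[0], vectors[1:])
        scores.append(float(sims.max()))
    return float(np.mean(scores))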
Three different regression methods were applied, including Linear Regression, Decision Tree Regression, and Epsilon-Support Vector Regression(SVR). Overall results were the same with minor differences. Thus only the SVR result is reported. Various values for parameters were examined but the best results were achieved by epsilon=0.01, kernel=rbf, and default values for other parameters. With the aim of evaluating summary qualities, the fitted regressor of each run was used to rank documents sentences in the test set. To compare with each standard summary, a summary with the same count of sentences was produced, and compared by ROUGE. Averaging these ROUGE scores over each document and then over the dataset, the overall quality of summaries produced by the model can be obtained. The same process was repeated with a random regressor that needed no training, and which simply assigns a random number between zero and one to any given sample. Apart from measuring the performance of this regressor on the test set, the quality of summaries produced is evaluated and reported as a baseline. The juxtaposition of this baseline and our measured results will demonstrate how effective our feature set was and how intelligent our whole system worked. Results and Discussion In section (SECREF22) MSE, R2 and ROUGE scores are remarked as evaluation measures. The results of our experiments are reported below in terms of these measures. For better comparison, we also ran another experiment in which the random regressor was used for ranking sentences and producing summaries. Table TABREF28 shows and compares MSE and R2 reported from these experiments. The results show that in experiment 2, the mean squared error is reduced and the r2 score is increased. This means that using document-aware features leads to a more accurate learned model, proving our hypothesis about the relationship between document features and target ranks. ROUGE scores are displayed separately in terms of precision, recall and f-measure in Figures FIGREF29 to FIGREF31 respectively. F-measure scores are displayed in the figure FIGREF29, comparing ROUGE-1, ROUGE-2 and ROUGE-L. Figures FIGREF30 and FIGREF31 allow comparison of precision and recall scores. The higher values gained in experiment 2, confirm that document-aware features perform better than unaware features. These results are also interpretable from viewpoint of entropy-based decision tree methods. In learning phase, impurity of features within the whole dataset will be measured, and features having higher information gain will take place in upper levels of tree. But in summarization phase, within which decisions have to be made within a single document, impurity of those features may be low, causing less effective decisions and precision's. By incorporating document features, we help model to use different features (thus different trees) for different documents. Another insight gained from these charts is that a random summarizer resulted in scores more than 50% in all measures, and without using document-aware features, the model achieves a small improvement over a random summarizer. Conclusion This paper has discussed that in supervised extractive summarization, we cannot learn to rank by considering dataset sentences as independent educational examples. The rank of sentences is dependent on each other within a document. To overcome this issue, we suggested incorporating document features explicitly in the feature vector of sentences. 
We also suggested using features that take the properties of the document into account; we call this kind of feature document-aware. The conducted experiments demonstrated the benefit of adding explicit document features, as well as document-aware features, to both model precision and summary quality. For future work, more document-aware features can be examined. It is also possible to run the same experiments on an English (or any other language) dataset, if one is available. Another direction for study is measuring the degree of entropy difference between the whole dataset and single documents in a standard dataset. Our source code is hosted on GitHub and is published for later reference, further experiments, and reproducing results. A web interface and a Telegram bot are also implemented as demos.
Ordinal position, Length of sentence, The Ratio of Nouns, The Ratio of Numerical entities, Cue Words, Cosine position, Relative Length, TF-ISF, POS features, Document sentences, Document words, Topical category, Ratio of Verbs, Ratio of Adjectives, and Ratio of Adverbs
81064bbd0a0d72a82d8677c32fb71b06501830a0
81064bbd0a0d72a82d8677c32fb71b06501830a0_0
Q: By how much is precission increased? Text: Introduction From the early days of artificial intelligence, automatically summarizing a text was an interesting task for many researchers. Followed by the advance of the World Wide Web and the advent of concepts such as Social networks, Big Data, and Cloud computing among others, text summarization became a crucial task in many applications BIBREF0, BIBREF1, BIBREF2. For example, it is essential, in many search engines and text retrieval systems to display a portion of each result entry which is representative of the whole text BIBREF3, BIBREF4. It is also becoming essential for managers and the general public to gain the gist of news and articles immediately, in order to save time, while being inundated with information on social media BIBREF5. Researchers have approached this challenge from various perspectives and have obtained some promising results BIBREF6, BIBREF7. However, this area continues to present more research challenges and has a long path to maturity. One method of investigating this challenge, is (supervised) extractive summarization. Extractive implementations use a ranking mechanism and select top-n-ranked sentences as the summary BIBREF8. Sentences of a document are represented as vectors of features. Using summarization corpora, a rank will be assigned to each sentence, based on its presence in several human-written summaries (golden summaries). The system should then learn how to use those features to predict the rank of sentences in any given text. Various machine learning approaches such as regression and classification algorithms are used to perform the ranking task BIBREF9, BIBREF10. As far as our knowledge goes, in all current implementations, sets of sentence vectors of every document are merged together to compose a larger set, which is then passed to the learning model as a matrix. In this approach, the locality of ranks is disregarded. In other words, the rank of sentences is highly relative to the context and document. A sentence might be ranked high in one document while being ranked lower in another. As a result, merging sentences of a whole dataset into a matrix removes document boundaries and a main source of information will be lost. We addressed this issue by taking certain features of documents into account, such as its length, topical category and so on in addition to some new sentence features that also reflect document properties. Thus, more information will be provided to the model, and ranking could be done with respect to local features of the document. Our experiments show that this rectification leads to improvement in both the performance of the learned model and the quality of produced summaries. We also represent a new baseline for the evaluation of extractive text summarizers which can be used to measure the performance of any summarizing method more accurately. The remainder of this paper is organized as follows. (Section SECREF2) reviews related works. (Section SECREF3) presents the proposed method and evaluation measures. (Section SECREF5) discusses how the experiments are set up. The results are discussed in (Section SECREF5), and finally (Section SECREF6) concludes the paper. Related works Text summarization has been widely studied by both academic and enterprise disciplines. Text summarization methods may be classified into different types. Based on input type, there are single-document BIBREF11, BIBREF12 vs multi-document summarization methods BIBREF13, BIBREF14, BIBREF15. 
Based on language, there are mono-lingual, bilingual and multi-lingual methods BIBREF16. There are also “query focused” methods in which a summary relevant to a given query is produced BIBREF17. From the perspective of procedure, however, there are two main approaches: abstractive vs extractive BIBREF18. Abstractive approaches try to generate a new short text based on the concepts understood from the original text BIBREF19. This usually requires a full pass through NLP pipeline and is faced with many complexities and challenges BIBREF20. The abstractive approach relies on linguistic methods to examine and interpret the text in order to find new concepts and expressions. The output is a new shorter text which consists of the most important information from the original text document BIBREF8. Extractive approaches, on the other hand, select a few sentences from the document based on some measures in order to place them in a summary BIBREF8. A broad range of methods has been examined in this approach, including graph-based BIBREF8, BIBREF21, unsupervised BIBREF21, BIBREF22 and supervised (corpus-based) methods BIBREF9, BIBREF23, BIBREF24. In supervised methods, training data is generally needed to select important content from the documents. In these methods, usually, the problem is reduced to a classification or regression problem, and machine learning techniques applied to the dataset of documents and their gold summaries represented by some features. Support Vector Machines (SVM) BIBREF25 and neural networks BIBREF26 are more popular sentence classification algorithms. The key step in extractive summarization is to determine the importance of sentences in the document BIBREF27. Previous studies examine the ordinal position of sentences BIBREF28, BIBREF29, length of sentences BIBREF9, the ratio of nouns, the Ratio of Verbs, Ratio of Adjectives, Ratio of Adverbs BIBREF30, the Ratio of Numerical entities BIBREF31, BIBREF32 and Cue Words BIBREF28. Gupta and Lehal in their survey of text summarization techniques list the following groups of features: content-based, title-based, location-based, length-based, proper noun and upper-case word-based, font-based, specific phrase-based, and features based on sentence similarity to other sentences in a text BIBREF8. Previous studies use different sentence features such as terms from keywords/key phrases, terms from user queries, frequency of words, and position of words/sentences for text summarization BIBREF33. However, in most cases, selection and weighting of features are an important matter of debate. Some works have been carried out with respect to this BIBREF34, but none, to the best of our knowledge, has shown that target attribute is highly related to the scope of the document. It is occasionally mentioned but not included in practice. For instance, Ferreira et al studied various combinations of sentence scoring methods on three types of documents in BIBREF6 and BIBREF31 and concluded that the weight of features varies, dependent on the properties of context: “the effectiveness of sentence scoring methods for automatic extractive text summarization algorithms depends on the kind of text one wants to summarize, the length of documents, the kind of language used, and their structure.”. JY Yeh et al in BIBREF35 utilized a Genetic Algorithm (GA) to find the weight of features for calculating sentence scores. 
However, their following statement implies that performance of weights is generally dependent to genre, that could be seen as a feature of context: “It cannot be guaranteed that the score function whose feature weights are obtained by GA definitely performs well for the test corpus; nevertheless, if the genre of the test corpus is close to that of the training corpus, we can make a prediction that the score function will work well.” BIBREF35. Berenjkoub et al studied the effectiveness of various subsets of features in summarization of distinct sections of scientific papers BIBREF36. They showed that some features work well only in some specific portion of text, for example, on the abstract section, while others perform better on the methodology section. This could be considered to be a consequence of differences in the structure and context of each section. All the above studies imply the significance of document context in ranking. Nevertheless, it has not been given enough attention in the NLP community, and even sometimes is neglected. For instance, authors in BIBREF30 suggest the use of a wide range of various features. Among these, seventeen part-of-speech based sentences features have been introduced, all of which are sentence-normalized, but not document-normalized, i.e. they count the ratio of a syntactic unit e.g. verbs, divided by the number of words in a sentence. Such features do not consider the total number of those units, e.g. verbs, in the whole document. Our work contributes to this line of research and includes document features in the learning and ranking processes. Incorporating Document Features As a way to investigate the need for document features in sentence ranking (as explained in the introduction and related works), we introduced several document-level features and incorporated them in the summarization process. These features are listed under subsection (SECREF4). Although stages of our method do not differ from general supervised extractive summarization, the whole process is explained in order to clarify the method of investigation. Every supervised summarization has two phases. The first is the “Learning Phase”, a corpus of ideal summaries is used to train the system how to rank sentences. The second is the “Summarization Phase”, where the system applies its learning gained from the first phase, in order to rank the sentences of a new given text. A process of selection is then performed to form a summary. Each of these phases has several intricacies which are briefly described in the following sections. Incorporating Document Features ::: Learning Phase The input to this phase is a dataset of documents, each of which is associated with several human-written summaries. The output is a learned model with a good level of accuracy that is able to reliably predict the rank of sentences, in almost the same way that a human may rank them. To accomplish this, it is necessary to first perform normalization and transform various forms of phrases into their canonical form. Then, every text should be tokenized to sentences, and further tokenized to words. Another prerequisite is to remove stop words. The following subtasks should be carried out next. Incorporating Document Features ::: Learning Phase ::: Feature Extraction Foremost, it is necessary to represent each sentence with those features that have the most distinguishing effect on the prediction of the rank. Many features have been examined in the literature. 
We entitle some as “document-aware” because they do implicitly represent some information about a document. However, other features have been used, that say nothing about the document in which they appeared. We call them “document-unaware”. In the previous sections, we argued that this lack of information might be misleading for the system, especially when we train it with sample sentences from different documents. Thus, we modified some document-unaware features and derived new features that cover document properties. We also examined the effect of incorporating explicit features of a document into vectors of its sentences. The following sub-sections describe the features mentioned above in more detail. Incorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Document-unaware Features Ordinal position: It is shown that inclusion of sentence, in summary, is relevant to its position in the document or even in a paragraph. Intuitively, sentences at the beginning or the end of a text are more likely to be included in the summary. Depending on how it is defined, this feature might be document-unaware or not. For example, in BIBREF29 and BIBREF37 it is defined as $\frac{5}{5}$ for the first sentence, $\frac{4}{5}$ for the second, and so on to $\frac{1}{5}$ for fifth and zero for remaining sentences. In another research conducted by Wong et al. BIBREF9, it is defined as $\frac{1}{sentence\ number}$. With such a definition, we may have several sentences, for example, with position=$\frac{1}{5}$ in the training set, these may not have the same sense of position. While a sentence position=$\frac{1}{5}$ means “among the firsts” in a document with 40 sentences, it has a totally different meaning of “in the middle”, in another document containing 10 sentences. Thus, a useful feature formula should involve differences of documents which may change the meaning of information within it. In our experiments, we used the definition of BIBREF9. A document-aware version of position will be introduced in (SECREF6). Length of sentence: the intuition behind this feature is that sentences of too long or too short length are less likely to be included in the summary. Like sentence position, this feature is also subject to the wrong definition that makes it document-unaware. For example, in BIBREF9 it is defined as a number of words in a sentence. Such a definition does not take into account that a sentence with, say 15 words may be considered long if all other sentences of document have fewer words. Another sentence with the same number of words may be regarded as short, because other sentences in that document have more than 15 words. This might occur due to different writing styles. However, we included this in our experiments to compare its effect with that of its document-aware counterpart, which will be listed in (SECREF6). The Ratio of Nouns: is defined in BIBREF30 as the number of nouns divided by total number of words in the sentence, after stop-words are removed. Three other features, Ratio of Verbs, Ratio of Adjectives, and Ratio of Adverbs are defined in the same manner and proved to have a positive effect on ranking performance. From our perspective, however, a sentence with a ratio of nouns =0.5, for example, in a document containing many nouns, must be discriminated in the training set from another sentence with the same ratio of nouns, that appeared in another document having fewer nouns. 
This feature does not represent how many nouns are there in the document, which is important in sentence ranking. The same discussion goes on to justify the need to consider the number of verbs, adjectives, and adverbs in the document. The impact of these features is examined in our experiments and compared to that of their document-aware counterparts. The Ratio of Numerical entities: assuming that sentences containing more numerical data are probably giving us more information, this feature may help us in ranking BIBREF31, BIBREF32. For calculation, we count the occurrences of numbers and digits proportional to the length of sentence. This feature must be less weighted if almost all sentences of a document have numerical data. However, it does not count numbers and digits in other sentences of the document. Cue Words: if a sentence contains special phrases such as “in conclusion”, “overall”, “to summarize”, “in a nutshell” and so forth, its selection as a part of the summary is more probable than others. The number of these phrases is counted for this feature. Incorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Document-aware Features Cosine position: As mentioned in (SECREF5) a good definition of position should take into account document length. A well-known formula used in the literature BIBREF38, BIBREF7 is in which index is an integer representing the order of sentences and T is the total number of sentences in document. This feature ranges from 0 to 1, the closer to the beginning or to the end, the higher value this feature will take. $\alpha $ is a tuning parameter. As it increases, the value of this feature will be distributed more equally over sentences. In this manner, equal values of this feature in the training set represent a uniform notion of position in a document, so it becomes document-aware. Relative Length: the intuition behind this feature is explained in (SECREF5). A discussion went there that a simple count of words does not take into account that a sentence with a certain number of words may be considered long or short, based on the other sentences appeared the document. Taking this into consideration, we divided the number of words in the sentence by the average length of sentences in the document. More formally, the formula is: in which n is number of sentences in the document and $s_i$ is the i’th sentence of it. Values greater than 1 could be interpreted as long and vice versa. TF-ISF: this feature counts the frequency of terms in a document and assigns higher values to sentences having more frequent terms. It also discounts terms which appear in more sentences. Since it is well explained in the literature, we have not included details and formula which are in references BIBREF34 and BIBREF39. Nonetheless, the aspect that matters in our discussion is that both frequency and inverse sentence frequency are terms which involve properties of context, and consequently are document-aware. POS features: Here we introduce another way to include the ratio of part of speech (POS) units in features and keep them document-normalized. To do this, the number of occurrences of each POS unit should be divided by the number of them in the document, instead of that occurring in a sentence. 
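A toy illustration of why this document-level normalization matters is given below; all word counts are made up, and the point is only that the same 15-word sentence reads as short in one document and long in another once length is divided by the document average:

def length_unaware(sentence):
    """Document-unaware length: a bare word count (sentence is a list of words)."""
    return len(sentence)

def length_aware(sentence, document):
    """Document-aware relative length: the same count divided by the
    average sentence length of the containing document."""
    avg = sum(len(s) for s in document) / len(document)
    return len(sentence) / avg

doc_a = [["w"] * 30, ["w"] * 35, ["w"] * 15]   # a document of long sentences
doc_b = [["w"] * 8, ["w"] * 6, ["w"] * 15]     # a document of short sentences
print(length_unaware(doc_a[2]), round(length_aware(doc_a[2], doc_a), 2))  # 15 0.56 -> short here
print(length_unaware(doc_b[2]), round(length_aware(doc_b[2], doc_b), 2))  # 15 1.55 -> long here

The same reasoning motivates the document-normalized POS ratios described above.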
The formal definition of the new document-aware features are as follows: Incorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Explicit Document Features In order to further investigate how effective are document specific features in sentence ranking, we defined several features for documents. These features are then calculated for each document and repeated in the feature vector of every sentence of that document. Their formal definition is described below and their effect is examined in the result and discussion section (SECREF5): Document sentences: An important property of a document that affects summarization is the total number of sentences participating in sentence ranking. As this number grows, a summarizer should be more selective and precise. Also, some sentence features such as cue words, maybe more weighted for longer documents. In addition, the main contextual information is probably more distributed over sentences. In such a case even lower values of other features should be considered important. Document words: the number of words in the document is another notion of document length. Since the number of sentences alone is not enough to represent document length, this feature should also be considered. Topical category: different topics such as political, economic, etc. have different writing styles and this might affect sentence ranking. For instance, numerical entities may appear more in economic or sport reports than in religious or social news. Therefore the weight of this attribute should be more or less, based on a document’s category. So it needs to be included. An overview of our feature set is represented by example in figure FIGREF15. Column ID is just for enumeration and column Target is explained in the next section. Incorporating Document Features ::: Learning Phase ::: Target Assignment Every feature vector needs a target value from which the system should learn how to rank sentences. The value of target is usually determined based on golden summaries. If a sentence is included in a majority of human-written extracts, its target is near to 1. In contrast, it would be closer to 0 if the sentence could not be found in any human-made summaries. In some datasets, like the one we used, golden summaries are not absolutely extractive, and they are not composed of exact copies of sentences in the original text. In such cases, a measure of similarity between the sentence whose target we are looking for, and each ideal summaries’ sentence will be calculated. This results in real values between 0 and 1 for this attribute. Section (SECREF4) includes more details about target assignment. Incorporating Document Features ::: Learning Phase ::: Training Model Since target attribute values vary between zero and one, we opted to use regression methods for the learning task. To build a training and a test set, a global matrix is composed in which every row corresponds to a sentence in the corpus and each column corresponds to a feature. The last column is for target attribute which will be omitted in the test set. It might be required to perform scaling on certain columns, depending on its corresponding feature and range of values. In cases where the dataset is large, the total number of sentences which are not included in golden summaries, and consequently have lower targets, is many times larger than the number of included sentences. This might lead the regression bias toward lower target values. To avoid this, dataset balancing is needed. 
That is to leave aside a portion of not included sentences and not to feed them to learner model. Lastly, in this phase, the regression model should be fitted on training set and be evaluated on a test set as described in sections (SECREF4) and (SECREF5). Incorporating Document Features ::: Summarization Phase Having acquired a model that can precisely rank sentences, we can apply it to any new given text and use ranked sentences in order to create a summary. This summarization process could also be executed on dataset texts, in order to evaluate how precisely our method resembles human-written summaries. In this section, we briefly describe the summarization process. The evaluation process is explained in section (SECREF22). Incorporating Document Features ::: Summarization Phase ::: Feature Extraction Initially, sentence features need to be extracted. Again, normalization, sentence tokenization, word tokenization, and stop words removal are preliminary steps. The same features used in the learning phase should be calculated. Incorporating Document Features ::: Summarization Phase ::: Sentence Ranking In comparison with learning phase, in which a global matrix was used, this time a local matrix is composed whose rows correspond with the sentences of the input text. If during learning, any scaling was performed on features, they should be carried out here in the same manner. The matrix is then fed to the regressor obtained in the previous phase, and a rank value between zero and one will be predicted for each sentence. Incorporating Document Features ::: Summarization Phase ::: Sentence Selection By sorting sentences based on their ranks, the most appropriate sentences for being included in summary will be determined. To preserve readability, however, it is important to place them in the summary in the same order they appeared in the input document. Another consideration is the cut-off length. How many of the top sentences should we select for summary? The answer should be as simple as a constant number, a percentage of total sentences, or it could be determined by more advanced heuristics. We allowed cut-off length to be an input parameter. This allows us, in the evaluation phase, to produce summaries of dataset documents in the same length as golden summaries. This makes the comparison more equitable. Incorporating Document Features ::: Evaluation Measures In this section, some measures are described to evaluate the performance of both phases explained in the previous section: the learning phase and summarization phase. The former is evaluated using common regression metrics such as mean square error (MSE) and coefficient of determination (R2). The latter is carried out using ROUGE which is a well-known metric for evaluating summarization systems. Mean Square Error (MSE) is the average of squared errors in all estimated targets. An ideal regressor tends to make this measure as near as possible to zero. Though, an exact zero for MSE is not desirable, because it is suspected to be due to over fitting. The coefficient of determination is another metric for evaluating how well a regression model is fitted to data. It ranges from $-\infty $ to 1. As it approaches 1, “goodness-of-fit” is increased, while negative values show that the mean of data is a better estimator for target BIBREF40. ROUGE is proposed in BIBREF41 as an evaluation metric for summaries. 
It matches n-grams in both system produced summaries and reference summaries and returns the percentage of matching in terms of precision, recall and f-measure. There is a variety of ROUGE family metrics, namely ROUGE-1, ROUGE-2, and ROUGE-L. In ROUGE-1 the overlap of 1-grams, each word, is calculated. In ROUGE-2 the bigrams are considered as units of comparison. The ROUGE-L uses the Longest Common Subsequence (LCS) to measure resemblance. Nevertheless, we found that ROUGE assessments are always relatively high, even for a summary that is produced perfunctorily. Hence, we also designed a random summarizer that selects random sentences for the summary, and evaluated it by ROUGE. This could be used as a baseline for comparison. Experiments Two experiments were set up to verify our hypothesis: “sentence ranking is highly dependent to document, and features must also represent context”. The first experiment involves document-unaware features (listed in section SECREF5) alongside TF-ISF. In the second experiment, document-aware features were used instead of document-unaware ones. We also set up a random summarizer based on a random regressor that acts as a baseline for comparisons. More details are recorded in section (SECREF25). A good experimental study should be as reproducible as possible. Here we explain the technical details that are more specific to our dataset, to allow the interested user to set up the same experiments for further research. Experiments ::: Dataset We used the Pasokh dataset BIBREF42 that contains 100 Persian news documents each of which is associated with 5 summaries. Each summary consists of several sentences of the original text, selected by a human expert. Some sentences are slightly modified and are not, therefore, an exact copy of any original sentences. Documents are categorized into six categories such as political, economic and so on. The length of documents ranges from 4 to 156 sentences. Overall, it has about 2,500 sentences. Experiments ::: Extracting Features and Scaling All features introduced in section SECREF4 are calculated. Pre-processing, sentence and word tokenization, stop words removal, and part of speech tagging is performed using the Hazm library BIBREF43. The majority of features have a range between zero and one. Other features are passed to a min-max scaler to transform into the same range. For the category feature which is nominal, the one-hot-encoding method applied and six flag features used instead. Experiments ::: Target Assignment In assigning the target to a sentence, as mentioned in section (SECREF16), the goal is to assign a number between 0 and 1, with higher values as an indicator that the sentence is present in the majority of golden summaries. Because exact matching between sentences is not possible, to resolve the question of presence in a single golden summary such as $g$, we calculated the cosine similarity of the desired sentence with each sentence: $s_j\in g$ . Then the maximum value of these similarities is selected as an indicator of presence. This indicator is then calculated for other golden summaries and their average is assigned to the sentence as the target. in which G is set of summaries written for the document containing s. This is an additional explicit evidence that target (and subsequently, ranking) is related to the document. Experiments ::: Training Model A vast collection of scikit-learn tools were used for the learning phase. K-fold cross-validation is applied with k=4 and split size of 0.25. 
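A minimal sketch of this evaluation loop with scikit-learn is shown below, plugging in the SVR settings reported in the next paragraph; data splitting and balancing are simplified here and the feature matrix is assumed to be prepared as described earlier.

```python
from sklearn.model_selection import cross_validate
from sklearn.svm import SVR

def evaluate_ranker(X, y):
    """X: global sentence-feature matrix, y: target ranks in [0, 1]."""
    model = SVR(kernel="rbf", epsilon=0.01)          # settings reported in the following paragraph
    scores = cross_validate(model, X, y, cv=4,       # 4-fold cross-validation as described above
                            scoring=("neg_mean_squared_error", "r2"))
    return {"MSE": -scores["test_neg_mean_squared_error"].mean(),
            "R2": scores["test_r2"].mean()}
```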
Three different regression methods were applied, including Linear Regression, Decision Tree Regression, and Epsilon-Support Vector Regression(SVR). Overall results were the same with minor differences. Thus only the SVR result is reported. Various values for parameters were examined but the best results were achieved by epsilon=0.01, kernel=rbf, and default values for other parameters. With the aim of evaluating summary qualities, the fitted regressor of each run was used to rank documents sentences in the test set. To compare with each standard summary, a summary with the same count of sentences was produced, and compared by ROUGE. Averaging these ROUGE scores over each document and then over the dataset, the overall quality of summaries produced by the model can be obtained. The same process was repeated with a random regressor that needed no training, and which simply assigns a random number between zero and one to any given sample. Apart from measuring the performance of this regressor on the test set, the quality of summaries produced is evaluated and reported as a baseline. The juxtaposition of this baseline and our measured results will demonstrate how effective our feature set was and how intelligent our whole system worked. Results and Discussion In section (SECREF22) MSE, R2 and ROUGE scores are remarked as evaluation measures. The results of our experiments are reported below in terms of these measures. For better comparison, we also ran another experiment in which the random regressor was used for ranking sentences and producing summaries. Table TABREF28 shows and compares MSE and R2 reported from these experiments. The results show that in experiment 2, the mean squared error is reduced and the r2 score is increased. This means that using document-aware features leads to a more accurate learned model, proving our hypothesis about the relationship between document features and target ranks. ROUGE scores are displayed separately in terms of precision, recall and f-measure in Figures FIGREF29 to FIGREF31 respectively. F-measure scores are displayed in the figure FIGREF29, comparing ROUGE-1, ROUGE-2 and ROUGE-L. Figures FIGREF30 and FIGREF31 allow comparison of precision and recall scores. The higher values gained in experiment 2, confirm that document-aware features perform better than unaware features. These results are also interpretable from viewpoint of entropy-based decision tree methods. In learning phase, impurity of features within the whole dataset will be measured, and features having higher information gain will take place in upper levels of tree. But in summarization phase, within which decisions have to be made within a single document, impurity of those features may be low, causing less effective decisions and precision's. By incorporating document features, we help model to use different features (thus different trees) for different documents. Another insight gained from these charts is that a random summarizer resulted in scores more than 50% in all measures, and without using document-aware features, the model achieves a small improvement over a random summarizer. Conclusion This paper has discussed that in supervised extractive summarization, we cannot learn to rank by considering dataset sentences as independent educational examples. The rank of sentences is dependent on each other within a document. To overcome this issue, we suggested incorporating document features explicitly in the feature vector of sentences. 
We also suggested using features that take into account the properties of the document; we refer to this kind of feature as document-aware. The experiments we conducted demonstrated the benefit of adding explicit document features, as well as document-aware features, in both model precision and summary quality. For future work, more document-aware features can be examined. It is also possible to run the same experiments on an English (or any other language) dataset, if one is available. Another avenue for study is measuring the degree of entropy difference between a whole dataset and its individual documents, in a standard dataset. Our source code is hosted on GitHub and is published for later reference, further experiments, and reproduction of results. A web interface and a Telegram bot are also implemented as demos.
ROUGE-1 increases by 0.05, ROUGE-2 by 0.06 and ROUGE-L by 0.09
7d841b98bcee29aaa9852ef7ceea1213d703deba
7d841b98bcee29aaa9852ef7ceea1213d703deba_0
Q: Is new approach tested against state of the art? Text: Introduction From the early days of artificial intelligence, automatically summarizing a text was an interesting task for many researchers. Followed by the advance of the World Wide Web and the advent of concepts such as Social networks, Big Data, and Cloud computing among others, text summarization became a crucial task in many applications BIBREF0, BIBREF1, BIBREF2. For example, it is essential, in many search engines and text retrieval systems to display a portion of each result entry which is representative of the whole text BIBREF3, BIBREF4. It is also becoming essential for managers and the general public to gain the gist of news and articles immediately, in order to save time, while being inundated with information on social media BIBREF5. Researchers have approached this challenge from various perspectives and have obtained some promising results BIBREF6, BIBREF7. However, this area continues to present more research challenges and has a long path to maturity. One method of investigating this challenge, is (supervised) extractive summarization. Extractive implementations use a ranking mechanism and select top-n-ranked sentences as the summary BIBREF8. Sentences of a document are represented as vectors of features. Using summarization corpora, a rank will be assigned to each sentence, based on its presence in several human-written summaries (golden summaries). The system should then learn how to use those features to predict the rank of sentences in any given text. Various machine learning approaches such as regression and classification algorithms are used to perform the ranking task BIBREF9, BIBREF10. As far as our knowledge goes, in all current implementations, sets of sentence vectors of every document are merged together to compose a larger set, which is then passed to the learning model as a matrix. In this approach, the locality of ranks is disregarded. In other words, the rank of sentences is highly relative to the context and document. A sentence might be ranked high in one document while being ranked lower in another. As a result, merging sentences of a whole dataset into a matrix removes document boundaries and a main source of information will be lost. We addressed this issue by taking certain features of documents into account, such as its length, topical category and so on in addition to some new sentence features that also reflect document properties. Thus, more information will be provided to the model, and ranking could be done with respect to local features of the document. Our experiments show that this rectification leads to improvement in both the performance of the learned model and the quality of produced summaries. We also represent a new baseline for the evaluation of extractive text summarizers which can be used to measure the performance of any summarizing method more accurately. The remainder of this paper is organized as follows. (Section SECREF2) reviews related works. (Section SECREF3) presents the proposed method and evaluation measures. (Section SECREF5) discusses how the experiments are set up. The results are discussed in (Section SECREF5), and finally (Section SECREF6) concludes the paper. Related works Text summarization has been widely studied by both academic and enterprise disciplines. Text summarization methods may be classified into different types. Based on input type, there are single-document BIBREF11, BIBREF12 vs multi-document summarization methods BIBREF13, BIBREF14, BIBREF15. 
Based on language, there are mono-lingual, bilingual and multi-lingual methods BIBREF16. There are also “query focused” methods in which a summary relevant to a given query is produced BIBREF17. From the perspective of procedure, however, there are two main approaches: abstractive vs extractive BIBREF18. Abstractive approaches try to generate a new short text based on the concepts understood from the original text BIBREF19. This usually requires a full pass through NLP pipeline and is faced with many complexities and challenges BIBREF20. The abstractive approach relies on linguistic methods to examine and interpret the text in order to find new concepts and expressions. The output is a new shorter text which consists of the most important information from the original text document BIBREF8. Extractive approaches, on the other hand, select a few sentences from the document based on some measures in order to place them in a summary BIBREF8. A broad range of methods has been examined in this approach, including graph-based BIBREF8, BIBREF21, unsupervised BIBREF21, BIBREF22 and supervised (corpus-based) methods BIBREF9, BIBREF23, BIBREF24. In supervised methods, training data is generally needed to select important content from the documents. In these methods, usually, the problem is reduced to a classification or regression problem, and machine learning techniques applied to the dataset of documents and their gold summaries represented by some features. Support Vector Machines (SVM) BIBREF25 and neural networks BIBREF26 are more popular sentence classification algorithms. The key step in extractive summarization is to determine the importance of sentences in the document BIBREF27. Previous studies examine the ordinal position of sentences BIBREF28, BIBREF29, length of sentences BIBREF9, the ratio of nouns, the Ratio of Verbs, Ratio of Adjectives, Ratio of Adverbs BIBREF30, the Ratio of Numerical entities BIBREF31, BIBREF32 and Cue Words BIBREF28. Gupta and Lehal in their survey of text summarization techniques list the following groups of features: content-based, title-based, location-based, length-based, proper noun and upper-case word-based, font-based, specific phrase-based, and features based on sentence similarity to other sentences in a text BIBREF8. Previous studies use different sentence features such as terms from keywords/key phrases, terms from user queries, frequency of words, and position of words/sentences for text summarization BIBREF33. However, in most cases, selection and weighting of features are an important matter of debate. Some works have been carried out with respect to this BIBREF34, but none, to the best of our knowledge, has shown that target attribute is highly related to the scope of the document. It is occasionally mentioned but not included in practice. For instance, Ferreira et al studied various combinations of sentence scoring methods on three types of documents in BIBREF6 and BIBREF31 and concluded that the weight of features varies, dependent on the properties of context: “the effectiveness of sentence scoring methods for automatic extractive text summarization algorithms depends on the kind of text one wants to summarize, the length of documents, the kind of language used, and their structure.”. JY Yeh et al in BIBREF35 utilized a Genetic Algorithm (GA) to find the weight of features for calculating sentence scores. 
However, their following statement implies that performance of weights is generally dependent to genre, that could be seen as a feature of context: “It cannot be guaranteed that the score function whose feature weights are obtained by GA definitely performs well for the test corpus; nevertheless, if the genre of the test corpus is close to that of the training corpus, we can make a prediction that the score function will work well.” BIBREF35. Berenjkoub et al studied the effectiveness of various subsets of features in summarization of distinct sections of scientific papers BIBREF36. They showed that some features work well only in some specific portion of text, for example, on the abstract section, while others perform better on the methodology section. This could be considered to be a consequence of differences in the structure and context of each section. All the above studies imply the significance of document context in ranking. Nevertheless, it has not been given enough attention in the NLP community, and even sometimes is neglected. For instance, authors in BIBREF30 suggest the use of a wide range of various features. Among these, seventeen part-of-speech based sentences features have been introduced, all of which are sentence-normalized, but not document-normalized, i.e. they count the ratio of a syntactic unit e.g. verbs, divided by the number of words in a sentence. Such features do not consider the total number of those units, e.g. verbs, in the whole document. Our work contributes to this line of research and includes document features in the learning and ranking processes. Incorporating Document Features As a way to investigate the need for document features in sentence ranking (as explained in the introduction and related works), we introduced several document-level features and incorporated them in the summarization process. These features are listed under subsection (SECREF4). Although stages of our method do not differ from general supervised extractive summarization, the whole process is explained in order to clarify the method of investigation. Every supervised summarization has two phases. The first is the “Learning Phase”, a corpus of ideal summaries is used to train the system how to rank sentences. The second is the “Summarization Phase”, where the system applies its learning gained from the first phase, in order to rank the sentences of a new given text. A process of selection is then performed to form a summary. Each of these phases has several intricacies which are briefly described in the following sections. Incorporating Document Features ::: Learning Phase The input to this phase is a dataset of documents, each of which is associated with several human-written summaries. The output is a learned model with a good level of accuracy that is able to reliably predict the rank of sentences, in almost the same way that a human may rank them. To accomplish this, it is necessary to first perform normalization and transform various forms of phrases into their canonical form. Then, every text should be tokenized to sentences, and further tokenized to words. Another prerequisite is to remove stop words. The following subtasks should be carried out next. Incorporating Document Features ::: Learning Phase ::: Feature Extraction Foremost, it is necessary to represent each sentence with those features that have the most distinguishing effect on the prediction of the rank. Many features have been examined in the literature. 
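A minimal sketch of the preprocessing prerequisites mentioned above (normalization, sentence and word tokenization, stop-word removal), assuming the Hazm toolkit that the authors report using in their experiments; the stop-word list is supplied by the caller.

```python
from hazm import Normalizer, sent_tokenize, word_tokenize

def preprocess(text, stop_words):
    """Normalize the text, split it into sentences, tokenize, and drop stop words."""
    normalizer = Normalizer()
    sentences = sent_tokenize(normalizer.normalize(text))
    return [[w for w in word_tokenize(s) if w not in stop_words] for s in sentences]
```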
We entitle some as “document-aware” because they do implicitly represent some information about a document. However, other features have been used, that say nothing about the document in which they appeared. We call them “document-unaware”. In the previous sections, we argued that this lack of information might be misleading for the system, especially when we train it with sample sentences from different documents. Thus, we modified some document-unaware features and derived new features that cover document properties. We also examined the effect of incorporating explicit features of a document into vectors of its sentences. The following sub-sections describe the features mentioned above in more detail. Incorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Document-unaware Features Ordinal position: It is shown that inclusion of sentence, in summary, is relevant to its position in the document or even in a paragraph. Intuitively, sentences at the beginning or the end of a text are more likely to be included in the summary. Depending on how it is defined, this feature might be document-unaware or not. For example, in BIBREF29 and BIBREF37 it is defined as $\frac{5}{5}$ for the first sentence, $\frac{4}{5}$ for the second, and so on to $\frac{1}{5}$ for fifth and zero for remaining sentences. In another research conducted by Wong et al. BIBREF9, it is defined as $\frac{1}{sentence\ number}$. With such a definition, we may have several sentences, for example, with position=$\frac{1}{5}$ in the training set, these may not have the same sense of position. While a sentence position=$\frac{1}{5}$ means “among the firsts” in a document with 40 sentences, it has a totally different meaning of “in the middle”, in another document containing 10 sentences. Thus, a useful feature formula should involve differences of documents which may change the meaning of information within it. In our experiments, we used the definition of BIBREF9. A document-aware version of position will be introduced in (SECREF6). Length of sentence: the intuition behind this feature is that sentences of too long or too short length are less likely to be included in the summary. Like sentence position, this feature is also subject to the wrong definition that makes it document-unaware. For example, in BIBREF9 it is defined as a number of words in a sentence. Such a definition does not take into account that a sentence with, say 15 words may be considered long if all other sentences of document have fewer words. Another sentence with the same number of words may be regarded as short, because other sentences in that document have more than 15 words. This might occur due to different writing styles. However, we included this in our experiments to compare its effect with that of its document-aware counterpart, which will be listed in (SECREF6). The Ratio of Nouns: is defined in BIBREF30 as the number of nouns divided by total number of words in the sentence, after stop-words are removed. Three other features, Ratio of Verbs, Ratio of Adjectives, and Ratio of Adverbs are defined in the same manner and proved to have a positive effect on ranking performance. From our perspective, however, a sentence with a ratio of nouns =0.5, for example, in a document containing many nouns, must be discriminated in the training set from another sentence with the same ratio of nouns, that appeared in another document having fewer nouns. 
This feature does not represent how many nouns are there in the document, which is important in sentence ranking. The same discussion goes on to justify the need to consider the number of verbs, adjectives, and adverbs in the document. The impact of these features is examined in our experiments and compared to that of their document-aware counterparts. The Ratio of Numerical entities: assuming that sentences containing more numerical data are probably giving us more information, this feature may help us in ranking BIBREF31, BIBREF32. For calculation, we count the occurrences of numbers and digits proportional to the length of sentence. This feature must be less weighted if almost all sentences of a document have numerical data. However, it does not count numbers and digits in other sentences of the document. Cue Words: if a sentence contains special phrases such as “in conclusion”, “overall”, “to summarize”, “in a nutshell” and so forth, its selection as a part of the summary is more probable than others. The number of these phrases is counted for this feature. Incorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Document-aware Features Cosine position: As mentioned in (SECREF5) a good definition of position should take into account document length. A well-known formula used in the literature BIBREF38, BIBREF7 is in which index is an integer representing the order of sentences and T is the total number of sentences in document. This feature ranges from 0 to 1, the closer to the beginning or to the end, the higher value this feature will take. $\alpha $ is a tuning parameter. As it increases, the value of this feature will be distributed more equally over sentences. In this manner, equal values of this feature in the training set represent a uniform notion of position in a document, so it becomes document-aware. Relative Length: the intuition behind this feature is explained in (SECREF5). A discussion went there that a simple count of words does not take into account that a sentence with a certain number of words may be considered long or short, based on the other sentences appeared the document. Taking this into consideration, we divided the number of words in the sentence by the average length of sentences in the document. More formally, the formula is: in which n is number of sentences in the document and $s_i$ is the i’th sentence of it. Values greater than 1 could be interpreted as long and vice versa. TF-ISF: this feature counts the frequency of terms in a document and assigns higher values to sentences having more frequent terms. It also discounts terms which appear in more sentences. Since it is well explained in the literature, we have not included details and formula which are in references BIBREF34 and BIBREF39. Nonetheless, the aspect that matters in our discussion is that both frequency and inverse sentence frequency are terms which involve properties of context, and consequently are document-aware. POS features: Here we introduce another way to include the ratio of part of speech (POS) units in features and keep them document-normalized. To do this, the number of occurrences of each POS unit should be divided by the number of them in the document, instead of that occurring in a sentence. 
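Since the exact TF-ISF formula is deferred to the cited references, the sketch below uses one common formulation (term frequency weighted by the log inverse sentence frequency, length-normalized) purely for illustration.

```python
import math
from collections import Counter

def tf_isf(sentences):
    """sentences: list of token lists for one document; returns one score per sentence."""
    n = len(sentences)
    # Number of sentences in which each term appears.
    sentence_freq = Counter(t for s in sentences for t in set(s))
    scores = []
    for s in sentences:
        tf = Counter(s)
        score = sum(tf[t] * math.log(n / sentence_freq[t]) for t in tf)
        scores.append(score / (len(s) or 1))        # length-normalize the sum
    return scores
```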
The formal definition of the new document-aware features are as follows: Incorporating Document Features ::: Learning Phase ::: Feature Extraction ::: Explicit Document Features In order to further investigate how effective are document specific features in sentence ranking, we defined several features for documents. These features are then calculated for each document and repeated in the feature vector of every sentence of that document. Their formal definition is described below and their effect is examined in the result and discussion section (SECREF5): Document sentences: An important property of a document that affects summarization is the total number of sentences participating in sentence ranking. As this number grows, a summarizer should be more selective and precise. Also, some sentence features such as cue words, maybe more weighted for longer documents. In addition, the main contextual information is probably more distributed over sentences. In such a case even lower values of other features should be considered important. Document words: the number of words in the document is another notion of document length. Since the number of sentences alone is not enough to represent document length, this feature should also be considered. Topical category: different topics such as political, economic, etc. have different writing styles and this might affect sentence ranking. For instance, numerical entities may appear more in economic or sport reports than in religious or social news. Therefore the weight of this attribute should be more or less, based on a document’s category. So it needs to be included. An overview of our feature set is represented by example in figure FIGREF15. Column ID is just for enumeration and column Target is explained in the next section. Incorporating Document Features ::: Learning Phase ::: Target Assignment Every feature vector needs a target value from which the system should learn how to rank sentences. The value of target is usually determined based on golden summaries. If a sentence is included in a majority of human-written extracts, its target is near to 1. In contrast, it would be closer to 0 if the sentence could not be found in any human-made summaries. In some datasets, like the one we used, golden summaries are not absolutely extractive, and they are not composed of exact copies of sentences in the original text. In such cases, a measure of similarity between the sentence whose target we are looking for, and each ideal summaries’ sentence will be calculated. This results in real values between 0 and 1 for this attribute. Section (SECREF4) includes more details about target assignment. Incorporating Document Features ::: Learning Phase ::: Training Model Since target attribute values vary between zero and one, we opted to use regression methods for the learning task. To build a training and a test set, a global matrix is composed in which every row corresponds to a sentence in the corpus and each column corresponds to a feature. The last column is for target attribute which will be omitted in the test set. It might be required to perform scaling on certain columns, depending on its corresponding feature and range of values. In cases where the dataset is large, the total number of sentences which are not included in golden summaries, and consequently have lower targets, is many times larger than the number of included sentences. This might lead the regression bias toward lower target values. To avoid this, dataset balancing is needed. 
That is to leave aside a portion of not included sentences and not to feed them to learner model. Lastly, in this phase, the regression model should be fitted on training set and be evaluated on a test set as described in sections (SECREF4) and (SECREF5). Incorporating Document Features ::: Summarization Phase Having acquired a model that can precisely rank sentences, we can apply it to any new given text and use ranked sentences in order to create a summary. This summarization process could also be executed on dataset texts, in order to evaluate how precisely our method resembles human-written summaries. In this section, we briefly describe the summarization process. The evaluation process is explained in section (SECREF22). Incorporating Document Features ::: Summarization Phase ::: Feature Extraction Initially, sentence features need to be extracted. Again, normalization, sentence tokenization, word tokenization, and stop words removal are preliminary steps. The same features used in the learning phase should be calculated. Incorporating Document Features ::: Summarization Phase ::: Sentence Ranking In comparison with learning phase, in which a global matrix was used, this time a local matrix is composed whose rows correspond with the sentences of the input text. If during learning, any scaling was performed on features, they should be carried out here in the same manner. The matrix is then fed to the regressor obtained in the previous phase, and a rank value between zero and one will be predicted for each sentence. Incorporating Document Features ::: Summarization Phase ::: Sentence Selection By sorting sentences based on their ranks, the most appropriate sentences for being included in summary will be determined. To preserve readability, however, it is important to place them in the summary in the same order they appeared in the input document. Another consideration is the cut-off length. How many of the top sentences should we select for summary? The answer should be as simple as a constant number, a percentage of total sentences, or it could be determined by more advanced heuristics. We allowed cut-off length to be an input parameter. This allows us, in the evaluation phase, to produce summaries of dataset documents in the same length as golden summaries. This makes the comparison more equitable. Incorporating Document Features ::: Evaluation Measures In this section, some measures are described to evaluate the performance of both phases explained in the previous section: the learning phase and summarization phase. The former is evaluated using common regression metrics such as mean square error (MSE) and coefficient of determination (R2). The latter is carried out using ROUGE which is a well-known metric for evaluating summarization systems. Mean Square Error (MSE) is the average of squared errors in all estimated targets. An ideal regressor tends to make this measure as near as possible to zero. Though, an exact zero for MSE is not desirable, because it is suspected to be due to over fitting. The coefficient of determination is another metric for evaluating how well a regression model is fitted to data. It ranges from $-\infty $ to 1. As it approaches 1, “goodness-of-fit” is increased, while negative values show that the mean of data is a better estimator for target BIBREF40. ROUGE is proposed in BIBREF41 as an evaluation metric for summaries. 
It matches n-grams in both system produced summaries and reference summaries and returns the percentage of matching in terms of precision, recall and f-measure. There is a variety of ROUGE family metrics, namely ROUGE-1, ROUGE-2, and ROUGE-L. In ROUGE-1 the overlap of 1-grams, each word, is calculated. In ROUGE-2 the bigrams are considered as units of comparison. The ROUGE-L uses the Longest Common Subsequence (LCS) to measure resemblance. Nevertheless, we found that ROUGE assessments are always relatively high, even for a summary that is produced perfunctorily. Hence, we also designed a random summarizer that selects random sentences for the summary, and evaluated it by ROUGE. This could be used as a baseline for comparison. Experiments Two experiments were set up to verify our hypothesis: “sentence ranking is highly dependent to document, and features must also represent context”. The first experiment involves document-unaware features (listed in section SECREF5) alongside TF-ISF. In the second experiment, document-aware features were used instead of document-unaware ones. We also set up a random summarizer based on a random regressor that acts as a baseline for comparisons. More details are recorded in section (SECREF25). A good experimental study should be as reproducible as possible. Here we explain the technical details that are more specific to our dataset, to allow the interested user to set up the same experiments for further research. Experiments ::: Dataset We used the Pasokh dataset BIBREF42 that contains 100 Persian news documents each of which is associated with 5 summaries. Each summary consists of several sentences of the original text, selected by a human expert. Some sentences are slightly modified and are not, therefore, an exact copy of any original sentences. Documents are categorized into six categories such as political, economic and so on. The length of documents ranges from 4 to 156 sentences. Overall, it has about 2,500 sentences. Experiments ::: Extracting Features and Scaling All features introduced in section SECREF4 are calculated. Pre-processing, sentence and word tokenization, stop words removal, and part of speech tagging is performed using the Hazm library BIBREF43. The majority of features have a range between zero and one. Other features are passed to a min-max scaler to transform into the same range. For the category feature which is nominal, the one-hot-encoding method applied and six flag features used instead. Experiments ::: Target Assignment In assigning the target to a sentence, as mentioned in section (SECREF16), the goal is to assign a number between 0 and 1, with higher values as an indicator that the sentence is present in the majority of golden summaries. Because exact matching between sentences is not possible, to resolve the question of presence in a single golden summary such as $g$, we calculated the cosine similarity of the desired sentence with each sentence: $s_j\in g$ . Then the maximum value of these similarities is selected as an indicator of presence. This indicator is then calculated for other golden summaries and their average is assigned to the sentence as the target. in which G is set of summaries written for the document containing s. This is an additional explicit evidence that target (and subsequently, ranking) is related to the document. Experiments ::: Training Model A vast collection of scikit-learn tools were used for the learning phase. K-fold cross-validation is applied with k=4 and split size of 0.25. 
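The target-assignment procedure described above can be sketched as follows; whether the cosine similarity is computed over raw counts or TF-IDF vectors is not specified in the text, so the vectorizer here is an assumption.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def assign_target(sentence, golden_summaries):
    """golden_summaries: list of human summaries, each a list of sentence strings."""
    presence = []
    for summary in golden_summaries:
        vectorizer = TfidfVectorizer().fit(summary + [sentence])
        sims = cosine_similarity(vectorizer.transform([sentence]),
                                 vectorizer.transform(summary))
        presence.append(sims.max())      # best match inside this golden summary
    return float(np.mean(presence))      # averaged over all golden summaries of the document
```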
Three different regression methods were applied, including Linear Regression, Decision Tree Regression, and Epsilon-Support Vector Regression(SVR). Overall results were the same with minor differences. Thus only the SVR result is reported. Various values for parameters were examined but the best results were achieved by epsilon=0.01, kernel=rbf, and default values for other parameters. With the aim of evaluating summary qualities, the fitted regressor of each run was used to rank documents sentences in the test set. To compare with each standard summary, a summary with the same count of sentences was produced, and compared by ROUGE. Averaging these ROUGE scores over each document and then over the dataset, the overall quality of summaries produced by the model can be obtained. The same process was repeated with a random regressor that needed no training, and which simply assigns a random number between zero and one to any given sample. Apart from measuring the performance of this regressor on the test set, the quality of summaries produced is evaluated and reported as a baseline. The juxtaposition of this baseline and our measured results will demonstrate how effective our feature set was and how intelligent our whole system worked. Results and Discussion In section (SECREF22) MSE, R2 and ROUGE scores are remarked as evaluation measures. The results of our experiments are reported below in terms of these measures. For better comparison, we also ran another experiment in which the random regressor was used for ranking sentences and producing summaries. Table TABREF28 shows and compares MSE and R2 reported from these experiments. The results show that in experiment 2, the mean squared error is reduced and the r2 score is increased. This means that using document-aware features leads to a more accurate learned model, proving our hypothesis about the relationship between document features and target ranks. ROUGE scores are displayed separately in terms of precision, recall and f-measure in Figures FIGREF29 to FIGREF31 respectively. F-measure scores are displayed in the figure FIGREF29, comparing ROUGE-1, ROUGE-2 and ROUGE-L. Figures FIGREF30 and FIGREF31 allow comparison of precision and recall scores. The higher values gained in experiment 2, confirm that document-aware features perform better than unaware features. These results are also interpretable from viewpoint of entropy-based decision tree methods. In learning phase, impurity of features within the whole dataset will be measured, and features having higher information gain will take place in upper levels of tree. But in summarization phase, within which decisions have to be made within a single document, impurity of those features may be low, causing less effective decisions and precision's. By incorporating document features, we help model to use different features (thus different trees) for different documents. Another insight gained from these charts is that a random summarizer resulted in scores more than 50% in all measures, and without using document-aware features, the model achieves a small improvement over a random summarizer. Conclusion This paper has discussed that in supervised extractive summarization, we cannot learn to rank by considering dataset sentences as independent educational examples. The rank of sentences is dependent on each other within a document. To overcome this issue, we suggested incorporating document features explicitly in the feature vector of sentences. 
We also suggested using features that take into account the properties of the document; we refer to this kind of feature as document-aware. The experiments we conducted demonstrated the benefit of adding explicit document features, as well as document-aware features, in both model precision and summary quality. For future work, more document-aware features can be examined. It is also possible to run the same experiments on an English (or any other language) dataset, if one is available. Another avenue for study is measuring the degree of entropy difference between a whole dataset and its individual documents, in a standard dataset. Our source code is hosted on GitHub and is published for later reference, further experiments, and reproduction of results. A web interface and a Telegram bot are also implemented as demos.
No
4e8233826f9e04f5763b307988298e73f841af74
4e8233826f9e04f5763b307988298e73f841af74_0
Q: Is the dataset balanced across categories? Text: Introduction In our online world, social media users tweet, post, and message an incredible number of times each day, and the interconnected, information-heavy nature of our lives makes stress more prominent and easily observable than ever before. With many platforms such as Twitter, Reddit, and Facebook, the scientific community has access to a massive amount of data to study the daily worries and stresses of people across the world. Stress is a nearly universal phenomenon, and we have some evidence of its prevalence and recent increase. For example, the American Psychological Association (APA) has performed annual studies assessing stress in the United States since 2007 which demonstrate widespread experiences of chronic stress. Stress is a subjective experience whose effects and even definition can vary from person to person; as a baseline, the APA defines stress as a reaction to extant and future demands and pressures, which can be positive in moderation. Health and psychology researchers have extensively studied the connection between too much stress and physical and mental health BIBREF0, BIBREF1. In this work, we present a corpus of social media text for detecting the presence of stress. We hope this corpus will facilitate the development of models for this problem, which has diverse applications in areas such as diagnosing physical and mental illness, gauging public mood and worries in politics and economics, and tracking the effects of disasters. Our contributions are as follows: Dreaddit, a dataset of lengthy social media posts in five categories, each including stressful and non-stressful text and different ways of expressing stress, with a subset of the data annotated by human annotators; Supervised models, both discrete and neural, for predicting stress, providing benchmarks to stimulate further work in the area; and Analysis of the content of our dataset and the performance of our models, which provides insight into the problem of stress detection. In the remainder of this paper, we will review relevant work, describe our dataset and its annotation, provide some analysis of the data and stress detection problem, present and discuss results of some supervised models on our dataset, and finally conclude with our summary and future work. Related Work Because of the subjective nature of stress, relevant research tends to focus on physical signals, such as cortisol levels in saliva BIBREF2, electroencephalogram (EEG) readings BIBREF3, or speech data BIBREF4. This work captures important aspects of the human reaction to stress, but has the disadvantage that hardware or physical presence is required. However, because of the aforementioned proliferation of stress on social media, we believe that stress can be observed and studied purely from text. Other threads of research have also made this observation and generally use microblog data (e.g., Twitter). The most similar work to ours includes BIBREF5, who use Long Short-Term Memory Networks (LSTMs) to detect stress in speech and Twitter data; BIBREF6, who examine the Facebook and Twitter posts of users who score highly on a diagnostic stress questionnaire; and BIBREF7, who detect stress on microblogging websites using a Convolutional Neural Network (CNN) and factor graph model with a suite of discrete features. 
Our work is unique in that it uses data from Reddit, which is both typically longer and not typically as conducive to distant labeling as microblogs (which are labeled in the above work with hashtags or pattern matching, such as “I feel stressed”). The length of our posts will ultimately enable research into the causes of stress and will allow us to identify more implicit indicators. We also limit ourselves to text data and metadata (e.g., posting time, number of replies), whereas BIBREF5 also train on speech data and BIBREF7 include information from photos, neither of which is always available. Finally, we label individual parts of longer posts for acute stress using human annotators, while BIBREF6 label users themselves for chronic stress with the users' voluntary answers to a psychological questionnaire. Researchers have used Reddit data to examine a variety of mental health conditions such as depression BIBREF8 and other clinical diagnoses such as general anxiety BIBREF9, but to our knowledge, our corpus is the first to focus on stress as a general experience, not only a clinical concept. Dataset ::: Reddit Data Reddit is a social media website where users post in topic-specific communities called subreddits, and other users comment and vote on these posts. The lengthy nature of these posts makes Reddit an ideal source of data for studying the nuances of phenomena like stress. To collect expressions of stress, we select categories of subreddits where members are likely to discuss stressful topics: Interpersonal conflict: abuse and social domains. Posters in the abuse subreddits are largely survivors of an abusive relationship or situation sharing stories and support, while posters in the social subreddit post about any difficulty in a relationship (often but not exclusively romantic) and seek advice for how to handle the situation. Mental illness: anxiety and Post-Traumatic Stress Disorder (PTSD) domains. Posters in these subreddits seek advice about coping with mental illness and its symptoms, share support and successes, seek diagnoses, and so on. Financial need: financial domain. Posters in the financial subreddits generally seek financial or material help from other posters. We include ten subreddits in the five domains of abuse, social, anxiety, PTSD, and financial, as detailed in tab:data-spread, and our analysis focuses on the domain level. Using the PRAW API, we scrape all available posts on these subreddits between January 1, 2017 and November 19, 2018; in total, 187,444 posts. As we will describe in sec:annotation, we assign binary stress labels to 3,553 segments of these posts to form a supervised and semi-supervised training set. An example segment is shown in fig:stress-example. Highlighted phrases are indicators that the writer is stressed: the writer mentions common physical symptoms (nausea), explicitly names fear and dread, and uses language indicating helplessness and help-seeking behavior. The average length of a post in our dataset is 420 tokens, much longer than most microblog data (e.g., Twitter's character limit as of this writing is 280 characters). While we label segments that are about 100 tokens long, we still have much additional data from the author on which to draw. We feel this is important because, while our goal in this paper is to predict stress, having longer posts will ultimately allow more detailed study of the causes and effects of stress. In tab:data-examples, we provide examples of labeled segments from the various domains in our dataset. 
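A minimal sketch of the PRAW-based collection step described above; the credentials, the date-filtering loop, and the stored fields are illustrative rather than the authors' exact crawler.

```python
import praw

# Credentials are placeholders; subreddit names follow the categories described above.
reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="dreaddit-crawler")

def collect_posts(subreddit_name, start_utc, end_utc):
    """Keep self-posts whose creation time falls inside the collection window."""
    posts = []
    for submission in reddit.subreddit(subreddit_name).new(limit=None):
        if start_utc <= submission.created_utc <= end_utc:
            posts.append({
                "text": submission.selftext,
                "created_utc": submission.created_utc,
                "upvote_ratio": submission.upvote_ratio,   # later used as a social media feature
                "karma": submission.score,
                "num_comments": submission.num_comments,
            })
    return posts
```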
The samples are fairly typical; the dataset contains mostly first-person narrative accounts of personal experiences and requests for assistance or advice. Our data displays a range of topics, language, and agreement levels among annotators, and we provide only a few examples. Lengthier examples are available in the appendix. Dataset ::: Data Annotation We annotate a subset of the data using Amazon Mechanical Turk in order to begin exploring the characteristics of stress. We partition the posts into contiguous five-sentence chunks for labeling; we wish to annotate segments of the posts because we are ultimately interested in what parts of the post depict stress, but we find through manual inspection that some amount of context is important. Our posts, however, are quite long, and it would be difficult for annotators to read and annotate entire posts. This type of data will allow us in the future not only to classify the presence of stress, but also to locate its expressions in the text, even if they are diffused throughout the post. We set up an annotation task in which English-speaking Mechanical Turk Workers are asked to label five randomly selected text segments (of five sentences each) after taking a qualification test; Workers are allowed to select “Stress”, “Not Stress”, or “Can't Tell” for each segment. In our instructions, we define stress as follows: “The Oxford English Dictionary defines stress as `a state of mental or emotional strain or tension resulting from adverse or demanding circumstances'. This means that stress results from someone being uncertain that they can handle some threatening situation. We are interested in cases where that someone also feels negatively about it (sometimes we can find an event stressful, but also find it exciting and positive, like a first date or an interview).”. We specifically ask Workers to decide whether the author is expressing both stress and a negative attitude about it, not whether the situation itself seems stressful. Our full instructions are available in the appendix. We submit 4,000 segments, sampled equally from each domain and uniformly within domains, to Mechanical Turk to be annotated by at least five Workers each and include in each batch one of 50 “check questions” which have been previously verified by two in-house annotators. After removing annotations which failed the check questions, and data points for which at least half of the annotators selected “Can't Tell”, we are left with 3,553 labeled data points from 2,929 different posts. We take the annotators' majority vote as the label for each segment and record the percentage of annotators who agreed. The resulting dataset is nearly balanced, with 52.3% of the data (1,857 instances) labeled stressful. Our agreement on all labeled data is $\kappa =0.47$, using Fleiss's Kappa BIBREF10, considered “moderate agreement” by BIBREF11. We observe that annotators achieved perfect agreement on 39% of the data, and for another 32% the majority was 3/5 or less. This suggests that our data displays significant variation in how stress is expressed, which we explore in the next section. Data Analysis While all our data has the same genre and personal narrative style, we find distinctions among domains with which classification systems must contend in order to perform well, and distinctions between stressful and non-stressful data which may be useful when developing such systems. 
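A simplified sketch of the label aggregation and agreement computation described above, assuming a fixed number of annotators per segment and ignoring the “Can't Tell” option; it relies on the statsmodels implementation of Fleiss's kappa.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

def aggregate_annotations(annotations):
    """annotations: (n_segments, n_annotators) array with 0 = Not Stress, 1 = Stress."""
    annotations = np.asarray(annotations)
    vote_share = annotations.mean(axis=1)
    labels = (vote_share > 0.5).astype(int)              # majority vote per segment
    agreement = np.maximum(vote_share, 1 - vote_share)   # fraction agreeing with the majority
    table, _ = aggregate_raters(annotations)             # per-segment counts for each category
    return labels, agreement, fleiss_kappa(table)
```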
Posters in each subreddit express stress, but we expect that their different functions and stressors lead to differences in how they do so in each subreddit, domain, and broad category. By domain. We examine the vocabulary patterns of each domain on our training data only, not including unlabeled data so that we may extend our analysis to the label level. First, we use the word categories from the Linguistic Inquiry and Word Count (LIWC) BIBREF12, a lexicon-based tool that gives scores for psychologically relevant categories such as sadness or cognitive processes, as a proxy for topic prevalence and expression variety. We calculate both the percentage of tokens per domain which are included in a specific LIWC word list, and the percentage of words in a specific LIWC word list that appear in each domain (“coverage” of the domain). Results of the analysis are highlighted in tab:domain-liwc. We first note that variety of expression depends on domain and topic; for example, the variety in the expression of negative emotions is particularly low in the financial domain (with 1.54% of words being negative emotion (“negemo”) words and only 31% of “negemo” words used). We also see clear topic shifts among domains: the interpersonal domains contain roughly 1.5 times as many social words, proportionally, as the others; and domains are stratified by their coverage of the anxiety word list (with the most in the mental illness domains and the least in the financial domain). We also examine the overall lexical diversity of each domain by calculating Yule's I measure BIBREF13. fig:domain-yule shows the lexical diversity of our data, both for all words in the vocabulary and for only words in LIWC's “negemo” word list. Yule's I measure reflects the repetitiveness of the data (as opposed to the broader coverage measured by our LIWC analysis). We notice exceptionally low lexical diversity for the mental illness domains, which we believe is due to the structured, clinical language surrounding mental illnesses. For example, posters in these domains discuss topics such as symptoms, medical care, and diagnoses (fig:stress-example, tab:data-examples). When we restrict our analysis to negative emotion words, this pattern persists only for anxiety; the PTSD domain has comparatively little lexical variety, but what it does have contributes to its variety of expression for negative emotions. By label. We perform similar analyses on data labeled stressful or non-stressful by a majority of annotators. We confirm some common results in the mental health literature, including that stressful data uses more first-person pronouns (perhaps reflecting increased self-focus) and that non-stressful data uses more social words (perhaps reflecting a better social support network). Additionally, we calculate measures of syntactic complexity, including the percentage of words that are conjunctions, average number of tokens per labeled segment, average number of clauses per sentence, Flesch-Kincaid Grade Level BIBREF14, and Automated Readability Index BIBREF15. These scores are comparable for all splits of our data; however, as shown in tab:label-complexity, we do see non-significant but persistent differences between stressful and non-stressful data, with stressful data being generally longer and more complex but also rated simpler by readability indices. These findings are intriguing and can be explored in future work. By agreement. Finally, we examine the differences among annotator agreement levels. 
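The lexical-diversity figures above (and the agreement analysis that follows) rely on Yule's I measure; a small sketch under its common definition, I = M1^2 / (M2 - M1), is shown below, with a naive whitespace tokenizer as a stand-in for real preprocessing.

```python
# Sketch of Yule's I, the lexical-diversity measure used above
# (the inverse form of Yule's characteristic K; higher = more diverse).
from collections import Counter


def yules_i(tokens):
    """Yule's I for a list of tokens; larger values mean more lexical diversity."""
    freqs = Counter(t.lower() for t in tokens)
    # V(i) = number of word types that occur exactly i times.
    freq_of_freqs = Counter(freqs.values())
    m1 = sum(i * v for i, v in freq_of_freqs.items())       # total tokens
    m2 = sum(i * i * v for i, v in freq_of_freqs.items())
    return float("inf") if m2 == m1 else (m1 * m1) / (m2 - m1)


print(yules_i("the cat sat on the mat and the dog sat too".split()))
```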
We find an inverse relationship between the lexical variety and the proportion of annotators who agree, as shown in fig:agreement-diversity. While the amount of data and lexical variety seem to be related, Yule's I measure controls for length, so we believe that this trend reflects a difference in the type of data that encourages high or low agreement. Methods In order to train supervised models, we group the labeled segments by post and randomly select 10% of the posts ($\approx $ 10% of the labeled segments) to form a test set. This ensures that while there is a reasonable distribution of labels and domains in the train and test set, the two do not explicitly share any of the same content. This results in a total of 2,838 train data points (51.6% labeled stressful) and 715 test data points (52.4% labeled stressful). Because our data is relatively small, we train our traditional supervised models with 10-fold cross-validation; for our neural models, we break off a further random 10% of the training data for validation and average the predictions of 10 randomly-initialized trained models. In addition to the words of the posts (both as bag-of-n-grams and distributed word embeddings), we include features in three categories: Lexical features. Average, maximum, and minimum scores for pleasantness, activation, and imagery from the Dictionary of Affect in Language (DAL) BIBREF16; the full suite of 93 LIWC features; and sentiment calculated using the Pattern sentiment library BIBREF17. Syntactic features. Part-of-speech unigrams and bigrams, the Flesch-Kincaid Grade Level, and the Automated Readability Index. Social media features. The UTC timestamp of the post; the ratio of upvotes to downvotes on the post, where an upvote roughly corresponds to a reaction of “like” and a downvote to “dislike” (upvote ratio); the net score of the post (karma) (calculated by Reddit, $n_\text{upvotes} - n_\text{downvotes}$); and the total number of comments in the entire thread under the post. Methods ::: Supervised Models We first experiment with a suite of non-neural models, including Support Vector Machines (SVMs), logistic regression, Naïve Bayes, Perceptron, and decision trees. We tune the parameters for these models using grid search and 10-fold cross-validation, and obtain results for different combinations of input and features. For input representation, we experiment with bag-of-n-grams (for $n \in \lbrace 1..3\rbrace $), Google News pre-trained Word2Vec embeddings (300-dimensional) BIBREF18, Word2Vec embeddings trained on our large unlabeled corpus (300-dimensional, to match), and BERT embeddings trained on our unlabeled corpus (768-dimensional, the top-level [CLS] embedding) BIBREF19. We experiment with subsets of the above features, including separating the features by category (lexical, syntactic, social) and by magnitude of the Pearson correlation coefficient ($r$) with the training labels. Finally, we stratify the training data by annotator agreement, including separate experiments on only data for which all annotators agreed, data for which at least 4/5 annotators agreed, and so on. We finally experiment with neural models, although our dataset is relatively small. We train both a two-layer bidirectional Gated Recurrent Neural Network (GRNN) BIBREF20 and Convolutional Neural Network (CNN) (as designed in BIBREF21) with parallel filters of size 2 and 3, as these have been shown to be effective in the literature on emotion detection in text (e.g., BIBREF22, BIBREF23). 
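A sketch of the setup described above follows: holding out roughly 10% of posts so that train and test share no post, then grid-searching a logistic regression with 10-fold cross-validation. The feature matrix, labels, and post ids are random stand-ins, and the parameter grid is illustrative rather than the exact one searched.

```python
# Sketch of the post-grouped train/test split and a grid-searched
# logistic regression baseline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, GroupShuffleSplit

rng = np.random.RandomState(0)
X = rng.randn(200, 50)                     # stand-in feature matrix
y = rng.randint(0, 2, size=200)            # stand-in binary stress labels
groups = rng.randint(0, 40, size=200)      # stand-in post ids

# Hold out ~10% of posts so train and test never share a post.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.1, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups))

param_grid = {
    "C": [0.01, 0.1, 1, 10],
    "penalty": ["l2"],
    "class_weight": [None, "balanced"],
}
search = GridSearchCV(LogisticRegression(solver="liblinear", max_iter=1000),
                      param_grid, cv=10, scoring="f1")
search.fit(X[train_idx], y[train_idx])
print(search.best_params_, search.score(X[test_idx], y[test_idx]))
```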
Because neural models require large amounts of data, we do not cull the data by annotator agreement for these experiments and use all the labeled data we have. We experiment with training embeddings with random initialization as well as initializing with our domain-specific Word2Vec embeddings, and we also concatenate the best feature set from our non-neural experiments onto the representations after the recurrent and convolutional/pooling layers respectively. Finally, we apply BERT directly to our task, fine-tuning the pretrained BERT-base on our classification task for three epochs (as performed in BIBREF19 when applying BERT to any task). Our parameter settings for our various models are available in the appendix. Results and Discussion We present our results in tab:supervised-results. Our best model is a logistic regression classifier with Word2Vec embeddings trained on our unlabeled corpus, high-correlation features ($\ge $ 0.4 absolute Pearson's $r$), and high-agreement data (at least 4/5 annotators agreed); this model achieves an F-score of 79.8 on our test set, a significant improvement over the majority baseline, the n-gram baseline, and the pre-trained embedding model, (all by the approximate randomization test, $p < 0.01$). The high-correlation features used by this model are LIWC's clout, tone, and “I” pronoun features, and we investigate the use of these features in the other model types. Particularly, we apply different architectures (GRNN and CNN) and different input representations (pretrained Word2Vec, domain-specific BERT). We find that our logistic regression classifier described above achieves comparable performance to BERT-base (approximate randomization test, $p > 0.5$) with the added benefits of increased interpretability and less intensive training. Additionally, domain-specific word embeddings trained on our unlabeled corpus (Word2Vec, BERT) significantly outperform n-grams or pretrained embeddings, as expected, signaling the importance of domain knowledge in this problem. We note that our basic deep learning models do not perform as well as our traditional supervised models or BERT, although they consistently, significantly outperform the majority baseline. We believe this is due to a serious lack of data; our labeled dataset is orders of magnitude smaller than neural models typically require to perform well. We expect that neural models can make good use of our large unlabeled dataset, which we plan to explore in future work. We believe that the superior performance of the pretrained BERT-base model (which uses no additional features) on our dataset supports this hypothesis as well. In tab:data-and-feat-comparison, we examine the impact of different feature sets and levels of annotator agreement on our logistic regressor with domain-specific Word2Vec embeddings and find consistent patterns supporting this model. First, we see a tradeoff between data size and data quality, where lower-agreement data (which can be seen as lower-quality) results in worse performance, but the larger 80% agreement data consistently outperforms the smaller perfect agreement data. Additionally, LIWC features consistently perform well while syntactic features consistently do not, and we see a trend towards the quality of features over their quantity; those with the highest Pearson correlation with the train set (which all happen to be LIWC features) outperform sets with lower correlations, which in turn outperform the set of all features. 
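The significance claims above use the approximate randomization test; a sketch of that test for comparing two systems' F-scores is given below. The trial count and seed are arbitrary choices, not values from the paper.

```python
# Sketch of the approximate randomization test: randomly swap the two
# systems' predictions per example and count how often the F-score gap
# is at least as large as the observed one.
import numpy as np
from sklearn.metrics import f1_score


def approx_randomization(y_true, pred_a, pred_b, trials=10000, seed=0):
    rng = np.random.RandomState(seed)
    y_true, pred_a, pred_b = map(np.asarray, (y_true, pred_a, pred_b))
    observed = abs(f1_score(y_true, pred_a) - f1_score(y_true, pred_b))
    count = 0
    for _ in range(trials):
        swap = rng.rand(len(y_true)) < 0.5           # swap predictions per example
        a = np.where(swap, pred_b, pred_a)
        b = np.where(swap, pred_a, pred_b)
        if abs(f1_score(y_true, a) - f1_score(y_true, b)) >= observed:
            count += 1
    return (count + 1) / (trials + 1)                # p-value
```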
This suggests that stress detection is a highly lexical problem, and in particular, resources developed with psychological applications in mind, like LIWC, are very helpful. Finally, we perform an error analysis of the two best-performing models. Although the dataset is nearly balanced, both BERT-base and our best logistic regression model greatly overclassify stress, as shown in tab:confusion-matrices, and they broadly overlap but do differ in their predictions (disagreeing with one another on approximately 100 instances). We note that the examples misclassified by both models are often, though not always, ones with low annotator agreement (with the average percent agreement for misclassified examples being 0.55 for BERT and 0.61 for logistic regression). Both models seem to have trouble with less explicit expressions of stress, framing negative experiences in a positive or retrospective way, and stories where another person aside from the poster is the focus; these types of errors are difficult to capture with the features we used (primarily lexical), and further work should be aware of them. We include some examples of these errors in tab:error-analysis-paper, and further illustrative examples are available in the appendix. Conclusion and Future Work In this paper, we present a new dataset, Dreaddit, for stress classification in social media, and find the current baseline at 80% F-score on the binary stress classification problem. We believe this dataset has the potential to spur development of sophisticated, interpretable models of psychological stress. Analysis of our data and our models shows that stress detection is a highly lexical problem benefitting from domain knowledge, but we note there is still room for improvement, especially in incorporating the framing and intentions of the writer. We intend for our future work to use this dataset to contextualize stress and offer explanations using the content features of the text. Additional interesting problems applicable to this dataset include the development of effective distant labeling schemes, which is a significant first step to developing a quantitative model of stress. Acknowledgements We would like to thank Fei-Tzin Lee, Christopher Hidey, Diana Abagyan, and our anonymous reviewers for their insightful comments during the writing of this paper. This research was funded in part by a Presidential Fellowship from the Fu Foundation School of Engineering and Applied Science at Columbia University. Data Samples We include several full posts (with identifying information removed and whitespace collapsed) in fig:data-appendix-1,fig:data-appendix-2,fig:data-appendix-3,fig:data-appendix-4. Posts are otherwise reproduced exactly as obtained (with spelling errors, etc.). The selected examples are deliberately of a reasonable but fairly typical length for readability and space concerns; recall that our average post length is 420 tokens, longer for interpersonal subreddits and shorter for other subreddits. Full Annotation Guidelines We provide our annotation instructions in full in fig:annotation. Mechanical Turk Workers were given these instructions and examples followed by five text segments (one of which was one of our 50 check questions) and allowed to select “Stress”, “Not Stress', or “Can't Tell” for each. Workers were given one hour to complete the HIT and paid $0.12 for each HIT where they correctly answered the check question, with a limit of 30 total submissions per Worker. 
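A sketch of the kind of error analysis reported above is shown below: confusion matrices for the two best systems, the number of test examples on which they disagree, and the average annotator agreement on each system's misclassified examples. All arrays here are toy stand-ins for the real test set.

```python
# Sketch of the error analysis: confusion matrices, model disagreement,
# and average annotator agreement on misclassified examples.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])           # toy stand-ins
pred_bert = np.array([1, 1, 1, 1, 0, 1, 1, 0])
pred_logreg = np.array([1, 0, 1, 0, 1, 1, 1, 0])
agreement = np.array([1.0, 0.8, 0.6, 0.6, 1.0, 0.6, 0.8, 1.0])

for name, pred in [("bert", pred_bert), ("logreg", pred_logreg)]:
    print(name, confusion_matrix(y_true, pred))        # rows = gold, cols = predicted
    wrong = pred != y_true
    print(name, "avg agreement on errors:", agreement[wrong].mean())

print("systems disagree on", int((pred_bert != pred_logreg).sum()), "examples")
```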
Parameter Settings We tune our traditional supervised models' parameters using grid search, all as implemented in Python's scikit-learn library BIBREF25. Our best model uses unbalanced class weights, L2 penalty, and the regularization constant C=10, with other parameters at their default values. All cross-validation runs were initialized with the same random seed for comparability and reproducibility. We train each of our neural models with the Adam optimizer BIBREF24 for up to ten epochs with early stopping measured on the validation set. We apply a dropout rate of 0.5 during training in the recurrent layers and after the convolutional layers. We set our hidden sizes (i.e., the output of the recurrent and pooling layers) as well as our batch size to 128, and tune our learning rate to $5\cdot 10^{-4}$; we keep these parameters relatively small to suit our small dataset. We also experiment with scheduling the learning rate on plateau of the validation loss, and with pre-training the models on a much larger sentiment dataset, the Stanford Sentiment Treebank BIBREF26, to help combat the problem of small data, but this does not improve the performance of our neural networks. Error Analysis Examples As a supplement to our error analysis discussion in sec:results, we provide additional examples of test data points which one or both of our best models (BERT-base or our best logistic regressor with embeddings trained on our unlabeled corpus and high-correlation discrete features) failed to classify correctly in tab:error-analysis-appendix.
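A sketch of a two-layer bidirectional GRU classifier wired up with the hyperparameters listed above (hidden size 128, dropout 0.5, Adam at 5e-4, learning-rate scheduling on plateau) follows; the vocabulary size, embedding dimension, and appended feature vector are placeholders rather than the exact configuration.

```python
# Sketch of the GRNN-style classifier with the reported hyperparameters.
import torch
import torch.nn as nn


class GRUClassifier(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=300, hidden=128, feat_dim=0):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.gru = nn.GRU(emb_dim, hidden, num_layers=2, batch_first=True,
                          bidirectional=True, dropout=0.5)
        # Discrete features (e.g., LIWC scores) can be concatenated after
        # the recurrent layer, as in the experiments above.
        self.out = nn.Linear(2 * hidden + feat_dim, 2)

    def forward(self, token_ids, feats=None):
        _, h_n = self.gru(self.embed(token_ids))     # h_n: (layers*2, batch, hidden)
        h = torch.cat([h_n[-2], h_n[-1]], dim=-1)    # last layer, both directions
        if feats is not None:
            h = torch.cat([h, feats], dim=-1)
        return self.out(h)


model = GRUClassifier(feat_dim=3)                    # e.g., clout, tone, "I" pronoun
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=1)
# Training would run for up to ten epochs with early stopping on validation
# loss and batch size 128, as described in the parameter settings above.
```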
Yes
adae0c32a69928929101d0ba37d36c0a45298ad6
adae0c32a69928929101d0ba37d36c0a45298ad6_0
Q: What supervised methods are used?
Support Vector Machines (SVMs), logistic regression, Naïve Bayes, Perceptron, and decision trees, a two-layer bidirectional Gated Recurrent Neural Network (GRNN) BIBREF20 and Convolutional Neural Network (CNN) (as designed in BIBREF21)
d0f831c97d345a5b8149a9d51bf321f844518434
d0f831c97d345a5b8149a9d51bf321f844518434_0
Q: What labels are in the dataset? Text: Introduction In our online world, social media users tweet, post, and message an incredible number of times each day, and the interconnected, information-heavy nature of our lives makes stress more prominent and easily observable than ever before. With many platforms such as Twitter, Reddit, and Facebook, the scientific community has access to a massive amount of data to study the daily worries and stresses of people across the world. Stress is a nearly universal phenomenon, and we have some evidence of its prevalence and recent increase. For example, the American Psychological Association (APA) has performed annual studies assessing stress in the United States since 2007 which demonstrate widespread experiences of chronic stress. Stress is a subjective experience whose effects and even definition can vary from person to person; as a baseline, the APA defines stress as a reaction to extant and future demands and pressures, which can be positive in moderation. Health and psychology researchers have extensively studied the connection between too much stress and physical and mental health BIBREF0, BIBREF1. In this work, we present a corpus of social media text for detecting the presence of stress. We hope this corpus will facilitate the development of models for this problem, which has diverse applications in areas such as diagnosing physical and mental illness, gauging public mood and worries in politics and economics, and tracking the effects of disasters. Our contributions are as follows: Dreaddit, a dataset of lengthy social media posts in five categories, each including stressful and non-stressful text and different ways of expressing stress, with a subset of the data annotated by human annotators; Supervised models, both discrete and neural, for predicting stress, providing benchmarks to stimulate further work in the area; and Analysis of the content of our dataset and the performance of our models, which provides insight into the problem of stress detection. In the remainder of this paper, we will review relevant work, describe our dataset and its annotation, provide some analysis of the data and stress detection problem, present and discuss results of some supervised models on our dataset, and finally conclude with our summary and future work. Related Work Because of the subjective nature of stress, relevant research tends to focus on physical signals, such as cortisol levels in saliva BIBREF2, electroencephalogram (EEG) readings BIBREF3, or speech data BIBREF4. This work captures important aspects of the human reaction to stress, but has the disadvantage that hardware or physical presence is required. However, because of the aforementioned proliferation of stress on social media, we believe that stress can be observed and studied purely from text. Other threads of research have also made this observation and generally use microblog data (e.g., Twitter). The most similar work to ours includes BIBREF5, who use Long Short-Term Memory Networks (LSTMs) to detect stress in speech and Twitter data; BIBREF6, who examine the Facebook and Twitter posts of users who score highly on a diagnostic stress questionnaire; and BIBREF7, who detect stress on microblogging websites using a Convolutional Neural Network (CNN) and factor graph model with a suite of discrete features. 
Our work is unique in that it uses data from Reddit, which is both typically longer and not typically as conducive to distant labeling as microblogs (which are labeled in the above work with hashtags or pattern matching, such as “I feel stressed”). The length of our posts will ultimately enable research into the causes of stress and will allow us to identify more implicit indicators. We also limit ourselves to text data and metadata (e.g., posting time, number of replies), whereas BIBREF5 also train on speech data and BIBREF7 include information from photos, neither of which is always available. Finally, we label individual parts of longer posts for acute stress using human annotators, while BIBREF6 label users themselves for chronic stress with the users' voluntary answers to a psychological questionnaire. Researchers have used Reddit data to examine a variety of mental health conditions such as depression BIBREF8 and other clinical diagnoses such as general anxiety BIBREF9, but to our knowledge, our corpus is the first to focus on stress as a general experience, not only a clinical concept. Dataset ::: Reddit Data Reddit is a social media website where users post in topic-specific communities called subreddits, and other users comment and vote on these posts. The lengthy nature of these posts makes Reddit an ideal source of data for studying the nuances of phenomena like stress. To collect expressions of stress, we select categories of subreddits where members are likely to discuss stressful topics: Interpersonal conflict: abuse and social domains. Posters in the abuse subreddits are largely survivors of an abusive relationship or situation sharing stories and support, while posters in the social subreddit post about any difficulty in a relationship (often but not exclusively romantic) and seek advice for how to handle the situation. Mental illness: anxiety and Post-Traumatic Stress Disorder (PTSD) domains. Posters in these subreddits seek advice about coping with mental illness and its symptoms, share support and successes, seek diagnoses, and so on. Financial need: financial domain. Posters in the financial subreddits generally seek financial or material help from other posters. We include ten subreddits in the five domains of abuse, social, anxiety, PTSD, and financial, as detailed in tab:data-spread, and our analysis focuses on the domain level. Using the PRAW API, we scrape all available posts on these subreddits between January 1, 2017 and November 19, 2018; in total, 187,444 posts. As we will describe in sec:annotation, we assign binary stress labels to 3,553 segments of these posts to form a supervised and semi-supervised training set. An example segment is shown in fig:stress-example. Highlighted phrases are indicators that the writer is stressed: the writer mentions common physical symptoms (nausea), explicitly names fear and dread, and uses language indicating helplessness and help-seeking behavior. The average length of a post in our dataset is 420 tokens, much longer than most microblog data (e.g., Twitter's character limit as of this writing is 280 characters). While we label segments that are about 100 tokens long, we still have much additional data from the author on which to draw. We feel this is important because, while our goal in this paper is to predict stress, having longer posts will ultimately allow more detailed study of the causes and effects of stress. In tab:data-examples, we provide examples of labeled segments from the various domains in our dataset. 
The samples are fairly typical; the dataset contains mostly first-person narrative accounts of personal experiences and requests for assistance or advice. Our data displays a range of topics, language, and agreement levels among annotators, and we provide only a few examples. Lengthier examples are available in the appendix. Dataset ::: Data Annotation We annotate a subset of the data using Amazon Mechanical Turk in order to begin exploring the characteristics of stress. We partition the posts into contiguous five-sentence chunks for labeling; we wish to annotate segments of the posts because we are ultimately interested in what parts of the post depict stress, but we find through manual inspection that some amount of context is important. Our posts, however, are quite long, and it would be difficult for annotators to read and annotate entire posts. This type of data will allow us in the future not only to classify the presence of stress, but also to locate its expressions in the text, even if they are diffused throughout the post. We set up an annotation task in which English-speaking Mechanical Turk Workers are asked to label five randomly selected text segments (of five sentences each) after taking a qualification test; Workers are allowed to select “Stress”, “Not Stress”, or “Can't Tell” for each segment. In our instructions, we define stress as follows: “The Oxford English Dictionary defines stress as `a state of mental or emotional strain or tension resulting from adverse or demanding circumstances'. This means that stress results from someone being uncertain that they can handle some threatening situation. We are interested in cases where that someone also feels negatively about it (sometimes we can find an event stressful, but also find it exciting and positive, like a first date or an interview).”. We specifically ask Workers to decide whether the author is expressing both stress and a negative attitude about it, not whether the situation itself seems stressful. Our full instructions are available in the appendix. We submit 4,000 segments, sampled equally from each domain and uniformly within domains, to Mechanical Turk to be annotated by at least five Workers each and include in each batch one of 50 “check questions” which have been previously verified by two in-house annotators. After removing annotations which failed the check questions, and data points for which at least half of the annotators selected “Can't Tell”, we are left with 3,553 labeled data points from 2,929 different posts. We take the annotators' majority vote as the label for each segment and record the percentage of annotators who agreed. The resulting dataset is nearly balanced, with 52.3% of the data (1,857 instances) labeled stressful. Our agreement on all labeled data is $\kappa =0.47$, using Fleiss's Kappa BIBREF10, considered “moderate agreement” by BIBREF11. We observe that annotators achieved perfect agreement on 39% of the data, and for another 32% the majority was 3/5 or less. This suggests that our data displays significant variation in how stress is expressed, which we explore in the next section. Data Analysis While all our data has the same genre and personal narrative style, we find distinctions among domains with which classification systems must contend in order to perform well, and distinctions between stressful and non-stressful data which may be useful when developing such systems. 
Posters in each subreddit express stress, but we expect that their different functions and stressors lead to differences in how they do so in each subreddit, domain, and broad category. By domain. We examine the vocabulary patterns of each domain on our training data only, not including unlabeled data so that we may extend our analysis to the label level. First, we use the word categories from the Linguistic Inquiry and Word Count (LIWC) BIBREF12, a lexicon-based tool that gives scores for psychologically relevant categories such as sadness or cognitive processes, as a proxy for topic prevalence and expression variety. We calculate both the percentage of tokens per domain which are included in a specific LIWC word list, and the percentage of words in a specific LIWC word list that appear in each domain (“coverage” of the domain). Results of the analysis are highlighted in tab:domain-liwc. We first note that variety of expression depends on domain and topic; for example, the variety in the expression of negative emotions is particularly low in the financial domain (with 1.54% of words being negative emotion (“negemo”) words and only 31% of “negemo” words used). We also see clear topic shifts among domains: the interpersonal domains contain roughly 1.5 times as many social words, proportionally, as the others; and domains are stratified by their coverage of the anxiety word list (with the most in the mental illness domains and the least in the financial domain). We also examine the overall lexical diversity of each domain by calculating Yule's I measure BIBREF13. fig:domain-yule shows the lexical diversity of our data, both for all words in the vocabulary and for only words in LIWC's “negemo” word list. Yule's I measure reflects the repetitiveness of the data (as opposed to the broader coverage measured by our LIWC analysis). We notice exceptionally low lexical diversity for the mental illness domains, which we believe is due to the structured, clinical language surrounding mental illnesses. For example, posters in these domains discuss topics such as symptoms, medical care, and diagnoses (fig:stress-example, tab:data-examples). When we restrict our analysis to negative emotion words, this pattern persists only for anxiety; the PTSD domain has comparatively little lexical variety, but what it does have contributes to its variety of expression for negative emotions. By label. We perform similar analyses on data labeled stressful or non-stressful by a majority of annotators. We confirm some common results in the mental health literature, including that stressful data uses more first-person pronouns (perhaps reflecting increased self-focus) and that non-stressful data uses more social words (perhaps reflecting a better social support network). Additionally, we calculate measures of syntactic complexity, including the percentage of words that are conjunctions, average number of tokens per labeled segment, average number of clauses per sentence, Flesch-Kincaid Grade Level BIBREF14, and Automated Readability Index BIBREF15. These scores are comparable for all splits of our data; however, as shown in tab:label-complexity, we do see non-significant but persistent differences between stressful and non-stressful data, with stressful data being generally longer and more complex but also rated simpler by readability indices. These findings are intriguing and can be explored in future work. By agreement. Finally, we examine the differences among annotator agreement levels. 
We find an inverse relationship between the lexical variety and the proportion of annotators who agree, as shown in fig:agreement-diversity. While the amount of data and lexical variety seem to be related, Yule's I measure controls for length, so we believe that this trend reflects a difference in the type of data that encourages high or low agreement. Methods In order to train supervised models, we group the labeled segments by post and randomly select 10% of the posts ($\approx $ 10% of the labeled segments) to form a test set. This ensures that while there is a reasonable distribution of labels and domains in the train and test set, the two do not explicitly share any of the same content. This results in a total of 2,838 train data points (51.6% labeled stressful) and 715 test data points (52.4% labeled stressful). Because our data is relatively small, we train our traditional supervised models with 10-fold cross-validation; for our neural models, we break off a further random 10% of the training data for validation and average the predictions of 10 randomly-initialized trained models. In addition to the words of the posts (both as bag-of-n-grams and distributed word embeddings), we include features in three categories: Lexical features. Average, maximum, and minimum scores for pleasantness, activation, and imagery from the Dictionary of Affect in Language (DAL) BIBREF16; the full suite of 93 LIWC features; and sentiment calculated using the Pattern sentiment library BIBREF17. Syntactic features. Part-of-speech unigrams and bigrams, the Flesch-Kincaid Grade Level, and the Automated Readability Index. Social media features. The UTC timestamp of the post; the ratio of upvotes to downvotes on the post, where an upvote roughly corresponds to a reaction of “like” and a downvote to “dislike” (upvote ratio); the net score of the post (karma) (calculated by Reddit, $n_\text{upvotes} - n_\text{downvotes}$); and the total number of comments in the entire thread under the post. Methods ::: Supervised Models We first experiment with a suite of non-neural models, including Support Vector Machines (SVMs), logistic regression, Naïve Bayes, Perceptron, and decision trees. We tune the parameters for these models using grid search and 10-fold cross-validation, and obtain results for different combinations of input and features. For input representation, we experiment with bag-of-n-grams (for $n \in \lbrace 1..3\rbrace $), Google News pre-trained Word2Vec embeddings (300-dimensional) BIBREF18, Word2Vec embeddings trained on our large unlabeled corpus (300-dimensional, to match), and BERT embeddings trained on our unlabeled corpus (768-dimensional, the top-level [CLS] embedding) BIBREF19. We experiment with subsets of the above features, including separating the features by category (lexical, syntactic, social) and by magnitude of the Pearson correlation coefficient ($r$) with the training labels. Finally, we stratify the training data by annotator agreement, including separate experiments on only data for which all annotators agreed, data for which at least 4/5 annotators agreed, and so on. We finally experiment with neural models, although our dataset is relatively small. We train both a two-layer bidirectional Gated Recurrent Neural Network (GRNN) BIBREF20 and Convolutional Neural Network (CNN) (as designed in BIBREF21) with parallel filters of size 2 and 3, as these have been shown to be effective in the literature on emotion detection in text (e.g., BIBREF22, BIBREF23). 
Because neural models require large amounts of data, we do not cull the data by annotator agreement for these experiments and use all the labeled data we have. We experiment with training embeddings with random initialization as well as initializing with our domain-specific Word2Vec embeddings, and we also concatenate the best feature set from our non-neural experiments onto the representations after the recurrent and convolutional/pooling layers respectively. Finally, we apply BERT directly to our task, fine-tuning the pretrained BERT-base on our classification task for three epochs (as performed in BIBREF19 when applying BERT to any task). Our parameter settings for our various models are available in the appendix. Results and Discussion We present our results in tab:supervised-results. Our best model is a logistic regression classifier with Word2Vec embeddings trained on our unlabeled corpus, high-correlation features ($\ge $ 0.4 absolute Pearson's $r$), and high-agreement data (at least 4/5 annotators agreed); this model achieves an F-score of 79.8 on our test set, a significant improvement over the majority baseline, the n-gram baseline, and the pre-trained embedding model, (all by the approximate randomization test, $p < 0.01$). The high-correlation features used by this model are LIWC's clout, tone, and “I” pronoun features, and we investigate the use of these features in the other model types. Particularly, we apply different architectures (GRNN and CNN) and different input representations (pretrained Word2Vec, domain-specific BERT). We find that our logistic regression classifier described above achieves comparable performance to BERT-base (approximate randomization test, $p > 0.5$) with the added benefits of increased interpretability and less intensive training. Additionally, domain-specific word embeddings trained on our unlabeled corpus (Word2Vec, BERT) significantly outperform n-grams or pretrained embeddings, as expected, signaling the importance of domain knowledge in this problem. We note that our basic deep learning models do not perform as well as our traditional supervised models or BERT, although they consistently, significantly outperform the majority baseline. We believe this is due to a serious lack of data; our labeled dataset is orders of magnitude smaller than neural models typically require to perform well. We expect that neural models can make good use of our large unlabeled dataset, which we plan to explore in future work. We believe that the superior performance of the pretrained BERT-base model (which uses no additional features) on our dataset supports this hypothesis as well. In tab:data-and-feat-comparison, we examine the impact of different feature sets and levels of annotator agreement on our logistic regressor with domain-specific Word2Vec embeddings and find consistent patterns supporting this model. First, we see a tradeoff between data size and data quality, where lower-agreement data (which can be seen as lower-quality) results in worse performance, but the larger 80% agreement data consistently outperforms the smaller perfect agreement data. Additionally, LIWC features consistently perform well while syntactic features consistently do not, and we see a trend towards the quality of features over their quantity; those with the highest Pearson correlation with the train set (which all happen to be LIWC features) outperform sets with lower correlations, which in turn outperform the set of all features. 
This suggests that stress detection is a highly lexical problem, and in particular, resources developed with psychological applications in mind, like LIWC, are very helpful. Finally, we perform an error analysis of the two best-performing models. Although the dataset is nearly balanced, both BERT-base and our best logistic regression model greatly overclassify stress, as shown in tab:confusion-matrices, and they broadly overlap but do differ in their predictions (disagreeing with one another on approximately 100 instances). We note that the examples misclassified by both models are often, though not always, ones with low annotator agreement (with the average percent agreement for misclassified examples being 0.55 for BERT and 0.61 for logistic regression). Both models seem to have trouble with less explicit expressions of stress, framing negative experiences in a positive or retrospective way, and stories where another person aside from the poster is the focus; these types of errors are difficult to capture with the features we used (primarily lexical), and further work should be aware of them. We include some examples of these errors in tab:error-analysis-paper, and further illustrative examples are available in the appendix. Conclusion and Future Work In this paper, we present a new dataset, Dreaddit, for stress classification in social media, and find the current baseline at 80% F-score on the binary stress classification problem. We believe this dataset has the potential to spur development of sophisticated, interpretable models of psychological stress. Analysis of our data and our models shows that stress detection is a highly lexical problem benefitting from domain knowledge, but we note there is still room for improvement, especially in incorporating the framing and intentions of the writer. We intend for our future work to use this dataset to contextualize stress and offer explanations using the content features of the text. Additional interesting problems applicable to this dataset include the development of effective distant labeling schemes, which is a significant first step to developing a quantitative model of stress. Acknowledgements We would like to thank Fei-Tzin Lee, Christopher Hidey, Diana Abagyan, and our anonymous reviewers for their insightful comments during the writing of this paper. This research was funded in part by a Presidential Fellowship from the Fu Foundation School of Engineering and Applied Science at Columbia University. Data Samples We include several full posts (with identifying information removed and whitespace collapsed) in fig:data-appendix-1,fig:data-appendix-2,fig:data-appendix-3,fig:data-appendix-4. Posts are otherwise reproduced exactly as obtained (with spelling errors, etc.). The selected examples are deliberately of a reasonable but fairly typical length for readability and space concerns; recall that our average post length is 420 tokens, longer for interpersonal subreddits and shorter for other subreddits. Full Annotation Guidelines We provide our annotation instructions in full in fig:annotation. Mechanical Turk Workers were given these instructions and examples followed by five text segments (one of which was one of our 50 check questions) and allowed to select “Stress”, “Not Stress', or “Can't Tell” for each. Workers were given one hour to complete the HIT and paid $0.12 for each HIT where they correctly answered the check question, with a limit of 30 total submissions per Worker. 
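As a pointer toward the parameter settings below, a rough sketch of the grid-searched logistic regression setup from the Methods section might look like the following; the feature matrix here is a random stand-in for the averaged domain-specific embeddings plus the high-correlation LIWC features, so nothing in this snippet reproduces the reported scores.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2838, 303))   # placeholder: 300-dim embeddings + 3 high-correlation features
    y = rng.integers(0, 2, size=2838)  # placeholder binary stress labels

    grid = GridSearchCV(
        LogisticRegression(max_iter=1000),
        param_grid={"C": [0.1, 1, 10], "class_weight": [None, "balanced"]},
        cv=10, scoring="f1",
    )
    grid.fit(X, y)
    print(grid.best_params_, grid.best_score_)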
Parameter Settings We tune our traditional supervised models' parameters using grid search, all as implemented in Python's scikit-learn library BIBREF25. Our best model uses unbalanced class weights, L2 penalty, and a constant term C=10, with other parameters at their default values. All cross-validation runs were initialized with the same random seed for comparability and reproducibility. We train each of our neural models with the Adam optimizer BIBREF24 for up to ten epochs with early stopping measured on the validation set. We apply a dropout rate of 0.5 during training in the recurrent layers and after the convolutional layers. We set our hidden sizes (i.e., the output of the recurrent and pooling layers) as well as our batch size to 128, and tune our learning rate to $5\cdot 10^{-4}$; we set these parameters relatively small to try to work with our small data. We also experiment with scheduling the learning rate on plateau of the validation loss, and with pre-training the models on a much larger sentiment dataset, the Stanford Sentiment Treebank BIBREF26, to help combat the problem of small data, but this does not improve the performance of our neural networks. Error Analysis Examples As a supplement to our error analysis discussion in sec:results, we provide additional examples of test data points which one or both of our best models (BERT-base or our best logistic regressor with embeddings trained on our unlabeled corpus and high-correlation discrete features) failed to classify correctly in tab:error-analysis-appendix.
binary label of stress or not stress
1ccfd288f746c35006f5847297ab52020729f523
1ccfd288f746c35006f5847297ab52020729f523_0
Q: What categories does the dataset come from? Text: Introduction In our online world, social media users tweet, post, and message an incredible number of times each day, and the interconnected, information-heavy nature of our lives makes stress more prominent and easily observable than ever before. With many platforms such as Twitter, Reddit, and Facebook, the scientific community has access to a massive amount of data to study the daily worries and stresses of people across the world. Stress is a nearly universal phenomenon, and we have some evidence of its prevalence and recent increase. For example, the American Psychological Association (APA) has performed annual studies assessing stress in the United States since 2007 which demonstrate widespread experiences of chronic stress. Stress is a subjective experience whose effects and even definition can vary from person to person; as a baseline, the APA defines stress as a reaction to extant and future demands and pressures, which can be positive in moderation. Health and psychology researchers have extensively studied the connection between too much stress and physical and mental health BIBREF0, BIBREF1. In this work, we present a corpus of social media text for detecting the presence of stress. We hope this corpus will facilitate the development of models for this problem, which has diverse applications in areas such as diagnosing physical and mental illness, gauging public mood and worries in politics and economics, and tracking the effects of disasters. Our contributions are as follows: Dreaddit, a dataset of lengthy social media posts in five categories, each including stressful and non-stressful text and different ways of expressing stress, with a subset of the data annotated by human annotators; Supervised models, both discrete and neural, for predicting stress, providing benchmarks to stimulate further work in the area; and Analysis of the content of our dataset and the performance of our models, which provides insight into the problem of stress detection. In the remainder of this paper, we will review relevant work, describe our dataset and its annotation, provide some analysis of the data and stress detection problem, present and discuss results of some supervised models on our dataset, and finally conclude with our summary and future work. Related Work Because of the subjective nature of stress, relevant research tends to focus on physical signals, such as cortisol levels in saliva BIBREF2, electroencephalogram (EEG) readings BIBREF3, or speech data BIBREF4. This work captures important aspects of the human reaction to stress, but has the disadvantage that hardware or physical presence is required. However, because of the aforementioned proliferation of stress on social media, we believe that stress can be observed and studied purely from text. Other threads of research have also made this observation and generally use microblog data (e.g., Twitter). The most similar work to ours includes BIBREF5, who use Long Short-Term Memory Networks (LSTMs) to detect stress in speech and Twitter data; BIBREF6, who examine the Facebook and Twitter posts of users who score highly on a diagnostic stress questionnaire; and BIBREF7, who detect stress on microblogging websites using a Convolutional Neural Network (CNN) and factor graph model with a suite of discrete features. 
Our work is unique in that it uses data from Reddit, which is both typically longer and not typically as conducive to distant labeling as microblogs (which are labeled in the above work with hashtags or pattern matching, such as “I feel stressed”). The length of our posts will ultimately enable research into the causes of stress and will allow us to identify more implicit indicators. We also limit ourselves to text data and metadata (e.g., posting time, number of replies), whereas BIBREF5 also train on speech data and BIBREF7 include information from photos, neither of which is always available. Finally, we label individual parts of longer posts for acute stress using human annotators, while BIBREF6 label users themselves for chronic stress with the users' voluntary answers to a psychological questionnaire. Researchers have used Reddit data to examine a variety of mental health conditions such as depression BIBREF8 and other clinical diagnoses such as general anxiety BIBREF9, but to our knowledge, our corpus is the first to focus on stress as a general experience, not only a clinical concept. Dataset ::: Reddit Data Reddit is a social media website where users post in topic-specific communities called subreddits, and other users comment and vote on these posts. The lengthy nature of these posts makes Reddit an ideal source of data for studying the nuances of phenomena like stress. To collect expressions of stress, we select categories of subreddits where members are likely to discuss stressful topics: Interpersonal conflict: abuse and social domains. Posters in the abuse subreddits are largely survivors of an abusive relationship or situation sharing stories and support, while posters in the social subreddit post about any difficulty in a relationship (often but not exclusively romantic) and seek advice for how to handle the situation. Mental illness: anxiety and Post-Traumatic Stress Disorder (PTSD) domains. Posters in these subreddits seek advice about coping with mental illness and its symptoms, share support and successes, seek diagnoses, and so on. Financial need: financial domain. Posters in the financial subreddits generally seek financial or material help from other posters. We include ten subreddits in the five domains of abuse, social, anxiety, PTSD, and financial, as detailed in tab:data-spread, and our analysis focuses on the domain level. Using the PRAW API, we scrape all available posts on these subreddits between January 1, 2017 and November 19, 2018; in total, 187,444 posts. As we will describe in sec:annotation, we assign binary stress labels to 3,553 segments of these posts to form a supervised and semi-supervised training set. An example segment is shown in fig:stress-example. Highlighted phrases are indicators that the writer is stressed: the writer mentions common physical symptoms (nausea), explicitly names fear and dread, and uses language indicating helplessness and help-seeking behavior. The average length of a post in our dataset is 420 tokens, much longer than most microblog data (e.g., Twitter's character limit as of this writing is 280 characters). While we label segments that are about 100 tokens long, we still have much additional data from the author on which to draw. We feel this is important because, while our goal in this paper is to predict stress, having longer posts will ultimately allow more detailed study of the causes and effects of stress. In tab:data-examples, we provide examples of labeled segments from the various domains in our dataset. 
The samples are fairly typical; the dataset contains mostly first-person narrative accounts of personal experiences and requests for assistance or advice. Our data displays a range of topics, language, and agreement levels among annotators, and we provide only a few examples. Lengthier examples are available in the appendix. Dataset ::: Data Annotation We annotate a subset of the data using Amazon Mechanical Turk in order to begin exploring the characteristics of stress. We partition the posts into contiguous five-sentence chunks for labeling; we wish to annotate segments of the posts because we are ultimately interested in what parts of the post depict stress, but we find through manual inspection that some amount of context is important. Our posts, however, are quite long, and it would be difficult for annotators to read and annotate entire posts. This type of data will allow us in the future not only to classify the presence of stress, but also to locate its expressions in the text, even if they are diffused throughout the post. We set up an annotation task in which English-speaking Mechanical Turk Workers are asked to label five randomly selected text segments (of five sentences each) after taking a qualification test; Workers are allowed to select “Stress”, “Not Stress”, or “Can't Tell” for each segment. In our instructions, we define stress as follows: “The Oxford English Dictionary defines stress as `a state of mental or emotional strain or tension resulting from adverse or demanding circumstances'. This means that stress results from someone being uncertain that they can handle some threatening situation. We are interested in cases where that someone also feels negatively about it (sometimes we can find an event stressful, but also find it exciting and positive, like a first date or an interview).”. We specifically ask Workers to decide whether the author is expressing both stress and a negative attitude about it, not whether the situation itself seems stressful. Our full instructions are available in the appendix. We submit 4,000 segments, sampled equally from each domain and uniformly within domains, to Mechanical Turk to be annotated by at least five Workers each and include in each batch one of 50 “check questions” which have been previously verified by two in-house annotators. After removing annotations which failed the check questions, and data points for which at least half of the annotators selected “Can't Tell”, we are left with 3,553 labeled data points from 2,929 different posts. We take the annotators' majority vote as the label for each segment and record the percentage of annotators who agreed. The resulting dataset is nearly balanced, with 52.3% of the data (1,857 instances) labeled stressful. Our agreement on all labeled data is $\kappa =0.47$, using Fleiss's Kappa BIBREF10, considered “moderate agreement” by BIBREF11. We observe that annotators achieved perfect agreement on 39% of the data, and for another 32% the majority was 3/5 or less. This suggests that our data displays significant variation in how stress is expressed, which we explore in the next section. Data Analysis While all our data has the same genre and personal narrative style, we find distinctions among domains with which classification systems must contend in order to perform well, and distinctions between stressful and non-stressful data which may be useful when developing such systems. 
Posters in each subreddit express stress, but we expect that their different functions and stressors lead to differences in how they do so in each subreddit, domain, and broad category. By domain. We examine the vocabulary patterns of each domain on our training data only, not including unlabeled data so that we may extend our analysis to the label level. First, we use the word categories from the Linguistic Inquiry and Word Count (LIWC) BIBREF12, a lexicon-based tool that gives scores for psychologically relevant categories such as sadness or cognitive processes, as a proxy for topic prevalence and expression variety. We calculate both the percentage of tokens per domain which are included in a specific LIWC word list, and the percentage of words in a specific LIWC word list that appear in each domain (“coverage” of the domain). Results of the analysis are highlighted in tab:domain-liwc. We first note that variety of expression depends on domain and topic; for example, the variety in the expression of negative emotions is particularly low in the financial domain (with 1.54% of words being negative emotion (“negemo”) words and only 31% of “negemo” words used). We also see clear topic shifts among domains: the interpersonal domains contain roughly 1.5 times as many social words, proportionally, as the others; and domains are stratified by their coverage of the anxiety word list (with the most in the mental illness domains and the least in the financial domain). We also examine the overall lexical diversity of each domain by calculating Yule's I measure BIBREF13. fig:domain-yule shows the lexical diversity of our data, both for all words in the vocabulary and for only words in LIWC's “negemo” word list. Yule's I measure reflects the repetitiveness of the data (as opposed to the broader coverage measured by our LIWC analysis). We notice exceptionally low lexical diversity for the mental illness domains, which we believe is due to the structured, clinical language surrounding mental illnesses. For example, posters in these domains discuss topics such as symptoms, medical care, and diagnoses (fig:stress-example, tab:data-examples). When we restrict our analysis to negative emotion words, this pattern persists only for anxiety; the PTSD domain has comparatively little lexical variety, but what it does have contributes to its variety of expression for negative emotions. By label. We perform similar analyses on data labeled stressful or non-stressful by a majority of annotators. We confirm some common results in the mental health literature, including that stressful data uses more first-person pronouns (perhaps reflecting increased self-focus) and that non-stressful data uses more social words (perhaps reflecting a better social support network). Additionally, we calculate measures of syntactic complexity, including the percentage of words that are conjunctions, average number of tokens per labeled segment, average number of clauses per sentence, Flesch-Kincaid Grade Level BIBREF14, and Automated Readability Index BIBREF15. These scores are comparable for all splits of our data; however, as shown in tab:label-complexity, we do see non-significant but persistent differences between stressful and non-stressful data, with stressful data being generally longer and more complex but also rated simpler by readability indices. These findings are intriguing and can be explored in future work. By agreement. Finally, we examine the differences among annotator agreement levels. 
We find an inverse relationship between the lexical variety and the proportion of annotators who agree, as shown in fig:agreement-diversity. While the amount of data and lexical variety seem to be related, Yule's I measure controls for length, so we believe that this trend reflects a difference in the type of data that encourages high or low agreement. Methods In order to train supervised models, we group the labeled segments by post and randomly select 10% of the posts ($\approx $ 10% of the labeled segments) to form a test set. This ensures that while there is a reasonable distribution of labels and domains in the train and test set, the two do not explicitly share any of the same content. This results in a total of 2,838 train data points (51.6% labeled stressful) and 715 test data points (52.4% labeled stressful). Because our data is relatively small, we train our traditional supervised models with 10-fold cross-validation; for our neural models, we break off a further random 10% of the training data for validation and average the predictions of 10 randomly-initialized trained models. In addition to the words of the posts (both as bag-of-n-grams and distributed word embeddings), we include features in three categories: Lexical features. Average, maximum, and minimum scores for pleasantness, activation, and imagery from the Dictionary of Affect in Language (DAL) BIBREF16; the full suite of 93 LIWC features; and sentiment calculated using the Pattern sentiment library BIBREF17. Syntactic features. Part-of-speech unigrams and bigrams, the Flesch-Kincaid Grade Level, and the Automated Readability Index. Social media features. The UTC timestamp of the post; the ratio of upvotes to downvotes on the post, where an upvote roughly corresponds to a reaction of “like” and a downvote to “dislike” (upvote ratio); the net score of the post (karma) (calculated by Reddit, $n_\text{upvotes} - n_\text{downvotes}$); and the total number of comments in the entire thread under the post. Methods ::: Supervised Models We first experiment with a suite of non-neural models, including Support Vector Machines (SVMs), logistic regression, Naïve Bayes, Perceptron, and decision trees. We tune the parameters for these models using grid search and 10-fold cross-validation, and obtain results for different combinations of input and features. For input representation, we experiment with bag-of-n-grams (for $n \in \lbrace 1..3\rbrace $), Google News pre-trained Word2Vec embeddings (300-dimensional) BIBREF18, Word2Vec embeddings trained on our large unlabeled corpus (300-dimensional, to match), and BERT embeddings trained on our unlabeled corpus (768-dimensional, the top-level [CLS] embedding) BIBREF19. We experiment with subsets of the above features, including separating the features by category (lexical, syntactic, social) and by magnitude of the Pearson correlation coefficient ($r$) with the training labels. Finally, we stratify the training data by annotator agreement, including separate experiments on only data for which all annotators agreed, data for which at least 4/5 annotators agreed, and so on. We finally experiment with neural models, although our dataset is relatively small. We train both a two-layer bidirectional Gated Recurrent Neural Network (GRNN) BIBREF20 and Convolutional Neural Network (CNN) (as designed in BIBREF21) with parallel filters of size 2 and 3, as these have been shown to be effective in the literature on emotion detection in text (e.g., BIBREF22, BIBREF23). 
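As a brief aside on the input representations above, extracting the top-level 768-dimensional [CLS] embedding with the HuggingFace transformers API might look like the sketch below; the paper trains BERT on its own unlabeled Reddit corpus, so the public bert-base-uncased checkpoint here is only a stand-in.

    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    batch = tokenizer(["I can't sleep and I dread going to work tomorrow."],
                      return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        cls_embedding = model(**batch).last_hidden_state[:, 0, :]   # (1, 768) [CLS] vector
    print(cls_embedding.shape)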
Because neural models require large amounts of data, we do not cull the data by annotator agreement for these experiments and use all the labeled data we have. We experiment with training embeddings with random initialization as well as initializing with our domain-specific Word2Vec embeddings, and we also concatenate the best feature set from our non-neural experiments onto the representations after the recurrent and convolutional/pooling layers respectively. Finally, we apply BERT directly to our task, fine-tuning the pretrained BERT-base on our classification task for three epochs (as performed in BIBREF19 when applying BERT to any task). Our parameter settings for our various models are available in the appendix. Results and Discussion We present our results in tab:supervised-results. Our best model is a logistic regression classifier with Word2Vec embeddings trained on our unlabeled corpus, high-correlation features ($\ge $ 0.4 absolute Pearson's $r$), and high-agreement data (at least 4/5 annotators agreed); this model achieves an F-score of 79.8 on our test set, a significant improvement over the majority baseline, the n-gram baseline, and the pre-trained embedding model, (all by the approximate randomization test, $p < 0.01$). The high-correlation features used by this model are LIWC's clout, tone, and “I” pronoun features, and we investigate the use of these features in the other model types. Particularly, we apply different architectures (GRNN and CNN) and different input representations (pretrained Word2Vec, domain-specific BERT). We find that our logistic regression classifier described above achieves comparable performance to BERT-base (approximate randomization test, $p > 0.5$) with the added benefits of increased interpretability and less intensive training. Additionally, domain-specific word embeddings trained on our unlabeled corpus (Word2Vec, BERT) significantly outperform n-grams or pretrained embeddings, as expected, signaling the importance of domain knowledge in this problem. We note that our basic deep learning models do not perform as well as our traditional supervised models or BERT, although they consistently, significantly outperform the majority baseline. We believe this is due to a serious lack of data; our labeled dataset is orders of magnitude smaller than neural models typically require to perform well. We expect that neural models can make good use of our large unlabeled dataset, which we plan to explore in future work. We believe that the superior performance of the pretrained BERT-base model (which uses no additional features) on our dataset supports this hypothesis as well. In tab:data-and-feat-comparison, we examine the impact of different feature sets and levels of annotator agreement on our logistic regressor with domain-specific Word2Vec embeddings and find consistent patterns supporting this model. First, we see a tradeoff between data size and data quality, where lower-agreement data (which can be seen as lower-quality) results in worse performance, but the larger 80% agreement data consistently outperforms the smaller perfect agreement data. Additionally, LIWC features consistently perform well while syntactic features consistently do not, and we see a trend towards the quality of features over their quantity; those with the highest Pearson correlation with the train set (which all happen to be LIWC features) outperform sets with lower correlations, which in turn outperform the set of all features. 
This suggests that stress detection is a highly lexical problem, and in particular, resources developed with psychological applications in mind, like LIWC, are very helpful. Finally, we perform an error analysis of the two best-performing models. Although the dataset is nearly balanced, both BERT-base and our best logistic regression model greatly overclassify stress, as shown in tab:confusion-matrices, and they broadly overlap but do differ in their predictions (disagreeing with one another on approximately 100 instances). We note that the examples misclassified by both models are often, though not always, ones with low annotator agreement (with the average percent agreement for misclassified examples being 0.55 for BERT and 0.61 for logistic regression). Both models seem to have trouble with less explicit expressions of stress, framing negative experiences in a positive or retrospective way, and stories where another person aside from the poster is the focus; these types of errors are difficult to capture with the features we used (primarily lexical), and further work should be aware of them. We include some examples of these errors in tab:error-analysis-paper, and further illustrative examples are available in the appendix. Conclusion and Future Work In this paper, we present a new dataset, Dreaddit, for stress classification in social media, and find the current baseline at 80% F-score on the binary stress classification problem. We believe this dataset has the potential to spur development of sophisticated, interpretable models of psychological stress. Analysis of our data and our models shows that stress detection is a highly lexical problem benefitting from domain knowledge, but we note there is still room for improvement, especially in incorporating the framing and intentions of the writer. We intend for our future work to use this dataset to contextualize stress and offer explanations using the content features of the text. Additional interesting problems applicable to this dataset include the development of effective distant labeling schemes, which is a significant first step to developing a quantitative model of stress. Acknowledgements We would like to thank Fei-Tzin Lee, Christopher Hidey, Diana Abagyan, and our anonymous reviewers for their insightful comments during the writing of this paper. This research was funded in part by a Presidential Fellowship from the Fu Foundation School of Engineering and Applied Science at Columbia University. Data Samples We include several full posts (with identifying information removed and whitespace collapsed) in fig:data-appendix-1,fig:data-appendix-2,fig:data-appendix-3,fig:data-appendix-4. Posts are otherwise reproduced exactly as obtained (with spelling errors, etc.). The selected examples are deliberately of a reasonable but fairly typical length for readability and space concerns; recall that our average post length is 420 tokens, longer for interpersonal subreddits and shorter for other subreddits. Full Annotation Guidelines We provide our annotation instructions in full in fig:annotation. Mechanical Turk Workers were given these instructions and examples followed by five text segments (one of which was one of our 50 check questions) and allowed to select “Stress”, “Not Stress', or “Can't Tell” for each. Workers were given one hour to complete the HIT and paid $0.12 for each HIT where they correctly answered the check question, with a limit of 30 total submissions per Worker. 
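Given the five-annotator labeling scheme described above, the reported inter-annotator agreement can be computed with Fleiss's kappa; the snippet below is a toy sketch using statsmodels with made-up labels (0 = not stress, 1 = stress, 2 = can't tell), not the actual annotation matrix.

    import numpy as np
    from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

    ratings = np.array([        # rows are segments, columns are the five annotators
        [1, 1, 1, 1, 1],
        [0, 0, 1, 0, 0],
        [1, 0, 1, 1, 2],
    ])
    table, _ = aggregate_raters(ratings)   # per-segment counts of each category
    print(fleiss_kappa(table))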
Parameter Settings We tune our traditional supervised models' parameters using grid search, all as implemented in Python's scikit-learn library BIBREF25. Our best model uses unbalanced class weights, L2 penalty, and a constant term C=10, with other parameters at their default values. All cross-validation runs were initialized with the same random seed for comparability and reproducibility. We train each of our neural models with the Adam optimizer BIBREF24 for up to ten epochs with early stopping measured on the validation set. We apply a dropout rate of 0.5 during training in the recurrent layers and after the convolutional layers. We set our hidden sizes (i.e., the output of the recurrent and pooling layers) as well as our batch size to 128, and tune our learning rate to $5\cdot 10^{-4}$; we set these parameters relatively small to try to work with our small data. We also experiment with scheduling the learning rate on plateau of the validation loss, and with pre-training the models on a much larger sentiment dataset, the Stanford Sentiment Treebank BIBREF26, to help combat the problem of small data, but this does not improve the performance of our neural networks. Error Analysis Examples As a supplement to our error analysis discussion in sec:results, we provide additional examples of test data points which one or both of our best models (BERT-base or our best logistic regressor with embeddings trained on our unlabeled corpus and high-correlation discrete features) failed to classify correctly in tab:error-analysis-appendix.
abuse, social, anxiety, PTSD, and financial
ed6462da17c553bda112ef35917fefe6942fce3c
ed6462da17c553bda112ef35917fefe6942fce3c_0
Q: What are all machine learning approaches compared in this work? Text: Introduction In an organization, the Information Technology (IT) support help desk is an important unit which handles the IT services of a business. Many large-scale organizations have a comprehensive IT support team to handle engagement and requests with employees on a 24$\times $7 basis. As with any routinized task, most processes of the support help desk unit are considered repetitive in nature BIBREF0. Some may occur on a daily basis and others may occur more frequently. Many support engineers and agents spend time on repetitive tasks such as entering information into an application, resetting passwords, unlocking applications, creating credentials, activating services, preparing documentation, etc. The industry has now come to realize that many repetitive business processes and tasks can be automated by using Robotic Process Automation (RPA) bots, i.e., robotic process automation software bots BIBREF1. The idea is to take the repetitive workload and hand it over to the RPA bots so that employees can focus on more value-adding tasks and decision making for the organization. The RPA bot also helps to reduce human errors and make processes more efficient, which ultimately results in cost savings and increased productivity. Our proposed automated approach is focused not only on automating repetitive tasks but also on analyzing historical data, enabling the IT support desk process to identify unforeseen insights and patterns. Analyzing data from various sources such as email communications, service request information generated from support ticketing applications, and even conversational data from chats has helped us to identify the types of Service Requests (SR) raised and their respective solutions, as well as the fixes applied by the support agents. This approach has helped us create a classification model to identify the issue types and provide quick fixes and resolutions from the collected data. Related Work Wróblewska conducted a project on RPA of unstructured data, focused on building an Artificial Intelligence (AI) system dedicated to processing the formal documents used in different kinds of business procedures BIBREF2. The approach was introduced to automate the debt collection process, and possible applications of Machine Learning (ML) methods to improve the efficacy of these processes were described. In the case study done by Aguirre, it was concluded that companies should consider RPA to be most suitable for high-volume, standardized tasks that are rule-driven, with no requirement for subjective judgement, creativity, or interpretation skills BIBREF3. Back-office business processes such as accounts payable, accounts receivable, billing, travel and expenses, fixed assets, and human resource administration are good candidates for RPA. Extreme multi-class and multi-label text classification problems are solved by the methodology named Hierarchical Label Set Expansion (HLSE) BIBREF4. That work presents a deep learning architecture devoted to text classification, in which the data labels are regularized, a hierarchical label set is defined, and different word embeddings are used BIBREF3, BIBREF5, BIBREF6.
The traditional model performed better than the deep learning models on 8,841 emails collected over 3 years, because the particular classification task carried out by Haoran may not require the ordered sequence representation of tokens that deep learning models provide BIBREF7. That paper also claims that a bagged voting model surpasses the performance of any individual model. In their survey, Kamran and other researchers analyzed text feature extraction BIBREF8, BIBREF9, dimensionality reduction methods, existing algorithms and techniques, evaluation methods and limitations BIBREF6, and advantages based on applications. Paramesh et al. and Seongwook et al. compare different classification algorithms such as multinomial naive Bayes, logistic regression, K-Nearest Neighbour, and Support Vector Machines (SVM) on real-world IT infrastructure ticket classifier system data, using different evaluation metrics in their research BIBREF10, BIBREF11. They claimed that SVM performed well on all the data samples. Random forest (RF) or naive Bayes (NB) performed best in terms of correctly uncovering human intuitions. Hartmann et al. present in their study that RF exhibits high performance in sentiment classification research done on 41 social media data sets covering major social media platforms, where SVM never outperforms RF BIBREF12. Cognitive RPA can be undertaken efficiently as a low-cost solution with Microsoft Azure Language Understanding Intelligent Service (LUIS) BIBREF8 and Azure Machine Learning Studio. Section III of this paper elaborates the automation process, Section IV explains the email classification approach, and Section V presents the results and their analysis. Finally, Section VI contains the conclusions. Method We propose a hybrid-process automation, introducing the automation architecture while retaining the manual process methodology: incoming emails that cannot be classified or understood by the knowledge base of the automation system are sent for manual classification. Method ::: Manual Process Providing technical support for large firms around the world has many challenges, such as coordinating vast amounts of mail and matching experts with employees who need that expertise. When a technical issue is raised by a base-level employee who works with applications, it is sent to the middle level and then to the higher-level management of the respective regional branch, following the hierarchical business architecture. Once it is approved by the branch manager, the issue email is forwarded to the technical coordinator, who categorizes the issue based on its priority level and technical requirements. The technical coordinator is responsible for the issues raised by regional branches all over the world. Each regional branch is given a unique name, such as New York, Sydney, London, Beijing, or Toronto, referred to as Category1 (cat1). Category1 is identified by looking at the email address of the sender. Each regional branch has different plant applications that need different experts' consultation. Plant applications such as SAP, Darwin, and infrastructure are referred to as Category2 (cat2). The possible issue types described in the emails, such as computer, manufacturing, userID, userunlock, financial, planning, and purchasing issues generated by employees working in various plant applications across various regions, are referred to as Category3 (cat3).
A mapping table is created linking the plants located in the regional offices with the issues raised by those plants. Category1, Category2, and Category3 contain 84, 8, and 77 unique categories to be classified, respectively. Table I shows some examples for each category. Once all three categories are finalized by the technical coordinator, email tickets are created and assigned to the admin-groups. The respective technical people in the admin-groups provide consultancy and solve the issues. One technician can handle issues assigned to many different admin groups allocated to him, and a particular admin category can likewise be handled by many technicians as a group. Method ::: Proposed Automation System In addition to replacing the technical coordinator role with an AI bot that classifies the raised email-issue tickets for the respective admin groups, we propose instant quick fixes for some emails in an automated manner. The high-level workflow is described in Fig. 1. The AI bot has three main stages: quick fixes, static rules, and the email classifier. All incoming mails are preprocessed for better input quality. Signatures, greetings, and Uniform Resource Locators (URLs) are removed. The key body is extracted from forwarded mails by digging into the mail contents. If an email contains attachments, Optical Character Recognition (OCR) is used to extract the text contents from the attachments. Method ::: Proposed Automation System ::: Quickfixes Microsoft LUIS is used for instant quick fixes to provide solutions for prioritized emails. Fig. 2 shows the bot framework LUIS architecture that handles the quick fixes. Quick fixes are trained with the most frequently occurring samples that need quick solutions. LUIS is a model that artificial intelligence applications use to predict the intention of spoken phrases. There are three main phases: the defining phase, the training phase, and the publishing phase. LUIS handles natural language flexibly. An intent defines an action the user wants to perform and is supported by example utterances. Fig. 3 elaborates the intent matching breakdown mechanism. Entities are identified from the sentences, and a suitable entity is selected for generating tickets. If an incoming email is matched to an intent, cat1, cat2, and cat3 are allocated, and tickets are created for the admin-groups. The issue is then addressed using automated messages through a chat bot solution. If the issue is solved, the ticket is closed by the quick fixes. If it is too complicated for the knowledge of the bot, a ticket is created for the admin group so that consultants can assist. The emails identified by static rules and keywords are classified with the highest accuracy. The knowledge base of static rules and keywords is gathered using feature engineering and the insights of the technical coordinator. The remaining emails are sent to an ensemble machine learning model to be classified. Different types of emails are treated in different ways for efficient execution and to reduce errors. Method ::: Proposed Automation System ::: First mail Fig. 4 shows the flow of email categorization for new incoming emails. If an incoming mail is a fresh new mail, it is initially subjected to cleaning, and OCR extracts text from attachments when they are present. Cat1 is assigned according to the knowledge base and the sender details. According to priority, emails are then passed through LUIS; if LUIS fails to solve the issue, the ML model assigns cat2, cat3, and the admin group for ticket creation.
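A schematic sketch of how a matched intent could be mapped to ticket categories and an admin group is given below; the intent names, mappings, and confidence threshold are hypothetical and only illustrate the routing logic, not the production bot.

    INTENT_TABLE = {
        "PasswordReset": {"cat2": "SAP", "cat3": "userID", "admin_group": "SAP-Basis"},
        "AccountUnlock": {"cat2": "SAP", "cat3": "userunlock", "admin_group": "SAP-Basis"},
        "VpnIssue":      {"cat2": "infrastructure", "cat3": "computer", "admin_group": "IT-Infra"},
    }

    def create_ticket(sender_region, intent, confidence, threshold=0.7):
        """Return an automatic ticket when the bot is confident, otherwise defer to a human."""
        if intent not in INTENT_TABLE or confidence < threshold:
            return {"status": "manual", "reason": "no confident intent match"}
        entry = INTENT_TABLE[intent]
        return {"status": "auto", "cat1": sender_region, **entry}

    print(create_ticket("New York", "AccountUnlock", confidence=0.92))
    print(create_ticket("Sydney", "PrinterOnFire", confidence=0.95))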
Method ::: Proposed Automation System ::: Forwarded mail If an incoming mail is a continuation of a previous email, it is handled directly by the LUIS question-and-answer automated self-support, and then follows the normal categorization procedure. Fig. 5 illustrates this flow, and Fig. 6 explains the overall architecture. Static rules are referred to as T-codes. Every categorized mail has to be assigned to the respective consultant, denoted as assignTo. Email classifier using machine learning ::: Preprocessing Preprocessing is necessary to increase the accuracy of a text classification model, because it prevents the model from focusing attention on unwanted sentences and intents. Emails are fed into Microsoft Bot services, which handle the headers and output the processed channel data in JavaScript Object Notation (JSON) format. The channel data summarizes information such as sender, receiver, body, subject, and important metadata. Regular expressions (regex) are used to search strings by defining search patterns; regex rules are created to remove unwanted words from the channel data queries for further processing of the emails. OCR has to be accurate in detecting text in an image. Microsoft OCR is used for text recognition in this automation process; it extracts the recognized characters into a machine-usable character stream. The accuracy of text recognition depends on image quality factors such as blur, small text size, complex backgrounds, shadows, and handwritten text. Since most of the image attachments are computer-generated images and screenshots of error messages, the Microsoft OCR capabilities fit this use case. 260,000 emails are taken from past history. The extracted, preprocessed data from the Microsoft Bot and OCR services is saved as Comma-Separated Values (CSV) files and further processed before being fed to the machine learning model. Unwanted words are removed from the context using the nltk library stopwords and manually collected stopwords, and URLs and punctuation marks are removed. Every word in the fields (title, body, OCR, from, to, CC, Cat1, Cat2, and Cat3) is tokenized, lemmatized, and normalized. Email classifier using machine learning ::: Feature selection Since the sender and receiver vary over time because of new employees' arrivals and old employees' resignations, the To, CC, and From columns are dropped from the input data. Cat1 is known from the email address. Cat2 and Cat3 for a specific Cat1 are described in Table I. Cat2 and Cat3 are merged and defined as the target category for classification. Nearly 180 custom features are created based on plant availability and the region mapping; they encode whether a given plant application and issue are available for the given region, denoted as Unique-Category. Based on the mapping table (an extension of Table I), the custom features verify whether the plant application (cat2) and the technical issue (cat3) belong to the regional plant (cat1). From the analysis of existing samples and the human semantic knowledge of the technical coordinator, it is apparent that the title of the email alone is not enough to predict the category; the attachment and body also play a major role.
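An illustrative sketch of this kind of cleaning step is shown below; it uses the nltk stopword list and lemmatizer mentioned above, but the custom stopwords and regex patterns are placeholders rather than the production pipeline.

    import re
    import nltk
    from nltk.corpus import stopwords
    from nltk.stem import WordNetLemmatizer

    nltk.download("stopwords", quiet=True)   # required corpora, downloaded once
    nltk.download("wordnet", quiet=True)

    lemmatizer = WordNetLemmatizer()
    stop_words = set(stopwords.words("english")) | {"regards", "thanks", "hi"}  # assumed custom stopwords

    def clean_email(text):
        text = re.sub(r"http\S+", " ", text)         # strip URLs
        text = re.sub(r"[^a-zA-Z0-9\s]", " ", text)  # strip punctuation
        tokens = [t.lower() for t in text.split() if t.lower() not in stop_words]
        return [lemmatizer.lemmatize(t) for t in tokens]

    print(clean_email("Hi team, SAP user ID locked again! See https://example.com/ticket. Thanks"))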
Email classifier using machine learning ::: Machine learning approach Even though a labelled data set was provided, the unsupervised K-Nearest Neighbor (KNN) clustering algorithm was initially applied to the data set to observe whether natural clusters exist BIBREF13. Since the number of unique categories of the target field (Unique-Cat) is 77, there are many common words between categories; the clusters were too ambiguous and did not yield promising categories or accuracies. Supervised multi-class, multi-label classification algorithms such as random forest and XGBoost are therefore used as benchmarks. Email classifier using machine learning ::: Machine learning approach ::: Feature selection N-grams are contiguous sequences of n items from a given sample of text. Words are selected from the title, body, and OCR text. N-grams of up to 3 nearby words are extracted with Term Frequency-Inverse Document Frequency (TF-IDF) vectorizing, and the features are then filtered using the chi-squared feature scoring method. Feature hashing is a method for extracting features from text; it allows variable-size feature vectors to be used with standard learning algorithms. 12,000 features are hashed from the text, OCR, and title, and then, using chi-squared statistical analysis, the 200 best features that fit the target Unique-Category are selected. Email classifier using machine learning ::: Machine learning approach ::: Random forest Random forest is a bagging algorithm, an ensemble learning method for classification that operates by constructing a multitude of decision trees at training time and outputting the class that receives the majority vote of the trees BIBREF14. Email classifier using machine learning ::: Machine learning approach ::: XGBoost XGBoost is a decision-tree-based ensemble machine learning algorithm that uses a gradient boosting framework. It is commonly used in classification problems involving unstructured data BIBREF5. Email classifier using machine learning ::: Machine learning approach ::: Hierarchical Model Since the number of target labels is high, achieving higher accuracy is difficult while keeping all the categories under the same feature selection method. Some categories perform well with a lower TF-IDF vectorizing range and higher-order n-gram features, even though they showed lower accuracy in the overall single model. Therefore, hierarchical machine learning models are built: the first classification model classifies 31 categories, with the remaining categories grouped as low-accu and predicted as one category; the next model then classifies the predicted low-accu category into 47 categories. Comparatively, this hierarchical model works well, since different feature selection methods are used for different categories BIBREF5. Email classifier using machine learning ::: Deep learning approach ::: LSTM Long short-term memory is an artificial neural network architecture which outperforms most classical machine learning algorithms; in the deep learning approach, feature selection is done implicitly in the neurons' weight matrices. Bidirectional long short-term memory (LSTM) is used with GloVe word embeddings to predict the categories BIBREF15. Email classifier using machine learning ::: Deep learning approach ::: BERT Even though BERT is the state-of-the-art model, it did not deliver a decisive breakthrough in accuracy on the considered data set for the expected automation BIBREF16. When we consider a commercial model for inference, having a dedicated Kubernetes cluster with high-performance computers is costly, so complex models that demand high computation power are not considered a better solution.
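A condensed sketch of the described feature extraction and benchmarking (word n-grams up to length 3 via TF-IDF, chi-squared selection, and a tree-ensemble classifier) is shown below on a few made-up emails; the parameters and labels are illustrative, not the tuned production values.

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.pipeline import Pipeline

    emails = ["sap user unlock needed for plant login",
              "purchase order stuck in darwin planning module",
              "reset my user id password please",
              "financial report export fails with error"]
    labels = ["SAP-userunlock", "Darwin-planning", "SAP-userID", "infrastructure-financial"]

    clf = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 3))),
        ("select", SelectKBest(chi2, k=20)),
        ("model", RandomForestClassifier(n_estimators=100, random_state=0)),
    ])
    clf.fit(emails, labels)
    print(clf.predict(["please unlock my sap user"]))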
Email classifier using machine learning ::: Threshold Selection In order to classify only high-confidence emails, thresholds are defined for each of the 73 categories. For an incoming email, the probability of assigning each category is calculated, and the best category is selected as the one with the maximum probability out of those 73. Thresholding decisions are made by looking at the overall F-score. For low-accuracy categories (accuracy less than 75 percent), a higher threshold level is set. For middle-accuracy categories (accuracy less than 90 percent), the minimum probability among correctly classified samples is taken. High-accuracy categories (accuracy greater than 90 percent) are left with a threshold of 0 so that all incoming emails in those categories are classified. The thresholding technique acts as a bottleneck that decreases the number of samples classified by the autonomous process, but it increases the accuracy of the classified samples. The proposed thresholds satisfy both the expected manual workload reduction and the accuracy targets. In this paper, random forest, XGBoost, LSTM, and bidirectional LSTM with embeddings are analyzed with different input features. Complex deep learning models such as transformers are not used, in order to keep to a low-cost inference solution. The train and test sets are split 80:20. Precision, recall, and F-score are taken as evaluation metrics. Results and Analysis Automation of quick email replies for technical queries increases the overall efficiency of day-to-day processes by 3 percent. Even though entirely replacing the manual human email assigner with the AI bot is not possible, the automation ML model handles 61 percent of incoming emails correctly, which reduces a substantial amount of human effort per day. For generalization, the email's title, body, and attachments are used to increase accuracy, while sender, receiver, and carbon-copy information is ignored. Table II shows the accuracy percentages for different models with different feature selection methods. An accuracy of 77.3 percent was obtained without any thresholding techniques for the 73-class multi-class, multi-label classification problem. With threshold adjustments for each category, it increased to 85.6 percent. Increasing the threshold values reduces the number of mails classified by the ML model; it is necessary to have the ML model handle a limited number of high-confidence emails in order to ensure the promised accuracy levels. Feature engineering for custom feature selection and hierarchical cascade modelling increase the accuracy of the XGBoost machine learning model to the level of the LSTM models. By cascading model1 (mod1), with 83.2 accuracy for 31 classes, and model2 (mod2), with 71.1 accuracy for 47 low-accuracy classes, the overall hierarchical model exhibited 76.5 accuracy. All accuracy figures refer to F-score. Selected keywords were used as static rules for accurate classification. Since the accuracy is satisfactory for the automation process, the system was deployed. Incorrectly classified mails are handled manually after proper notification by the technical consultant. Fig. 7 shows the emails classified by the ML model, static rules, and the manual process on a daily basis. Incoming emails per day vary between 30 and 120. The figure clearly illustrates the effect of retraining: after 10 April, both the percentage of emails classified per day and the accuracy increased.
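The per-category thresholding described above can be sketched as follows; the class names and threshold values are placeholders chosen to illustrate the three accuracy bands, not the deployed settings.

    import numpy as np

    thresholds = {"SAP-userunlock": 0.0,              # high-accuracy class: accept every prediction
                  "Darwin-planning": 0.55,            # mid-accuracy: minimum observed correct probability
                  "infrastructure-financial": 0.80}   # low-accuracy: demand high confidence

    def route(prob_vector, class_names):
        best = int(np.argmax(prob_vector))
        label = class_names[best]
        if prob_vector[best] >= thresholds.get(label, 0.5):
            return label             # handled automatically by the ML model
        return "MANUAL_REVIEW"       # low confidence: hand over to a support agent

    classes = list(thresholds)
    print(route(np.array([0.2, 0.5, 0.3]), classes))   # MANUAL_REVIEW (0.5 < 0.55)
    print(route(np.array([0.6, 0.2, 0.2]), classes))   # SAP-userunlock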
Fig. 8 shows the average monthly analysis of incoming mails after each retraining. The average number of incoming mails is 1,467 per month, calculated over a 4-month period. Initial training was done in August 2018 with 170,000 samples, and the model was able to classify nearly 50 percent of incoming emails. After the second retraining in January 2019 with 200,000 samples, the model classified 58 percent of incoming mails per month. The third retraining was done in April 2019 with 260,000 samples, after which nearly 61 percent of incoming mails were handled by the ML model and nearly 20 percent were handled by static rules. The automation bot was thus shown to handle 81 percent of the total incoming mails per month, combining ML and static rules, leading to efficient human-machine interaction, instant problem solving, and faster processes. Conclusion Quick fixes from the Microsoft LUIS bot framework provide instant solutions for the raised email queries. Input text features of the emails, such as the title, body, and attachment OCR text, together with the feature-engineered custom features, perform well on the considered real-world email data set. Sure-shot static rules and the hierarchical machine learning model with statistically calculated thresholds raise the accuracy of the overall system to an acceptable level. Bidirectional LSTM with word embedding techniques is finally implemented with thresholding techniques. Less complex machine learning models lead to low-cost virtual machine solutions for serving. The Robotic Process Automation architecture reduces the human effort of the email support desk by 81 percent while maintaining a reasonable accuracy of 85.6 percent.
Feature selection, Random forest, XGBoost, Hierarchical Model
Q: Do they evaluate only on English datasets? Text: Introduction The need for real-time, efficient, and reliable customer service has grown in recent years. Twitter has emerged as a popular medium for customer service dialogue, allowing customers to make inquiries and receive instant live support in the public domain. In order to provide useful information to customers, agents must first understand the requirements of the conversation, and offer customers the appropriate feedback. While this may be feasible at the level of a single conversation for a human agent, automatic analysis of conversations is essential for data-driven approaches towards the design of automated customer support agents and systems. Analyzing the dialogic structure of a conversation in terms of the "dialogue acts" used, such as statements or questions, can give important meta-information about conversation flow and content, and can be used as a first step to developing automated agents. Traditional dialogue act taxonomies used to label turns in a conversation are very generic, in order to allow for broad coverage of the majority of dialogue acts possible in a conversation BIBREF0 , BIBREF1 , BIBREF2 . However, for the purpose of understanding and analyzing customer service conversations, generic taxonomies fall short. Table TABREF1 shows a sample customer service conversation between a human agent and customer on Twitter, where the customer and agent take alternating "turns" to discuss the problem. As shown from the dialogue acts used at each turn, simply knowing that a turn is a Statement or Request, as is possible with generic taxonomies, is not enough information to allow for automated handling or response to a problem. We need more fine-grained dialogue acts, such as Informative Statement, Complaint, or Request for Information to capture the speaker's intent, and act accordingly. Likewise, turns often include multiple overlapping dialogue acts, such that a multi-label approach to classification is often more informative than a single-label approach. Dialogue act prediction can be used to guide automatic response generation, and to develop diagnostic tools for the fine-tuning of automatic agents. For example, in Table TABREF1 , the customer's first turn (Turn 1) is categorized as a Complaint, Negative Expressive Statement, and Sarcasm, and the agent's response (Turn 2) is tagged as a Request for Information, Yes-No Question, and Apology. Prediction of these dialogue acts in a real-time setting can be leveraged to generate appropriate automated agent responses to similar situations. Additionally, important patterns can emerge from analysis of the fine-grained acts in a dialogue in a post-prediction setting. For example, if an agent does not follow-up with certain actions in response to a customer's question dialogue act, this could be found to be a violation of a best practice pattern. By analyzing large numbers of dialogue act sequences correlated with specific outcomes, various rules can be derived, i.e. "Continuing to request information late in a conversation often leads to customer dissatisfaction." This can then be codified into a best practice pattern rules for automated systems, such as "A request for information act should be issued early in a conversation, followed by an answer, informative statement, or apology towards the end of the conversation." 
In this work, we are motivated to predict the dialogue acts in conversations with the intent of identifying problem spots that can be addressed in real-time, and to allow for post-conversation analysis to derive rules about conversation outcomes indicating successful/unsuccessful interactions, namely, customer satisfaction, customer frustration, and problem resolution. We focus on analysis of the dialogue acts used in customer service conversations as a first step to fully automating the interaction. We address various different challenges: dialogue act annotated data is not available for customer service on Twitter, the task of dialogue act annotation is subjective, existing taxonomies do not capture the fine-grained information we believe is valuable to our task, and tweets, although concise in nature, often consist of overlapping dialogue acts to characterize their full intent. The novelty of our work comes from the development of our fine-grained dialogue act taxonomy and multi-label approach for act prediction, as well as our analysis of the customer service domain on Twitter. Our goal is to offer useful analytics to improve outcome-oriented conversational systems. We first expand upon previous work and generic dialogue act taxonomies, developing a fine-grained set of dialogue acts for customer service, and conducting a systematic user study to identify these acts in a dataset of 800 conversations from four Twitter customer service accounts (i.e. four different companies in the telecommunication, electronics, and insurance industries). We then aim to understand the conversation flow between customers and agents using our taxonomy, so we develop a real-time sequential SVM-HMM model to predict our fine-grained dialogue acts while a conversation is in progress, using a novel multi-label scheme to classify each turn. Finally, using our dialogue act predictions, we classify conversations based on the outcomes of customer satisfaction, frustration, and overall problem resolution, then provide actionable guidelines for the development of automated customer service systems and intelligent agents aimed at desired customer outcomes BIBREF3 , BIBREF4 . We begin with a discussion of related work, followed by an overview of our methodology. Next, we describe our conversation modeling framework, and explain our outcome analysis experiments, to show how we derive useful patterns for designing automated customer service agents. Finally, we present conclusions and directions for future work. Related Work Developing computational speech and dialogue act models has long been a topic of interest BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , with researchers from many different backgrounds studying human conversations and developing theories around conversational analysis and interpretation on intent. Modern intelligent conversational BIBREF3 , BIBREF4 and dialogue systems draw principles from many disciplines, including philosophy, linguistics, computer science, and sociology. In this section, we describe relevant previous work on speech and dialogue act modeling, general conversation modeling on Twitter, and speech and dialogue act modeling of customer service in other data sources. Previous work has explored speech act modeling in different domains (as a predecessor to dialogue act modeling). Zhang et al. present work on recognition of speech acts on Twitter, following up with a study on scalable speech act recognition given the difficulty of obtaining labeled training data BIBREF9 . 
They use a simple taxonomy of four main speech acts (Statement, Question, Suggestion, Comment, and a Miscellaneous category). More recently, Vosoughi et al. develop BIBREF10 a speech act classifier for Twitter, using a modification of the taxonomy defined by Searle in 1975, including six acts they observe to commonly occur on Twitter: Assertion, Recommendation Expression, Question, Request, again plus a Miscellaneous category. They describe good features for speech act classification and the application of such a system to detect stories on social media BIBREF11 . In this work, we are interested in the dialogic characteristics of Twitter conversations, rather than speech acts in stand-alone tweets. Different dialogue act taxonomies have been developed to characterize conversational acts. Core and Allen present the Dialogue Act Marking in Several Layers (DAMSL), a standard for discourse annotation that was developed in 1997 BIBREF0 . The taxonomy contains a total of 220 tags, divided into four main categories: communicative status, information level, forward-looking function, and backward-looking function. Jurafsky, Shriberg, and Biasca develop a less fine-grained taxonomy of 42 tags based on DAMSL BIBREF1 . Stolcke et al. employ a similar set for general conversation BIBREF2 , citing that "content- and task-related distinctions will always play an important role in effective DA [Dialogue Act] labeling." Many researchers have tackled the task of developing different speech and dialogue act taxonomies and coding schemes BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . For the purposes of our own research, we require a set of dialogue acts that is more closely representative of customer service domain interactions - thus we expand upon previously defined taxonomies and develop a more fine-grained set. Modeling general conversation on Twitter has also been a topic of interest in previous work. Honeycutt and Herring study conversation and collaboration on Twitter using individual tweets containing "@" mentions BIBREF16 . Ritter et al. explore unsupervised modeling of Twitter conversations, using clustering methods on a corpus of 1.3 million Twitter conversations to define a model of transitional flow between in a general Twitter dialogue BIBREF17 . While these approaches are relevant to understanding the nature of interactions on Twitter, we find that the customer service domain presents its own interesting characteristics that are worth exploring further. The most related previous work has explored speech and dialogue act modeling in customer service, however, no previous work has focused on Twitter as a data source. In 2005, Ivanovic uses an abridged set of 12 course-grained dialogue acts (detailed in the Taxonomy section) to describe interactions between customers and agents in instant messaging chats BIBREF18 , BIBREF19 , leading to a proposal on response suggestion using the proposed dialogue acts BIBREF20 . Follow-up work using the taxonomy selected by Ivanovic comes from Kim et al., where they focus on classifying dialogue acts in both one-on-one and multi-party live instant messaging chats BIBREF21 , BIBREF22 . These works are similar to ours in the nature of the problem addressed, but we use a much more fine-grained taxonomy to define the interactions possible in the customer service domain, and focus on Twitter conversations, which are unique in their brevity and the nature of the public interactions. The most similar work to our own is that of Herzig et al. 
on classifying emotions in customer support dialogues on Twitter BIBREF23 . They explore how agent responses should be tailored to the detected emotional response in customers, in order to improve the quality of service agents can provide. Rather than focusing on emotional response, we seek to model the dialogic structure and intents of the speakers using dialogue acts, with emotion included as features in our model, to characterize the emotional intent within each act. Methodology The underlying goal of this work is to show how a well-defined taxonomy of dialogue acts can be used to summarize semantic information in real-time about the flow of a conversation to derive meaningful insights into the success/failure of the interaction, and then to develop actionable rules to be used in automating customer service interactions. We focus on the customer service domain on Twitter, which has not previously been explored in the context of dialogue act classification. In this new domain, we can provide meaningful recommendations about good communicative practices, based on real data. Our methodology pipeline is shown in Figure FIGREF2 . Taxonomy Definition As described in the related work, the taxonomy of 12 acts to classify dialogue acts in an instant-messaging scenario, developed by Ivanovic in 2005, has been used by previous work when approaching the task of dialogue act classification for customer service BIBREF18 , BIBREF20 , BIBREF19 , BIBREF21 , BIBREF22 . The dataset used consisted of eight conversations from chat logs in the MSN Shopping Service (around 550 turns spanning around 4,500 words) BIBREF19 . The conversations were gathered by asking five volunteers to use the platform to inquire for help regarding various hypothetical situations (i.e. buying an item for someone) BIBREF19 . The process of selection of tags to develop the taxonomy, beginning with the 42 tags from the DAMSL set BIBREF0 , involved removing tags inappropriate for written text, and collapsing sets of tags into a more coarse-grained label BIBREF18 . The final taxonomy consists of the following 12 dialogue acts (sorted by frequency in the dataset): Statement (36%), Thanking (14.7%), Yes-No Question (13.9%), Response-Acknowledgement (7.2%), Request (5.9%), Open-Question (5.3%), Yes-Answer (5.1%), Conventional-Closing (2.9%), No-Answer (2.5%), Conventional-Opening (2.3%), Expressive (2.3%) and Downplayer (1.9%). For the purposes of our own research, focused on customer service on Twitter, we found that the course-grained nature of the taxonomy presented a natural shortcoming in terms of what information could be learned by performing classification at this level. We observe that while having a smaller set of dialogue acts may be helpful for achieving good agreement between annotators (Ivanovic cites kappas of 0.87 between the three expert annotators using this tag set on his data BIBREF18 ), it is unable to offer deeper semantic insight into the specific intent behind each act for many of the categories. For example, the Statement act, which comprises the largest percentage (36% of turns), is an extremely broad category that fails to provide useful information from an analytical perspective. Likewise, the Request category also does not specify any intent behind the act, and leaves much room for improvement. 
For this reason, and motivated by previous work seeking to develop dialogue act taxonomies appropriate for different domains BIBREF19 , BIBREF21 , we convert the list of dialogue acts presented by the literature into a hierarchical taxonomy, shown in Figure FIGREF6 . We first organize the taxonomy into six high-level dialogue acts: Greeting, Statement, Request, Question, Answer, and Social Act. Then, we update the taxonomy using two main steps: restructuring and adding additional fine-grained acts. We base our changes upon the taxonomy used by Ivanovic and Kim et al. in their work on instant messaging chat dialogues BIBREF19 , BIBREF21 , but also on general dialogue acts observed in the customer service domain, including complaints and suggestions. Our taxonomy does not make any specific restrictions on which party in the dialogue may perform each act, but we do observe that some acts are far more frequent (and sometimes non-existent) in usage, depending on whether the customer or agent is the speaker (for example, the Statement Complaint category never shows up in Agent turns). In order to account for gaps in available act selections for annotators, we include an Other act in the broadest categories. While our taxonomy fills in many gaps from previous work in our domain, we do not claim to have handled coverage of all possible acts in this domain. Our taxonomy allows us to more closely specify the intent and motivation behind each turn, and ultimately how to address different situations. Data Collection Given our taxonomy of fine-grained dialogue acts that expands upon previous work, we set out to gather annotations for Twitter customer service conversations. For our data collection phase, we begin with conversations from the Twitter customer service pages of four different companies, from the electronics, telecommunications, and insurance industries. We perform several forms of pre-processing to the conversations. We filter out conversations if they contain more than one customer or agent speaker, do not have alternating customer/agent speaking turns (single turn per speaker), have less than 5 or more than 10 turns, have less than 70 words in total, and if any turn in the conversation ends in an ellipses followed by a link (indicating that the turn has been cut off due to length, and spans another tweet). Additionally, we remove any references to the company names (substituting with "Agent"), any references to customer usernames (substituting with "Customer"), and replacing and links or image references with INLINEFORM0 link INLINEFORM1 and INLINEFORM2 img INLINEFORM3 tokens. Using these filters as pre-processing methods, we end up with a set of 800 conversations, spanning 5,327 turns. We conduct our annotation study on Amazon Mechanical Turk, presenting Turkers with Human Intelligence Tasks (henceforth, HITs) consisting of a single conversation between a customer and an agent. In each HIT, we present Turkers with a definition of each dialogue act, as well as a sample annotated dialogue for reference. For each turn in the conversation, we allow Turkers to select as many labels from our taxonomy as required to fully characterize the intent of the turn. Additionally, annotators are asked three questions at the end of each conversation HIT, to which they could respond that they agreed, disagreed, or could not tell: We ask 5 Turkers to annotate each conversation HIT, and pay $0.20 per HIT. 
We find the list of "majority dialogue acts" for each tweet by finding any acts that have received majority-vote labels (at least 3 out of 5 judgements). It is important to note at this point that we make an important choice as to how we will handle dialogue act tagging for each turn. We note that each turn may contain more than one dialogue act vital to carry its full meaning. Thus, we choose not to carry out a specific segmentation task on our tweets, contrary to previous work BIBREF24 , BIBREF25 , opting to characterize each tweet as a single unit composed of different, often overlapping, dialogue acts. Table TABREF16 shows examples of tweets that receive majority vote on more than one label, where the act boundaries are overlapping and not necessarily distinguishable. It is clear that the lines differentiating these acts are not very well defined, and that segmentation would not necessarily aid in clearly separating out each intent. For these reasons, and due to the overall brevity of tweets in general, we choose to avoid the overhead of requiring annotators to provide segment boundaries, and instead ask for all appropriate dialogue acts. Annotation Results Figure FIGREF17 shows the distribution of the number of times each dialogue act in our taxonomy is selected a majority act by the annotators (recall that each turn is annotated by 5 annotators). From the distribution, we see that the largest class is Statement Info which is part of the majority vote list for 2,152 of the 5,327 total turns, followed by Request Info, which appears in 1,088 of the total turns. Although Statement Informative comprises the largest set of majority labels in the data (as did Statement in Ivanovic's distribution), we do observe that other fine-grained categories of Statement occur in the most frequent labels as well, including Statement Complaint, Statement Expressive Negative, and Statement Suggestion – giving more useful information as to what form of statement is most frequently occurring. We find that 147 tweets receive no majority label (i.e. no single act received 3 or more votes out of 5). At the tail of the distribution, we see less frequent acts, such as Statement Sarcasm, Social Act Downplayer, Statement Promise, Greeting Closing, and Request Other. It is also interesting to note that both opening and closing greetings occur infrequently in the data – which is understandable given the nature of Twitter conversation, where formal greeting is not generally required. Table TABREF19 shows a more detailed summary of the distribution of our top 12 dialogue acts according to the annotation experiments, as presented by Ivanovic BIBREF18 . Since each turn has an overlapping set of labels, the column % of Turns (5,327) represents what fraction of the total 5,327 turns contain that dialogue act label (these values do not sum to 1, since there is overlap). To give a better sense of the percentage appearance of each dialogue act class in terms of the total number of annotated labels given, we also present column % of Annotations (10,343) (these values are percentages). We measure agreement in our annotations using a few different techniques. Since each item in our annotation experiments allows for multiple labels, we first design an agreement measure that accounts for how frequently each annotator selects the acts that agree with the majority-selected labels for the turns they annotated. 
To calculate this for each annotator, we find the number of majority-selected acts for each conversation they annotated (call this MAJ), and the number of subset those acts that they selected (call this SUBS), and find the ratio (SUBS/MAJ). We use this ratio to systematically fine-tune our set of annotators by running our annotation in four batches, restricting our pool of annotators to those that have above a 0.60 ratio of agreement with the majority from the previous batch, as a sort of quality assurance test. We also measure Fleiss' Kappa BIBREF26 agreement between annotators in two ways: first by normalizing our annotation results into binary-valued items indicating annotators' votes for each label contain within each turn. We find an average Fleiss- INLINEFORM0 for the full dataset, including all turn-and-label items, representing moderate agreement on the 24-label problem. We also calculate the Fleiss- INLINEFORM0 values for each label, and use the categories defined by Landis and Koch to bin our speech acts based on agreement BIBREF27 . As shown in Table TABREF18 , we find that the per-label agreement varies from "almost perfect" agreement of INLINEFORM1 for lexically defined categories such as Apology and Thanks, with only slight agreement of INLINEFORM2 for less clearly-defined categories, such as Statement (Other), Answer Response Acknowledgement and Request (Other). For the conversation-level questions, we calculate the agreement across the "Agree" label for all annotators, finding an average Fleiss- INLINEFORM3 , with question-level results of INLINEFORM4 for customer satisfaction, INLINEFORM5 for problem resolution, and INLINEFORM6 for customer frustration. These results suggest room for improvement for further development of the taxonomy, to address problem areas for annotators and remedy areas of lower agreement. Motivation for Multi-Label Classification We test our hypothesis that tweet turns are often characterized by more than one distinct dialogue act label by measuring the percentage overlap between frequent pairs of labels. Of the 5,327 turns annotated, across the 800 conversations, we find that 3,593 of those turns (67.4%) contained more than one majority-act label. Table TABREF22 shows the distribution percentage of the most frequent pairs. For example, we observe that answering with informative statements is the most frequent pair, followed by complaints coupled with negative sentiment or informative statements. We also observe that requests are usually formed as questions, but also co-occur frequently with apologies. This experiment validates our intuition that the majority of turns do contain more than a single label, and motivates our use of a multi-label classification method for characterizing each turn in the conversation modeling experiments we present in the next section. Conversation Modeling In this section, we describe the setup and results of our conversational modeling experiments on the data we collected using our fine-grained taxonomy of customer service dialogue acts. We begin with an overview of the features and classes used, followed by our experimental setup and results for each experiment performed. 
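Before moving on to the features, the majority-vote aggregation (at least 3 of 5 votes) and the per-annotator SUBS/MAJ agreement ratio described in the annotation-results discussion above can be expressed as a short sketch. The data structures are assumptions made for illustration; the paper does not publish its aggregation code.

```python
from collections import Counter

def majority_acts(turn_annotations, min_votes=3):
    """turn_annotations: list of label-sets, one per annotator, for a single turn.
    Returns the dialogue acts selected by at least min_votes annotators."""
    votes = Counter(act for labels in turn_annotations for act in set(labels))
    return {act for act, n in votes.items() if n >= min_votes}


def annotator_agreement_ratio(annotator_labels, all_annotations, min_votes=3):
    """SUBS/MAJ ratio for one annotator over the turns they annotated.
    annotator_labels: {turn_id: set of acts this annotator chose}
    all_annotations:  {turn_id: list of all annotators' label-sets for that turn}"""
    maj = subs = 0
    for turn_id, label_sets in all_annotations.items():
        majority = majority_acts(label_sets, min_votes)
        maj += len(majority)                                   # majority-selected acts (MAJ)
        subs += len(majority & annotator_labels.get(turn_id, set()))  # acts also chosen (SUBS)
    return subs / maj if maj else 0.0
```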
Features The following list describes the set of features used for our dialogue act classification tasks: Word/Punctuation: binary bag-of-word unigrams, binary existence of a question mark, binary existence of an exclamation mark in a turn Temporal: response time of a turn (time in seconds elapsed between the posting time of the previous turn and that of the current turn) Second-Person Reference: existence of an explicit second-person reference in the turn (you, your, you're) Emotion: count of words in each of the 8 emotion classes from the NRC emotion lexicon BIBREF28 (anger, anticipation, disgust, fear, joy, negative, positive, sadness, surprise, and trust) Dialogue: lexical indicators in the turn: opening greetings (hi, hello, greetings, etc), closing greetings (bye, goodbye), yes-no questions (turns with questions starting with do, did, can, could, etc), wh- questions (turns with questions starting with who, what, where, etc), thanking (thank*), apology (sorry, apolog*), yes-answer, and no-answer Classes Table TABREF30 shows the division of classes we use for each of our experiments. We select our classes using the distribution of annotations we observe in our data collection phase (see Table TABREF19 ), selecting the top 12 classes as candidates. While iteratively selecting the most frequently-occurring classes helps to ensure that classes with the most data are represented in our experiments, it also introduces the problem of including classes that are very well-defined lexically, and may not require learning for classification, such as Social Act Apology and Social Act Thanking in the first 10-Class set. For this reason, we call this set 10-Class (Easy), and also experiment using a 10-Class (Hard) set, where we add in the next two less-defined and more semantically rich labels, such as Statement Offer and Question Open. When using each set of classes, a turn is either classified as one of the classes in the set, or it is classified as "other" (i.e. any of the other classes). We discuss our experiments in more detail and comment on performance differences in the experiment section. Experiments Following previous work on conversation modeling BIBREF23 , we use a sequential SVM-HMM (using the INLINEFORM0 toolkit BIBREF29 ) for our conversation modeling experiments. We hypothesize that a sequential model is most suited to our dialogic data, and that we will be able to concisely capture conversational attributes such as the order in which dialogue acts often occur (i.e. some Answer act after Question a question act, or Apology acts after Complaints). We note that with default settings for a sequence of length INLINEFORM0 , an SVM-HMM model will be able to refine its answers for any turn INLINEFORM1 as information becomes available for turns INLINEFORM2 . However, we opt to design our classifier under a real-time setting, where turn-by-turn classification is required without future knowledge or adaptation of prediction at any given stage. In our setup, turns are predicted in a real-time setting to fairly model conversation available to an intelligent agent in a conversational system. At any point, a turn INLINEFORM3 is predicted using information from turns INLINEFORM4 , and where a prediction is not changed when new information is available. We test our hypothesis by comparing our real-time sequential SVM-HMM model to non-sequential baselines from the NLTK BIBREF30 and Scikit-Learn BIBREF31 toolkits. 
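A hedged sketch of the turn-level feature extraction in the list above is given below. The lexical trigger sets and the NRC-lexicon data structure are illustrative assumptions; only the feature families follow the description.

```python
import re

SECOND_PERSON = {"you", "your", "you're", "youre"}
OPENINGS = {"hi", "hello", "greetings"}
CLOSINGS = {"bye", "goodbye"}
YES_NO_STARTS = ("do ", "did ", "can ", "could ", "would ", "will ", "is ", "are ")
WH_STARTS = ("who ", "what ", "where ", "when ", "why ", "how ")

def turn_features(text, response_time_s, nrc_lexicon, vocabulary):
    """Build a feature dict for one turn.
    nrc_lexicon: dict mapping token -> set of emotion tags (assumed pre-loaded)
    vocabulary:  set of unigrams retained from the training data."""
    tokens = re.findall(r"[a-z']+", text.lower())
    feats = {}
    # word / punctuation features
    for tok in tokens:
        if tok in vocabulary:
            feats[f"w={tok}"] = 1
    feats["has_qmark"] = int("?" in text)
    feats["has_exclam"] = int("!" in text)
    # temporal feature: seconds elapsed since the previous turn
    feats["response_time"] = response_time_s
    # explicit second-person reference
    feats["second_person"] = int(any(t in SECOND_PERSON for t in tokens))
    # emotion lexicon counts
    for tok in tokens:
        for emo in nrc_lexicon.get(tok, ()):
            feats[f"emo={emo}"] = feats.get(f"emo={emo}", 0) + 1
    # dialogue lexical indicators
    lowered = text.lower().lstrip()
    feats["greet_open"] = int(any(t in OPENINGS for t in tokens))
    feats["greet_close"] = int(any(t in CLOSINGS for t in tokens))
    feats["thanking"] = int(any(t.startswith("thank") for t in tokens))
    feats["apology"] = int(any(t.startswith("apolog") or t == "sorry" for t in tokens))
    feats["yes_no_q"] = int(lowered.startswith(YES_NO_STARTS) and "?" in text)
    feats["wh_q"] = int(lowered.startswith(WH_STARTS) and "?" in text)
    return feats
```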
We use our selected feature set (described above) to be generic enough to apply to both our sequential and non-sequential models, in order to allow us to fairly compare performance. We shuffle and divide our data into 70% for training and development (560 conversations, using 10-fold cross-validation for parameter tuning), and hold out 30% of the data (240 conversations) for test. Motivated by the prevalent overlap of dialogue acts, we conduct our learning experiments using a multi-label setup. For each of the sets of classes, we conduct binary classification task for each label: for each INLINEFORM0 -class classification task, a turn is labeled as either belonging to the current label, or not (i.e. "other"). In this setup, each turn is assigned a binary value for each label (i.e. for the 6-class experiment, each turn receives a value of 0/1 for each indicating whether the classifier predicts it to be relevant to the each of the 6 labels). Thus, for each INLINEFORM1 -class experiment, we end up with INLINEFORM2 binary labels, for example, whether the turn is a Statement Informative or Other, Request Information or Other, etc. We aggregate the INLINEFORM3 binary predictions for each turn, then compare the resultant prediction matrix for all turns to our majority-vote ground-truth labels, where at least 3 out of 5 annotators have selected a label to be true for a given turn. The difficulty of the task increases as the number of classes INLINEFORM4 increases, as there are more classifications done for each turn (i.e., for the 6-class problem, there are 6 classification tasks per turn, while for the 8-class problem, there are 8, etc). Due to the inherent imbalance of label-distribution in the data (shown in Figure FIGREF17 ), we use weighted F-macro to calculate our final scores for each feature set (which finds the average of the metrics for each label, weighted by the number of true instances for that label) BIBREF31 . Our first experiment sets out to compare the use of a non-sequential classification algorithm versus a sequential model for dialogue act classification on our dataset. We experiment with the default Naive Bayes (NB) and Linear SVC algorithms from Scikit-Learn BIBREF31 , comparing with our sequential SVM-HMM model. We test each classifier on each of our four class sets, reporting weighted F-macro for each experiment. Figure FIGREF33 shows the results of the experiments. From this experiment, we observe that our sequential SVM-HMM outperforms each non-sequential baseline, for each of the four class sets. We select the sequential SVM-HMM model for our preferred model for subsequent experiments. We observe that while performance may be expected to drop as the number of classes increases, we instead get a spike in performance for the 10-Class (Easy) setting. This increase occurs due to the addition of the lexically well-defined classes of Statement Apology and Statement Thanks, which are much simpler for our model to predict. Their addition results in a performance boost, comparable to that of the simpler 6-Class problem. When we remove the two well-defined classes and add in the next two broader dialogue act classes of Statement Offer and Question Open (as defined by the 10-Class (Hard) set), we observe a drop in performance, and an overall result comparable to our 8-Class problem. This result is still strong, since the number of classes has increased, but the overall performance does not drop. 
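The per-label binary setup and the weighted F-macro evaluation described above can be sketched with scikit-learn as follows. This covers only the non-sequential baseline side (the SVM-HMM is an external toolkit and is not reproduced here), and the data-loading conventions are assumed.

```python
import numpy as np
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

def run_binary_per_label(train_turns, y_train, test_turns, y_test):
    """train_turns/test_turns: lists of per-turn feature dicts.
    y_train/y_test: binary NumPy matrices of shape (n_turns, n_labels) built from
    the majority-vote annotations (a label is 1 if at least 3 of 5 annotators chose it)."""
    vec = DictVectorizer()
    X_train = vec.fit_transform(train_turns)
    X_test = vec.transform(test_turns)

    n_labels = y_train.shape[1]
    preds = np.zeros_like(y_test)
    for k in range(n_labels):                     # one binary "label vs. other" task per act
        clf = LinearSVC().fit(X_train, y_train[:, k])
        preds[:, k] = clf.predict(X_test)

    # weighted F-macro over all turn-label decisions
    return f1_score(y_test, preds, average="weighted")
```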
We also observe that while NB and LinearSVC have the same performance trend for the smaller number of classes, Linear SVC rapidly improves in performance as the number of classes increases, following the same trend as SVM-HMM. The smallest margin of difference between SVM-HMM and Linear SVC also occurs at the 10-Class (Easy) setting, where the addition of highly-lexical classes makes for a more differentiable set of turns. Our next experiment tests the differences in performance when training and testing our real-time sequential SVM-HMM model using only a single type of speaker's turns (i.e. only Customer or only Agent turns). Figure FIGREF35 shows the relative performance of using only speaker-specific turns, versus our standard results using all turns. We observe that using Customer-only turns gives us lower prediction performance than using both speakers' turns, but that Agent-only turns actually gives us higher performance. Since agents are put through training on how to interact with customers (often using templates), agent behavior is significantly more predictable than customer behavior, and it is easier to predict agent turns even without utilizing any customer turn information (which is more varied, and thus more difficult to predict). We again observe a boost in performance at out 10-Class (Easy) set, due to the inclusion of lexically well-defined classes. Notably, we achieve best performance for the 10-Class (Easy) set using only agent turns, where the use of the Apology and Thanks classes are both prevalent and predictable. In our final experiment, we explore the changes in performance we get by splitting the training and test data based on company domain. We compare this performance with our standard setup for SVM-HMM from our baseline experiments (Figure FIGREF33 ), where our train-test data splitting is company-independent (i.e. all conversations are randomized, and no information is used to differentiate different companies or domains). To recap, our data consists of conversations from four companies from three different industrial domains (one from the telecommunication domain, two from the electronics domain, and one from the insurance domain). We create four different versions of our 6-class real-time sequential SVM-HMM, where we train on the data from three of the companies, and test on the remaining company. We present our findings in Table TABREF37 . From the table, we see that our real-time model achieves best prediction results when we use one of the electronics companies in the test fold, even though the number of training samples is smallest in these cases. On the other hand, when we assign insurance company in the test fold, our model's prediction performance is comparatively low. Upon further investigation, we find that customer-agent conversations in the telecommunication and electronics domains are more similar than those in the insurance domain. Our findings show that our model is robust to different domains as our test set size increases, and that our more generic, company-independent experiment gives us better performance than any domain-specific experiments. Conversation Outcome Analysis Given our observation that Agent turns are more predictable, and that we achieve best performance in a company-independent setting, we question whether the training that agents receive is actually reliable in terms of resulting in overall "satisfied customers", regardless of company domain. 
Ultimately, our goal is to discover whether we can use the insight we derive from our predicted dialogue acts to better inform conversational systems aimed at offering customer support. Our next set of experiments aims to show the utility of our real-time dialogue act classification as a method for summarizing semantic intent in a conversation into rules that can be used to guide automated systems. Classifying Problem Outcomes We conduct three supervised classification experiments to better understand full conversation outcome, using the default Linear SVC classifier in Scikit-Learn BIBREF31 (which gave us our best baseline for the dialogue classification task). Each classification experiments centers around one of three problem outcomes: customer satisfaction, problem resolution, and customer frustration. For each outcome, we remove any conversation that did not receive majority consensus for a label, or received majority vote of "can't tell". Our final conversation sets consist of 216 satisfied and 500 unsatisfied customer conversations, 271 resolved and 425 unresolved problem conversations, and 534 frustrated and 229 not frustrated customer conversations. We retain the inherent imbalance in the data to match the natural distribution observed. The clear excess of consensus of responses that indicate negative outcomes further motivates us to understand what sorts of dialogic patterns results in such outcomes. We run the experiment for each conversation outcome using 10-fold cross-validation, under each of our four class settings: 6-Class, 8-Class, 10-Class (Easy), and 10-Class (Hard). The first feature set we use is Best_Features (from the original dialogue act classification experiments), which we run as a baseline. Our second feature set is our Dialogue_Acts predictions for each turn – we choose the most probable dialogue act prediction for each turn using our dialogue act classification framework to avoid sparsity. In this way, for each class size INLINEFORM0 , each conversation is converted into a vector of INLINEFORM1 (up to 10) features that describe the most strongly associated dialogue act from the dialogue act classification experiments for each turn, and the corresponding turn number. For example, a conversation feature vector may look as follows: INLINEFORM2 Thus, our classifier can then learn patterns based on these features (for example, that specific acts appearing at the end of a conversation are strong indicators of customer satisfaction) that allow us to derive rules about successful/unsuccessful interactions. Figure FIGREF38 shows the results of our binary classification experiments for each outcome. For each experiment, the Best_Features set is constant over each class size, while the Dialogue_Act features are affected by class size (since the predicted act for each turn will change based on the set of acts available for that class size). Our first observation is that we achieve high performance on the binary classification task, reaching F-measures of 0.70, 0.65, and 0.83 for the satisfaction, resolution, and frustration outcomes, respectively. Also, we observe that the performance of our predicted dialogue act features is comparable to that of the much larger set of best features for each label (almost identical in the case of frustration). In more detail, we note interesting differences comparing the performance of the small set of dialogue act features that "summarize" the large, sparse set of best features for each label, as a form of data-driven feature selection. 
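A minimal sketch of how each conversation can be reduced to the position-tagged dialogue-act features used in these outcome experiments, evaluated with a 10-fold Linear SVC, is shown below; the feature naming and helper functions are assumptions for illustration rather than the authors' code.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def conversation_to_act_features(predicted_acts):
    """predicted_acts: the single most probable dialogue act per turn, in order,
    e.g. ["Complaint", "Apology", "RequestInfo", ...].
    Encodes which act occurred at which turn position."""
    return {f"turn{idx}={act}": 1 for idx, act in enumerate(predicted_acts, start=1)}


def outcome_score(conversations_acts, outcomes):
    """conversations_acts: list of per-conversation act sequences.
    outcomes: binary outcome labels (e.g. 1 = satisfied, 0 = unsatisfied)."""
    vec = DictVectorizer()
    X = vec.fit_transform([conversation_to_act_features(seq) for seq in conversations_acts])
    clf = LinearSVC()
    scores = cross_val_score(clf, X, outcomes, cv=10, scoring="f1_weighted")
    return scores.mean()
```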
For satisfaction, we see that the best feature set outperforms the dialogue acts for each class set except for 10-Class (Easy), where the dialogue acts are more effective. The existence of the very lexically well-defined Social Act Thanking and Social Act Apology classes makes the dialogue acts ideal for summarization. In the case of problem resolution, we see that the performance of the dialogue acts approaches that of the best feature set as the number of classes increases, showing that the dialogue features are able to express the full intent of the turns well, even at more difficult class settings. Finally, for the frustration experiment, we observe negligible different between the best features and dialogue act features, and very high classification results overall. Actionable Rules for Automated Customer Support While these experiments highlight how we can use dialogue act predictions as a means to greatly reduce feature sparsity and predict conversation outcome, our main aim is to gain good insight from the use of the dialogue acts to inform and automate customer service interactions. We conduct deeper analysis by taking a closer look at the most informative dialogue act features in each experiment. Table TABREF44 shows the most informative features and weights for each of our three conversation outcomes. To help guide our analysis, we divide the features into positions based on where they occur in the conversation: start (turns 1-3), middle (turns 4-6), and end (turns 7-10). Desirable outcomes (customers that are satisfied/not frustrated and resolved problems) are shown at the top rows of the table, and undesirable outcomes (unsatisfied/frustrated customers and unresolved problems) are shown at the bottom rows. Our analysis helps zone in on how the use of certain dialogue acts may be likely to result in different outcomes. The weights we observe vary in the amount of insight provided: for example, offering extra help at the end of a conversation, or thanking the customer yields more satisfied customers, and more resolved problems (with ratios of above 6:1). However, some outcomes are much more subtle: for example, asking yes-no questions early-on in a conversation is highly associated with problem resolution (ratio 3:1), but asking them at the end of a conversation has as similarly strong association with unsatisfied customers. Giving elaborate answers that are not a simple affirmative, negative, or response acknowledgement (i.e. Answer (Other)) towards the middle of a conversation leads to satisfied customers that are not frustrated. Likewise, requesting information towards the end of a conversation (implying that more information is still necessary at the termination of the dialogue) leads to unsatisfied and unresolved customers, with ratios of at least 4:1. By using the feature weights we derive from using our predicted dialogue acts in our outcome classification experiments, we can thus derive data-driven patterns that offer useful insight into good/bad practices. Our goal is to then use these rules as guidelines, serving as a basis for automated response planning in the customer service domain. For example, these rules can be used to recommend certain dialogue act responses given the position in a conversation, and based previous turns. This information, derived from correlation with conversation outcomes, gives a valuable addition to conversational flow for intelligent agents, and is more useful than canned responses. 
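Given a Linear SVC fitted on those conversation-level act features and the corresponding DictVectorizer (for example, from the sketch above), the kind of weight inspection used to derive such rules can be sketched as follows; this assumes scikit-learn 1.0 or later for get_feature_names_out and is an illustration rather than the authors' analysis code.

```python
import numpy as np

def top_outcome_rules(clf, vectorizer, k=10):
    """Rank position-tagged dialogue-act features by the weight a fitted LinearSVC
    assigns them; positive weights push toward the positive outcome
    (e.g. 'satisfied'), negative weights toward the opposite outcome."""
    names = np.array(vectorizer.get_feature_names_out())
    weights = clf.coef_.ravel()
    order = np.argsort(weights)
    return {
        "toward_positive_outcome": list(zip(names[order[-k:]][::-1], weights[order[-k:]][::-1])),
        "toward_negative_outcome": list(zip(names[order[:k]], weights[order[:k]])),
    }
```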
Conclusions In this paper, we explore how we can analyze dialogic trends in customer service conversations on Twitter to offer insight into good/bad practices with respect to conversation outcomes. We design a novel taxonomy of fine-grained dialogue acts, tailored for the customer service domain, and gather annotations for 800 Twitter conversations. We show that dialogue acts are often semantically overlapping, and conduct multi-label supervised learning experiments to predict multiple appropriate dialogue act labels for each turn in real-time, under varying class sizes. We show that our sequential SVM-HMM model outperforms all non-sequential baselines, and plan to continue our exploration of other sequential models including Conditional Random Fields (CRF) BIBREF32 and Long Short-Term Memory (LSTM) BIBREF33 , as well as of dialogue modeling using different Markov Decision Process (MDP) BIBREF34 models such as the Partially-Observed MDP (POMDP) BIBREF35 . We establish that agents are more predictable than customers in terms of the dialogue acts they utilize, and set out to understand whether the conversation strategies agents employ are well-correlated with desirable conversation outcomes. We conduct binary classification experiments to analyze how our predicted dialogue acts can be used to classify conversations as ending in customer satisfaction, customer frustration, and problem resolution. We observe interesting correlations between the dialogue acts agents use and the outcomes, offering insights into good/bad practices that are more useful for creating context-aware automated customer service systems than generating canned response templates. Future directions for this work revolve around the integration of the insights derived in the design of automated customer service systems. To this end, we aim to improve the taxonomy and annotation design by consulting domain-experts and using annotator feedback and agreement information, derive more powerful features for dialogue act prediction, and automate ranking and selection of best-practice rules based on domain requirements for automated customer service system design.
Yes
Q: Which patterns and rules are derived?
In this work, we are motivated to predict the dialogue acts in conversations with the intent of identifying problem spots that can be addressed in real-time, and to allow for post-conversation analysis to derive rules about conversation outcomes indicating successful/unsuccessful interactions, namely, customer satisfaction, customer frustration, and problem resolution. We focus on analysis of the dialogue acts used in customer service conversations as a first step to fully automating the interaction. We address various different challenges: dialogue act annotated data is not available for customer service on Twitter, the task of dialogue act annotation is subjective, existing taxonomies do not capture the fine-grained information we believe is valuable to our task, and tweets, although concise in nature, often consist of overlapping dialogue acts to characterize their full intent. The novelty of our work comes from the development of our fine-grained dialogue act taxonomy and multi-label approach for act prediction, as well as our analysis of the customer service domain on Twitter. Our goal is to offer useful analytics to improve outcome-oriented conversational systems. We first expand upon previous work and generic dialogue act taxonomies, developing a fine-grained set of dialogue acts for customer service, and conducting a systematic user study to identify these acts in a dataset of 800 conversations from four Twitter customer service accounts (i.e. four different companies in the telecommunication, electronics, and insurance industries). We then aim to understand the conversation flow between customers and agents using our taxonomy, so we develop a real-time sequential SVM-HMM model to predict our fine-grained dialogue acts while a conversation is in progress, using a novel multi-label scheme to classify each turn. Finally, using our dialogue act predictions, we classify conversations based on the outcomes of customer satisfaction, frustration, and overall problem resolution, then provide actionable guidelines for the development of automated customer service systems and intelligent agents aimed at desired customer outcomes BIBREF3 , BIBREF4 . We begin with a discussion of related work, followed by an overview of our methodology. Next, we describe our conversation modeling framework, and explain our outcome analysis experiments, to show how we derive useful patterns for designing automated customer service agents. Finally, we present conclusions and directions for future work. Related Work Developing computational speech and dialogue act models has long been a topic of interest BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , with researchers from many different backgrounds studying human conversations and developing theories around conversational analysis and interpretation on intent. Modern intelligent conversational BIBREF3 , BIBREF4 and dialogue systems draw principles from many disciplines, including philosophy, linguistics, computer science, and sociology. In this section, we describe relevant previous work on speech and dialogue act modeling, general conversation modeling on Twitter, and speech and dialogue act modeling of customer service in other data sources. Previous work has explored speech act modeling in different domains (as a predecessor to dialogue act modeling). Zhang et al. present work on recognition of speech acts on Twitter, following up with a study on scalable speech act recognition given the difficulty of obtaining labeled training data BIBREF9 . 
They use a simple taxonomy of four main speech acts (Statement, Question, Suggestion, Comment, and a Miscellaneous category). More recently, Vosoughi et al. develop BIBREF10 a speech act classifier for Twitter, using a modification of the taxonomy defined by Searle in 1975, including six acts they observe to commonly occur on Twitter: Assertion, Recommendation Expression, Question, Request, again plus a Miscellaneous category. They describe good features for speech act classification and the application of such a system to detect stories on social media BIBREF11 . In this work, we are interested in the dialogic characteristics of Twitter conversations, rather than speech acts in stand-alone tweets. Different dialogue act taxonomies have been developed to characterize conversational acts. Core and Allen present the Dialogue Act Marking in Several Layers (DAMSL), a standard for discourse annotation that was developed in 1997 BIBREF0 . The taxonomy contains a total of 220 tags, divided into four main categories: communicative status, information level, forward-looking function, and backward-looking function. Jurafsky, Shriberg, and Biasca develop a less fine-grained taxonomy of 42 tags based on DAMSL BIBREF1 . Stolcke et al. employ a similar set for general conversation BIBREF2 , citing that "content- and task-related distinctions will always play an important role in effective DA [Dialogue Act] labeling." Many researchers have tackled the task of developing different speech and dialogue act taxonomies and coding schemes BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . For the purposes of our own research, we require a set of dialogue acts that is more closely representative of customer service domain interactions - thus we expand upon previously defined taxonomies and develop a more fine-grained set. Modeling general conversation on Twitter has also been a topic of interest in previous work. Honeycutt and Herring study conversation and collaboration on Twitter using individual tweets containing "@" mentions BIBREF16 . Ritter et al. explore unsupervised modeling of Twitter conversations, using clustering methods on a corpus of 1.3 million Twitter conversations to define a model of transitional flow between in a general Twitter dialogue BIBREF17 . While these approaches are relevant to understanding the nature of interactions on Twitter, we find that the customer service domain presents its own interesting characteristics that are worth exploring further. The most related previous work has explored speech and dialogue act modeling in customer service, however, no previous work has focused on Twitter as a data source. In 2005, Ivanovic uses an abridged set of 12 course-grained dialogue acts (detailed in the Taxonomy section) to describe interactions between customers and agents in instant messaging chats BIBREF18 , BIBREF19 , leading to a proposal on response suggestion using the proposed dialogue acts BIBREF20 . Follow-up work using the taxonomy selected by Ivanovic comes from Kim et al., where they focus on classifying dialogue acts in both one-on-one and multi-party live instant messaging chats BIBREF21 , BIBREF22 . These works are similar to ours in the nature of the problem addressed, but we use a much more fine-grained taxonomy to define the interactions possible in the customer service domain, and focus on Twitter conversations, which are unique in their brevity and the nature of the public interactions. The most similar work to our own is that of Herzig et al. 
on classifying emotions in customer support dialogues on Twitter BIBREF23 . They explore how agent responses should be tailored to the detected emotional response in customers, in order to improve the quality of service agents can provide. Rather than focusing on emotional response, we seek to model the dialogic structure and intents of the speakers using dialogue acts, with emotion included as features in our model, to characterize the emotional intent within each act. Methodology The underlying goal of this work is to show how a well-defined taxonomy of dialogue acts can be used to summarize semantic information in real-time about the flow of a conversation to derive meaningful insights into the success/failure of the interaction, and then to develop actionable rules to be used in automating customer service interactions. We focus on the customer service domain on Twitter, which has not previously been explored in the context of dialogue act classification. In this new domain, we can provide meaningful recommendations about good communicative practices, based on real data. Our methodology pipeline is shown in Figure FIGREF2 . Taxonomy Definition As described in the related work, the taxonomy of 12 acts to classify dialogue acts in an instant-messaging scenario, developed by Ivanovic in 2005, has been used by previous work when approaching the task of dialogue act classification for customer service BIBREF18 , BIBREF20 , BIBREF19 , BIBREF21 , BIBREF22 . The dataset used consisted of eight conversations from chat logs in the MSN Shopping Service (around 550 turns spanning around 4,500 words) BIBREF19 . The conversations were gathered by asking five volunteers to use the platform to inquire for help regarding various hypothetical situations (i.e. buying an item for someone) BIBREF19 . The process of selection of tags to develop the taxonomy, beginning with the 42 tags from the DAMSL set BIBREF0 , involved removing tags inappropriate for written text, and collapsing sets of tags into a more coarse-grained label BIBREF18 . The final taxonomy consists of the following 12 dialogue acts (sorted by frequency in the dataset): Statement (36%), Thanking (14.7%), Yes-No Question (13.9%), Response-Acknowledgement (7.2%), Request (5.9%), Open-Question (5.3%), Yes-Answer (5.1%), Conventional-Closing (2.9%), No-Answer (2.5%), Conventional-Opening (2.3%), Expressive (2.3%) and Downplayer (1.9%). For the purposes of our own research, focused on customer service on Twitter, we found that the course-grained nature of the taxonomy presented a natural shortcoming in terms of what information could be learned by performing classification at this level. We observe that while having a smaller set of dialogue acts may be helpful for achieving good agreement between annotators (Ivanovic cites kappas of 0.87 between the three expert annotators using this tag set on his data BIBREF18 ), it is unable to offer deeper semantic insight into the specific intent behind each act for many of the categories. For example, the Statement act, which comprises the largest percentage (36% of turns), is an extremely broad category that fails to provide useful information from an analytical perspective. Likewise, the Request category also does not specify any intent behind the act, and leaves much room for improvement. 
For this reason, and motivated by previous work seeking to develop dialogue act taxonomies appropriate for different domains BIBREF19, BIBREF21, we convert the list of dialogue acts presented in the literature into a hierarchical taxonomy, shown in Figure FIGREF6. We first organize the taxonomy into six high-level dialogue acts: Greeting, Statement, Request, Question, Answer, and Social Act. Then, we update the taxonomy using two main steps: restructuring and adding additional fine-grained acts. We base our changes upon the taxonomy used by Ivanovic and Kim et al. in their work on instant messaging chat dialogues BIBREF19, BIBREF21, but also on general dialogue acts observed in the customer service domain, including complaints and suggestions. Our taxonomy does not place any specific restrictions on which party in the dialogue may perform each act, but we do observe that some acts are far more frequent for one speaker than the other (and sometimes non-existent), depending on whether the customer or the agent is speaking (for example, the Statement Complaint category never shows up in Agent turns). In order to account for gaps in the act selections available to annotators, we include an Other act in the broadest categories. While our taxonomy fills in many gaps from previous work in our domain, we do not claim to cover all possible acts in this domain. Our taxonomy allows us to more closely specify the intent and motivation behind each turn, and ultimately how to address different situations. Data Collection Given our taxonomy of fine-grained dialogue acts that expands upon previous work, we set out to gather annotations for Twitter customer service conversations. For our data collection phase, we begin with conversations from the Twitter customer service pages of four different companies, from the electronics, telecommunications, and insurance industries. We perform several forms of pre-processing on the conversations. We filter out conversations if they contain more than one customer or agent speaker, do not have alternating customer/agent speaking turns (single turn per speaker), have fewer than 5 or more than 10 turns, have fewer than 70 words in total, or if any turn in the conversation ends in an ellipsis followed by a link (indicating that the turn has been cut off due to length, and spans another tweet). Additionally, we remove any references to the company names (substituting with "Agent"), remove any references to customer usernames (substituting with "Customer"), and replace any links or image references with <link> and <img> tokens. Using these filters as pre-processing methods, we end up with a set of 800 conversations, spanning 5,327 turns. We conduct our annotation study on Amazon Mechanical Turk, presenting Turkers with Human Intelligence Tasks (henceforth, HITs) consisting of a single conversation between a customer and an agent. In each HIT, we present Turkers with a definition of each dialogue act, as well as a sample annotated dialogue for reference. For each turn in the conversation, we allow Turkers to select as many labels from our taxonomy as required to fully characterize the intent of the turn. Additionally, annotators are asked three questions at the end of each conversation HIT, to which they could respond that they agreed, disagreed, or could not tell: whether the customer seemed satisfied at the end of the conversation, whether the customer's problem appeared to be resolved, and whether the customer seemed frustrated. We ask 5 Turkers to annotate each conversation HIT, and pay $0.20 per HIT.
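As a concrete illustration of the filtering and anonymization steps above, the following is a minimal Python sketch. The conversation layout (a list of turn dicts), the company-handle list, and the regular expressions are assumptions for illustration, not the exact pipeline used in this work.

```python
import re

# Hypothetical conversation format: a list of turns, each a dict with
# "speaker" ("Customer"/"Agent") and "text" fields.
COMPANY_HANDLES = ["@AcmeSupport"]  # assumed placeholder handles
ELLIPSIS_LINK = re.compile(r"(\.\.\.|…)\s*https?://\S+\s*$")

def keep_conversation(turns):
    """Apply the filtering criteria described above."""
    if not (5 <= len(turns) <= 10):
        return False
    if sum(len(t["text"].split()) for t in turns) < 70:
        return False
    # Require strictly alternating Customer/Agent turns.
    if any(a["speaker"] == b["speaker"] for a, b in zip(turns, turns[1:])):
        return False
    # Drop conversations where any turn was truncated ("... <link>").
    if any(ELLIPSIS_LINK.search(t["text"]) for t in turns):
        return False
    return True

def anonymize(text):
    """Replace company/customer mentions and links/images with tokens."""
    for handle in COMPANY_HANDLES:
        text = text.replace(handle, "Agent")
    text = re.sub(r"@\w+", "Customer", text)                 # remaining handles
    text = re.sub(r"https?://\S+", "<link>", text)           # URLs
    text = re.sub(r"pic\.twitter\.com/\S+", "<img>", text)   # image references
    return text
```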
We find the list of "majority dialogue acts" for each tweet by finding any acts that have received majority-vote labels (at least 3 out of 5 judgements). It is important to note at this point that we make an important choice as to how we will handle dialogue act tagging for each turn. We note that each turn may contain more than one dialogue act vital to carry its full meaning. Thus, we choose not to carry out a specific segmentation task on our tweets, contrary to previous work BIBREF24 , BIBREF25 , opting to characterize each tweet as a single unit composed of different, often overlapping, dialogue acts. Table TABREF16 shows examples of tweets that receive majority vote on more than one label, where the act boundaries are overlapping and not necessarily distinguishable. It is clear that the lines differentiating these acts are not very well defined, and that segmentation would not necessarily aid in clearly separating out each intent. For these reasons, and due to the overall brevity of tweets in general, we choose to avoid the overhead of requiring annotators to provide segment boundaries, and instead ask for all appropriate dialogue acts. Annotation Results Figure FIGREF17 shows the distribution of the number of times each dialogue act in our taxonomy is selected a majority act by the annotators (recall that each turn is annotated by 5 annotators). From the distribution, we see that the largest class is Statement Info which is part of the majority vote list for 2,152 of the 5,327 total turns, followed by Request Info, which appears in 1,088 of the total turns. Although Statement Informative comprises the largest set of majority labels in the data (as did Statement in Ivanovic's distribution), we do observe that other fine-grained categories of Statement occur in the most frequent labels as well, including Statement Complaint, Statement Expressive Negative, and Statement Suggestion – giving more useful information as to what form of statement is most frequently occurring. We find that 147 tweets receive no majority label (i.e. no single act received 3 or more votes out of 5). At the tail of the distribution, we see less frequent acts, such as Statement Sarcasm, Social Act Downplayer, Statement Promise, Greeting Closing, and Request Other. It is also interesting to note that both opening and closing greetings occur infrequently in the data – which is understandable given the nature of Twitter conversation, where formal greeting is not generally required. Table TABREF19 shows a more detailed summary of the distribution of our top 12 dialogue acts according to the annotation experiments, as presented by Ivanovic BIBREF18 . Since each turn has an overlapping set of labels, the column % of Turns (5,327) represents what fraction of the total 5,327 turns contain that dialogue act label (these values do not sum to 1, since there is overlap). To give a better sense of the percentage appearance of each dialogue act class in terms of the total number of annotated labels given, we also present column % of Annotations (10,343) (these values are percentages). We measure agreement in our annotations using a few different techniques. Since each item in our annotation experiments allows for multiple labels, we first design an agreement measure that accounts for how frequently each annotator selects the acts that agree with the majority-selected labels for the turns they annotated. 
To calculate this for each annotator, we find the number of majority-selected acts for each conversation they annotated (call this MAJ), and the number of those acts that the annotator also selected (call this SUBS), and compute the ratio (SUBS/MAJ). We use this ratio to systematically fine-tune our set of annotators by running our annotation in four batches, restricting our pool of annotators to those who have above a 0.60 ratio of agreement with the majority from the previous batch, as a form of quality assurance. We also measure Fleiss' Kappa BIBREF26 agreement between annotators in two ways: first by normalizing our annotation results into binary-valued items indicating annotators' votes for each label contained within each turn. We find an average Fleiss-INLINEFORM0 for the full dataset, including all turn-and-label items, representing moderate agreement on the 24-label problem. We also calculate the Fleiss-INLINEFORM0 values for each label, and use the categories defined by Landis and Koch to bin our speech acts based on agreement BIBREF27. As shown in Table TABREF18, we find that the per-label agreement varies from "almost perfect" agreement of INLINEFORM1 for lexically defined categories such as Apology and Thanks, down to only slight agreement of INLINEFORM2 for less clearly defined categories, such as Statement (Other), Answer Response Acknowledgement, and Request (Other). For the conversation-level questions, we calculate the agreement across the "Agree" label for all annotators, finding an average Fleiss-INLINEFORM3, with question-level results of INLINEFORM4 for customer satisfaction, INLINEFORM5 for problem resolution, and INLINEFORM6 for customer frustration. These results suggest room for improvement in further development of the taxonomy, to address problem areas for annotators and remedy areas of lower agreement. Motivation for Multi-Label Classification We test our hypothesis that tweet turns are often characterized by more than one distinct dialogue act label by measuring the percentage overlap between frequent pairs of labels. Of the 5,327 turns annotated, across the 800 conversations, we find that 3,593 of those turns (67.4%) contained more than one majority-act label. Table TABREF22 shows the distribution percentage of the most frequent pairs. For example, we observe that answering with informative statements is the most frequent pair, followed by complaints coupled with negative sentiment or informative statements. We also observe that requests are usually formed as questions, but also co-occur frequently with apologies. This experiment validates our intuition that the majority of turns do contain more than a single label, and motivates our use of a multi-label classification method for characterizing each turn in the conversation modeling experiments we present in the next section. Conversation Modeling In this section, we describe the setup and results of our conversational modeling experiments on the data we collected using our fine-grained taxonomy of customer service dialogue acts. We begin with an overview of the features and classes used, followed by our experimental setup and results for each experiment performed.
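A short sketch of the SUBS/MAJ agreement ratio described above; the nested dictionary layout for storing votes is an assumption for illustration (Fleiss' kappa itself can be computed with standard statistical packages):

```python
from collections import Counter

# Assumed layout: conversation id -> {turn_id: {annotator_id: set_of_acts}}.
def annotator_agreement_ratio(conversations, annotator_id, min_votes=3):
    maj, subs = 0, 0
    for turns in conversations.values():
        for votes_by_annotator in turns.values():
            if annotator_id not in votes_by_annotator:
                continue
            counts = Counter(a for acts in votes_by_annotator.values() for a in acts)
            majority = {a for a, c in counts.items() if c >= min_votes}
            maj += len(majority)
            subs += len(majority & votes_by_annotator[annotator_id])
    return subs / maj if maj else 0.0

# Annotators scoring below 0.60 against the previous batch's majorities
# would be excluded from the next annotation batch.
```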
Features The following list describes the set of features used for our dialogue act classification tasks (a minimal extraction sketch in Python appears after the experimental setup below):
- Word/Punctuation: binary bag-of-word unigrams, binary existence of a question mark, and binary existence of an exclamation mark in a turn
- Temporal: response time of a turn (time in seconds elapsed between the posting time of the previous turn and that of the current turn)
- Second-Person Reference: existence of an explicit second-person reference in the turn (you, your, you're)
- Emotion: count of words in each of the 8 emotion classes and 2 sentiment classes from the NRC emotion lexicon BIBREF28 (anger, anticipation, disgust, fear, joy, sadness, surprise, and trust, plus negative and positive)
- Dialogue: lexical indicators in the turn: opening greetings (hi, hello, greetings, etc.), closing greetings (bye, goodbye), yes-no questions (turns with questions starting with do, did, can, could, etc.), wh- questions (turns with questions starting with who, what, where, etc.), thanking (thank*), apology (sorry, apolog*), yes-answer, and no-answer
Classes Table TABREF30 shows the division of classes we use for each of our experiments. We select our classes using the distribution of annotations we observe in our data collection phase (see Table TABREF19), selecting the top 12 classes as candidates. While iteratively selecting the most frequently-occurring classes helps to ensure that classes with the most data are represented in our experiments, it also introduces the problem of including classes that are very well-defined lexically, and may not require learning for classification, such as Social Act Apology and Social Act Thanking in the first 10-Class set. For this reason, we call this set 10-Class (Easy), and also experiment using a 10-Class (Hard) set, where we replace these two lexically well-defined labels with the next two less-defined and more semantically rich labels, Statement Offer and Question Open. When using each set of classes, a turn is either classified as one of the classes in the set, or it is classified as "other" (i.e. any of the other classes). We discuss our experiments in more detail and comment on performance differences in the experiment section. Experiments Following previous work on conversation modeling BIBREF23, we use a sequential SVM-HMM (using the INLINEFORM0 toolkit BIBREF29) for our conversation modeling experiments. We hypothesize that a sequential model is most suited to our dialogic data, and that we will be able to concisely capture conversational attributes such as the order in which dialogue acts often occur (i.e. an Answer act often following a Question act, or an Apology act following a Complaint). We note that, with default settings, for a sequence of length n an SVM-HMM model is able to refine its prediction for any turn t as information becomes available for the turns that follow it. However, we opt to design our classifier under a real-time setting, where turn-by-turn classification is required without future knowledge or adaptation of the prediction at any given stage. In our setup, turns are predicted in a real-time setting to fairly model the information available to an intelligent agent in a conversational system. At any point, a turn t is predicted using information only from turns 1 through t, and a prediction is not changed when new information becomes available. We test our hypothesis by comparing our real-time sequential SVM-HMM model to non-sequential baselines from the NLTK BIBREF30 and Scikit-Learn BIBREF31 toolkits.
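The sketch below illustrates the per-turn feature extraction described in the Features list; the token pattern, keyword lists, and the emotion-lexicon lookup format are illustrative assumptions rather than the exact implementation used here.

```python
import re

SECOND_PERSON = ("you", "your", "you're", "youre")
OPENINGS = ("hi", "hello", "greetings", "hey")
CLOSINGS = ("bye", "goodbye")
YESNO_STARTS = ("do", "did", "can", "could", "would", "will", "is", "are")
WH_STARTS = ("who", "what", "where", "when", "why", "how")

def turn_features(text, response_time_s, emotion_lexicon):
    """Sketch of per-turn features; `emotion_lexicon` is assumed to map a
    word to the set of NRC categories it belongs to."""
    tokens = re.findall(r"[a-z']+", text.lower())
    feats = {f"bow={t}": 1 for t in set(tokens)}          # binary unigrams
    feats["has_question_mark"] = int("?" in text)
    feats["has_exclamation"] = int("!" in text)
    feats["response_time"] = response_time_s
    feats["second_person"] = int(any(t in SECOND_PERSON for t in tokens))
    # Emotion/sentiment counts over the NRC categories found in the lexicon.
    for t in tokens:
        for cat in emotion_lexicon.get(t, ()):
            feats[f"emotion={cat}"] = feats.get(f"emotion={cat}", 0) + 1
    # Dialogue lexical indicators.
    feats["opening_greeting"] = int(any(t in OPENINGS for t in tokens))
    feats["closing_greeting"] = int(any(t in CLOSINGS for t in tokens))
    feats["thanking"] = int(any(t.startswith("thank") for t in tokens))
    feats["apology"] = int(any(t.startswith(("sorry", "apolog")) for t in tokens))
    first = tokens[0] if tokens else ""
    feats["yes_no_question"] = int("?" in text and first in YESNO_STARTS)
    feats["wh_question"] = int("?" in text and first in WH_STARTS)
    return feats
```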
We design our feature set (described above) to be generic enough to apply to both our sequential and non-sequential models, in order to allow us to fairly compare performance. We shuffle and divide our data into 70% for training and development (560 conversations, using 10-fold cross-validation for parameter tuning), and hold out 30% of the data (240 conversations) for test. Motivated by the prevalent overlap of dialogue acts, we conduct our learning experiments using a multi-label setup. For each of the sets of classes, we conduct a binary classification task for each label: for each n-class classification task, a turn is labeled as either belonging to the current label, or not (i.e. "other"). In this setup, each turn is assigned a binary value for each label (i.e. for the 6-class experiment, each turn receives a value of 0/1 for each label, indicating whether the classifier predicts it to be relevant to each of the 6 labels). Thus, for each n-class experiment, we end up with n binary labels, for example, whether the turn is a Statement Informative or Other, Request Information or Other, etc. We aggregate the n binary predictions for each turn, then compare the resultant prediction matrix for all turns to our majority-vote ground-truth labels, where at least 3 out of 5 annotators have selected a label to be true for a given turn. The difficulty of the task increases as the number of classes n increases, as there are more classifications done for each turn (i.e., for the 6-class problem, there are 6 classification tasks per turn, while for the 8-class problem, there are 8, etc.). Due to the inherent imbalance of the label distribution in the data (shown in Figure FIGREF17), we use weighted F-macro to calculate our final scores for each feature set (which finds the average of the metrics for each label, weighted by the number of true instances for that label) BIBREF31. Our first experiment sets out to compare the use of a non-sequential classification algorithm versus a sequential model for dialogue act classification on our dataset. We experiment with the default Naive Bayes (NB) and Linear SVC algorithms from Scikit-Learn BIBREF31, comparing with our sequential SVM-HMM model. We test each classifier on each of our four class sets, reporting weighted F-macro for each experiment. Figure FIGREF33 shows the results of the experiments. From this experiment, we observe that our sequential SVM-HMM outperforms each non-sequential baseline, for each of the four class sets. We select the sequential SVM-HMM as our preferred model for subsequent experiments. We observe that while performance might be expected to drop as the number of classes increases, we instead get a spike in performance for the 10-Class (Easy) setting. This increase occurs due to the addition of the lexically well-defined classes of Social Act Apology and Social Act Thanking, which are much simpler for our model to predict. Their addition results in a performance boost, comparable to that of the simpler 6-Class problem. When we remove the two well-defined classes and add in the next two broader dialogue act classes of Statement Offer and Question Open (as defined by the 10-Class (Hard) set), we observe a drop in performance, and an overall result comparable to our 8-Class problem. This result is still strong, since the number of classes has increased, but the overall performance does not drop.
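A compact sketch of the per-label binary setup and the weighted F-measure scoring, shown for the non-sequential LinearSVC baseline; the data containers and their layout are assumptions, and the sequential SVM-HMM would replace the classifier while keeping the same multi-label scheme.

```python
import numpy as np
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

def per_label_baseline(train_turns, train_labels, test_turns, test_labels, class_set):
    """`train_turns`/`test_turns`: lists of per-turn feature dicts (see sketch above);
    `train_labels`/`test_labels`: lists of majority-act sets per turn."""
    vec = DictVectorizer()
    X_train = vec.fit_transform(train_turns)
    X_test = vec.transform(test_turns)

    scores = {}
    for label in class_set:                      # e.g. the 6-, 8- or 10-class sets
        y_train = np.array([int(label in acts) for acts in train_labels])
        y_test = np.array([int(label in acts) for acts in test_labels])
        clf = LinearSVC().fit(X_train, y_train)
        y_pred = clf.predict(X_test)
        # Weighted F-measure over the {label, other} outcomes for this task.
        scores[label] = f1_score(y_test, y_pred, average="weighted")
    return scores
```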
We also observe that while NB and Linear SVC have the same performance trend for the smaller class sets, Linear SVC rapidly improves in performance as the number of classes increases, following the same trend as SVM-HMM. The smallest margin of difference between SVM-HMM and Linear SVC also occurs at the 10-Class (Easy) setting, where the addition of highly lexical classes makes for a more differentiable set of turns. Our next experiment tests the differences in performance when training and testing our real-time sequential SVM-HMM model using only a single speaker's turns (i.e. only Customer or only Agent turns). Figure FIGREF35 shows the relative performance of using only speaker-specific turns, versus our standard results using all turns. We observe that using Customer-only turns gives us lower prediction performance than using both speakers' turns, but that Agent-only turns actually give us higher performance. Since agents are put through training on how to interact with customers (often using templates), agent behavior is significantly more predictable than customer behavior, and it is easier to predict agent turns even without utilizing any customer turn information (which is more varied, and thus more difficult to predict). We again observe a boost in performance in our 10-Class (Easy) set, due to the inclusion of lexically well-defined classes. Notably, we achieve best performance for the 10-Class (Easy) set using only agent turns, where the Apology and Thanking classes are both prevalent and predictable. In our final experiment, we explore the changes in performance we get by splitting the training and test data based on company domain. We compare this performance with our standard setup for SVM-HMM from our baseline experiments (Figure FIGREF33), where our train-test data splitting is company-independent (i.e. all conversations are randomized, and no information is used to differentiate companies or domains). To recap, our data consists of conversations from four companies in three different industrial domains (one from the telecommunication domain, two from the electronics domain, and one from the insurance domain). We create four different versions of our 6-class real-time sequential SVM-HMM, where we train on the data from three of the companies, and test on the remaining company. We present our findings in Table TABREF37. From the table, we see that our real-time model achieves the best prediction results when we use one of the electronics companies in the test fold, even though the number of training samples is smallest in these cases. On the other hand, when we assign the insurance company to the test fold, our model's prediction performance is comparatively low. Upon further investigation, we find that customer-agent conversations in the telecommunication and electronics domains are more similar to each other than to those in the insurance domain. Our findings show that our model is robust to different domains as our test set size increases, and that our more generic, company-independent experiment gives us better performance than any domain-specific experiments. Conversation Outcome Analysis Given our observation that Agent turns are more predictable, and that we achieve best performance in a company-independent setting, we question whether the training that agents receive is actually reliable in terms of resulting in overall "satisfied customers", regardless of company domain.
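Returning to the company-domain experiment above, the leave-one-company-out split reduces to a simple loop. A minimal sketch, where the company names, data layout, and the train_and_eval callable are placeholders:

```python
# Assumed layout: `conversations` is a list of (company, X, y) items, where X/y
# hold the per-turn features and majority labels for one conversation.
COMPANIES = ["telecom_1", "electronics_1", "electronics_2", "insurance_1"]  # placeholders

def leave_one_company_out(conversations, train_and_eval):
    """`train_and_eval(train, test)` is any callable returning a weighted F-macro,
    e.g. a wrapper around the sequential SVM-HMM training/prediction step."""
    results = {}
    for held_out in COMPANIES:
        train = [c for c in conversations if c[0] != held_out]
        test = [c for c in conversations if c[0] == held_out]
        results[held_out] = train_and_eval(train, test)
    return results
```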
Ultimately, our goal is to discover whether we can use the insight we derive from our predicted dialogue acts to better inform conversational systems aimed at offering customer support. Our next set of experiments aims to show the utility of our real-time dialogue act classification as a method for summarizing the semantic intent in a conversation into rules that can be used to guide automated systems. Classifying Problem Outcomes We conduct three supervised classification experiments to better understand full-conversation outcomes, using the default Linear SVC classifier in Scikit-Learn BIBREF31 (which gave us our best baseline for the dialogue act classification task). Each classification experiment centers on one of three problem outcomes: customer satisfaction, problem resolution, and customer frustration. For each outcome, we remove any conversation that did not receive majority consensus for a label, or that received a majority vote of "can't tell". Our final conversation sets consist of 216 satisfied and 500 unsatisfied customer conversations, 271 resolved and 425 unresolved problem conversations, and 534 frustrated and 229 not frustrated customer conversations. We retain the inherent imbalance in the data to match the natural distribution observed. The clear prevalence of responses indicating negative outcomes further motivates us to understand what sorts of dialogic patterns result in such outcomes. We run the experiment for each conversation outcome using 10-fold cross-validation, under each of our four class settings: 6-Class, 8-Class, 10-Class (Easy), and 10-Class (Hard). The first feature set we use is Best_Features (from the original dialogue act classification experiments), which we run as a baseline. Our second feature set is our Dialogue_Acts predictions for each turn – we choose the single most probable dialogue act prediction for each turn using our dialogue act classification framework, to avoid sparsity. In this way, for each class size, each conversation is converted into a vector of up to 10 features, each describing the most strongly associated dialogue act for one turn, paired with the corresponding turn number. For example, a conversation feature vector may look as follows: INLINEFORM2 . Thus, our classifier can then learn patterns based on these features (for example, that specific acts appearing at the end of a conversation are strong indicators of customer satisfaction) that allow us to derive rules about successful/unsuccessful interactions. Figure FIGREF38 shows the results of our binary classification experiments for each outcome. For each experiment, the Best_Features set is constant over each class size, while the Dialogue_Act features are affected by class size (since the predicted act for each turn will change based on the set of acts available for that class size). Our first observation is that we achieve high performance on the binary classification task, reaching F-measures of 0.70, 0.65, and 0.83 for the satisfaction, resolution, and frustration outcomes, respectively. Also, we observe that the performance of our predicted dialogue act features is comparable to that of the much larger set of best features for each label (almost identical in the case of frustration). In more detail, we note interesting differences when comparing the performance of the small set of dialogue act features that "summarize" the large, sparse set of best features for each label, as a form of data-driven feature selection.
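A minimal sketch of the Dialogue_Acts outcome features and the cross-validated Linear SVC setup described above; the data containers (per-conversation act sequences and binary outcome labels) are assumptions for illustration.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def conversation_vector(predicted_acts):
    """One feature per turn, pairing the turn position with its predicted act."""
    return {f"turn{i}_act={act}": 1 for i, act in enumerate(predicted_acts, start=1)}

def outcome_classifier(conversations_acts, outcomes):
    """`conversations_acts`: list of per-conversation act sequences;
    `outcomes`: binary labels such as satisfied (1) vs. unsatisfied (0)."""
    vec = DictVectorizer()
    X = vec.fit_transform([conversation_vector(acts) for acts in conversations_acts])
    clf = LinearSVC()
    # 10-fold cross-validation with weighted F1, as in the experiments above.
    scores = cross_val_score(clf, X, outcomes, cv=10, scoring="f1_weighted")
    return clf.fit(X, outcomes), vec, scores.mean()
```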
For satisfaction, we see that the best feature set outperforms the dialogue acts for each class set except 10-Class (Easy), where the dialogue acts are more effective. The existence of the very lexically well-defined Social Act Thanking and Social Act Apology classes makes the dialogue acts ideal for summarization. In the case of problem resolution, we see that the performance of the dialogue acts approaches that of the best feature set as the number of classes increases, showing that the dialogue act features are able to express the full intent of the turns well, even at the more difficult class settings. Finally, for the frustration experiment, we observe a negligible difference between the best features and the dialogue act features, and very high classification results overall. Actionable Rules for Automated Customer Support While these experiments highlight how we can use dialogue act predictions as a means to greatly reduce feature sparsity and predict conversation outcome, our main aim is to gain insight from the use of the dialogue acts to inform and automate customer service interactions. We conduct deeper analysis by taking a closer look at the most informative dialogue act features in each experiment. Table TABREF44 shows the most informative features and weights for each of our three conversation outcomes. To help guide our analysis, we divide the features into positions based on where they occur in the conversation: start (turns 1-3), middle (turns 4-6), and end (turns 7-10). Desirable outcomes (customers that are satisfied/not frustrated and resolved problems) are shown in the top rows of the table, and undesirable outcomes (unsatisfied/frustrated customers and unresolved problems) are shown in the bottom rows. Our analysis helps zero in on how the use of certain dialogue acts may be likely to result in different outcomes. The weights we observe vary in the amount of insight provided: for example, offering extra help at the end of a conversation, or thanking the customer, yields more satisfied customers and more resolved problems (with ratios above 6:1). However, some outcomes are much more subtle: for example, asking yes-no questions early on in a conversation is highly associated with problem resolution (ratio 3:1), but asking them at the end of a conversation has a similarly strong association with unsatisfied customers. Giving elaborate answers that are not a simple affirmative, negative, or response acknowledgement (i.e. Answer (Other)) towards the middle of a conversation leads to satisfied customers that are not frustrated. Likewise, requesting information towards the end of a conversation (implying that more information is still necessary at the termination of the dialogue) leads to unsatisfied and unresolved customers, with ratios of at least 4:1. Using the feature weights we derive from our predicted dialogue acts in the outcome classification experiments, we can thus extract data-driven patterns that offer useful insight into good/bad practices. Our goal is then to use these rules as guidelines, serving as a basis for automated response planning in the customer service domain. For example, these rules can be used to recommend certain dialogue act responses given the position in a conversation, and based on previous turns. This information, derived from correlation with conversation outcomes, gives a valuable addition to conversational flow for intelligent agents, and is more useful than canned responses.
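Reading the most informative features off a linear outcome classifier is straightforward; a short sketch, assuming `clf` and `vec` are the fitted LinearSVC and DictVectorizer from the previous sketch:

```python
import numpy as np

def most_informative_features(clf, vec, top_n=10):
    """Return the top positively and negatively weighted features of a binary
    LinearSVC (use get_feature_names() on older scikit-learn versions)."""
    names = np.array(vec.get_feature_names_out())
    weights = clf.coef_.ravel()
    order = np.argsort(weights)
    positive = list(zip(names[order[-top_n:]][::-1], weights[order[-top_n:]][::-1]))
    negative = list(zip(names[order[:top_n]], weights[order[:top_n]]))
    return positive, negative

# A feature such as "turn9_act=Request Info" carrying a large negative weight for
# the satisfaction outcome would suggest a rule like "avoid requesting information
# late in the conversation".
```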
Conclusions In this paper, we explore how we can analyze dialogic trends in customer service conversations on Twitter to offer insight into good/bad practices with respect to conversation outcomes. We design a novel taxonomy of fine-grained dialogue acts, tailored for the customer service domain, and gather annotations for 800 Twitter conversations. We show that dialogue acts are often semantically overlapping, and conduct multi-label supervised learning experiments to predict multiple appropriate dialogue act labels for each turn in real time, under varying class sizes. We show that our sequential SVM-HMM model outperforms all non-sequential baselines, and plan to continue our exploration of other sequential models, including Conditional Random Fields (CRF) BIBREF32 and Long Short-Term Memory (LSTM) BIBREF33, as well as of dialogue modeling using different Markov Decision Process (MDP) BIBREF34 models such as the Partially Observable MDP (POMDP) BIBREF35. We establish that agents are more predictable than customers in terms of the dialogue acts they utilize, and set out to understand whether the conversation strategies agents employ are well correlated with desirable conversation outcomes. We conduct binary classification experiments to analyze how our predicted dialogue acts can be used to classify conversations as ending in customer satisfaction, customer frustration, and problem resolution. We observe interesting correlations between the dialogue acts agents use and the outcomes, offering insights into good/bad practices that are more useful for creating context-aware automated customer service systems than generating canned response templates. Future directions for this work revolve around the integration of the derived insights into the design of automated customer service systems. To this end, we aim to improve the taxonomy and annotation design by consulting domain experts and using annotator feedback and agreement information, derive more powerful features for dialogue act prediction, and automate the ranking and selection of best-practice rules based on domain requirements for automated customer service system design.
A request for information act should be issued early in a conversation, followed by an answer, informative statement, or apology towards the end of the conversation. Offering extra help at the end of a conversation, or thanking the customer, yields more satisfied customers and more resolved problems. Asking yes-no questions early on in a conversation is highly associated with problem resolution (ratio 3:1), but asking them at the end of a conversation has a similarly strong association with unsatisfied customers. Giving elaborate answers that are not a simple affirmative, negative, or response acknowledgement (i.e. Answer (Other)) towards the middle of a conversation leads to satisfied customers that are not frustrated. Requesting information towards the end of a conversation (implying that more information is still necessary at the termination of the dialogue) leads to unsatisfied and unresolved customers.
d6e8b32048ff83c052e978ff3b8f1cb097377786
d6e8b32048ff83c052e978ff3b8f1cb097377786_0
Q: How are customer satisfaction, customer frustration and overall problem resolution data collected? Text: Introduction The need for real-time, efficient, and reliable customer service has grown in recent years. Twitter has emerged as a popular medium for customer service dialogue, allowing customers to make inquiries and receive instant live support in the public domain. In order to provide useful information to customers, agents must first understand the requirements of the conversation, and offer customers the appropriate feedback. While this may be feasible at the level of a single conversation for a human agent, automatic analysis of conversations is essential for data-driven approaches towards the design of automated customer support agents and systems. Analyzing the dialogic structure of a conversation in terms of the "dialogue acts" used, such as statements or questions, can give important meta-information about conversation flow and content, and can be used as a first step to developing automated agents. Traditional dialogue act taxonomies used to label turns in a conversation are very generic, in order to allow for broad coverage of the majority of dialogue acts possible in a conversation BIBREF0 , BIBREF1 , BIBREF2 . However, for the purpose of understanding and analyzing customer service conversations, generic taxonomies fall short. Table TABREF1 shows a sample customer service conversation between a human agent and customer on Twitter, where the customer and agent take alternating "turns" to discuss the problem. As shown from the dialogue acts used at each turn, simply knowing that a turn is a Statement or Request, as is possible with generic taxonomies, is not enough information to allow for automated handling or response to a problem. We need more fine-grained dialogue acts, such as Informative Statement, Complaint, or Request for Information to capture the speaker's intent, and act accordingly. Likewise, turns often include multiple overlapping dialogue acts, such that a multi-label approach to classification is often more informative than a single-label approach. Dialogue act prediction can be used to guide automatic response generation, and to develop diagnostic tools for the fine-tuning of automatic agents. For example, in Table TABREF1 , the customer's first turn (Turn 1) is categorized as a Complaint, Negative Expressive Statement, and Sarcasm, and the agent's response (Turn 2) is tagged as a Request for Information, Yes-No Question, and Apology. Prediction of these dialogue acts in a real-time setting can be leveraged to generate appropriate automated agent responses to similar situations. Additionally, important patterns can emerge from analysis of the fine-grained acts in a dialogue in a post-prediction setting. For example, if an agent does not follow-up with certain actions in response to a customer's question dialogue act, this could be found to be a violation of a best practice pattern. By analyzing large numbers of dialogue act sequences correlated with specific outcomes, various rules can be derived, i.e. "Continuing to request information late in a conversation often leads to customer dissatisfaction." This can then be codified into a best practice pattern rules for automated systems, such as "A request for information act should be issued early in a conversation, followed by an answer, informative statement, or apology towards the end of the conversation." 
In this work, we are motivated to predict the dialogue acts in conversations with the intent of identifying problem spots that can be addressed in real-time, and to allow for post-conversation analysis to derive rules about conversation outcomes indicating successful/unsuccessful interactions, namely, customer satisfaction, customer frustration, and problem resolution. We focus on analysis of the dialogue acts used in customer service conversations as a first step to fully automating the interaction. We address various different challenges: dialogue act annotated data is not available for customer service on Twitter, the task of dialogue act annotation is subjective, existing taxonomies do not capture the fine-grained information we believe is valuable to our task, and tweets, although concise in nature, often consist of overlapping dialogue acts to characterize their full intent. The novelty of our work comes from the development of our fine-grained dialogue act taxonomy and multi-label approach for act prediction, as well as our analysis of the customer service domain on Twitter. Our goal is to offer useful analytics to improve outcome-oriented conversational systems. We first expand upon previous work and generic dialogue act taxonomies, developing a fine-grained set of dialogue acts for customer service, and conducting a systematic user study to identify these acts in a dataset of 800 conversations from four Twitter customer service accounts (i.e. four different companies in the telecommunication, electronics, and insurance industries). We then aim to understand the conversation flow between customers and agents using our taxonomy, so we develop a real-time sequential SVM-HMM model to predict our fine-grained dialogue acts while a conversation is in progress, using a novel multi-label scheme to classify each turn. Finally, using our dialogue act predictions, we classify conversations based on the outcomes of customer satisfaction, frustration, and overall problem resolution, then provide actionable guidelines for the development of automated customer service systems and intelligent agents aimed at desired customer outcomes BIBREF3 , BIBREF4 . We begin with a discussion of related work, followed by an overview of our methodology. Next, we describe our conversation modeling framework, and explain our outcome analysis experiments, to show how we derive useful patterns for designing automated customer service agents. Finally, we present conclusions and directions for future work. Related Work Developing computational speech and dialogue act models has long been a topic of interest BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , with researchers from many different backgrounds studying human conversations and developing theories around conversational analysis and interpretation on intent. Modern intelligent conversational BIBREF3 , BIBREF4 and dialogue systems draw principles from many disciplines, including philosophy, linguistics, computer science, and sociology. In this section, we describe relevant previous work on speech and dialogue act modeling, general conversation modeling on Twitter, and speech and dialogue act modeling of customer service in other data sources. Previous work has explored speech act modeling in different domains (as a predecessor to dialogue act modeling). Zhang et al. present work on recognition of speech acts on Twitter, following up with a study on scalable speech act recognition given the difficulty of obtaining labeled training data BIBREF9 . 
They use a simple taxonomy of four main speech acts (Statement, Question, Suggestion, Comment, and a Miscellaneous category). More recently, Vosoughi et al. develop BIBREF10 a speech act classifier for Twitter, using a modification of the taxonomy defined by Searle in 1975, including six acts they observe to commonly occur on Twitter: Assertion, Recommendation Expression, Question, Request, again plus a Miscellaneous category. They describe good features for speech act classification and the application of such a system to detect stories on social media BIBREF11 . In this work, we are interested in the dialogic characteristics of Twitter conversations, rather than speech acts in stand-alone tweets. Different dialogue act taxonomies have been developed to characterize conversational acts. Core and Allen present the Dialogue Act Marking in Several Layers (DAMSL), a standard for discourse annotation that was developed in 1997 BIBREF0 . The taxonomy contains a total of 220 tags, divided into four main categories: communicative status, information level, forward-looking function, and backward-looking function. Jurafsky, Shriberg, and Biasca develop a less fine-grained taxonomy of 42 tags based on DAMSL BIBREF1 . Stolcke et al. employ a similar set for general conversation BIBREF2 , citing that "content- and task-related distinctions will always play an important role in effective DA [Dialogue Act] labeling." Many researchers have tackled the task of developing different speech and dialogue act taxonomies and coding schemes BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . For the purposes of our own research, we require a set of dialogue acts that is more closely representative of customer service domain interactions - thus we expand upon previously defined taxonomies and develop a more fine-grained set. Modeling general conversation on Twitter has also been a topic of interest in previous work. Honeycutt and Herring study conversation and collaboration on Twitter using individual tweets containing "@" mentions BIBREF16 . Ritter et al. explore unsupervised modeling of Twitter conversations, using clustering methods on a corpus of 1.3 million Twitter conversations to define a model of transitional flow between in a general Twitter dialogue BIBREF17 . While these approaches are relevant to understanding the nature of interactions on Twitter, we find that the customer service domain presents its own interesting characteristics that are worth exploring further. The most related previous work has explored speech and dialogue act modeling in customer service, however, no previous work has focused on Twitter as a data source. In 2005, Ivanovic uses an abridged set of 12 course-grained dialogue acts (detailed in the Taxonomy section) to describe interactions between customers and agents in instant messaging chats BIBREF18 , BIBREF19 , leading to a proposal on response suggestion using the proposed dialogue acts BIBREF20 . Follow-up work using the taxonomy selected by Ivanovic comes from Kim et al., where they focus on classifying dialogue acts in both one-on-one and multi-party live instant messaging chats BIBREF21 , BIBREF22 . These works are similar to ours in the nature of the problem addressed, but we use a much more fine-grained taxonomy to define the interactions possible in the customer service domain, and focus on Twitter conversations, which are unique in their brevity and the nature of the public interactions. The most similar work to our own is that of Herzig et al. 
on classifying emotions in customer support dialogues on Twitter BIBREF23 . They explore how agent responses should be tailored to the detected emotional response in customers, in order to improve the quality of service agents can provide. Rather than focusing on emotional response, we seek to model the dialogic structure and intents of the speakers using dialogue acts, with emotion included as features in our model, to characterize the emotional intent within each act. Methodology The underlying goal of this work is to show how a well-defined taxonomy of dialogue acts can be used to summarize semantic information in real-time about the flow of a conversation to derive meaningful insights into the success/failure of the interaction, and then to develop actionable rules to be used in automating customer service interactions. We focus on the customer service domain on Twitter, which has not previously been explored in the context of dialogue act classification. In this new domain, we can provide meaningful recommendations about good communicative practices, based on real data. Our methodology pipeline is shown in Figure FIGREF2 . Taxonomy Definition As described in the related work, the taxonomy of 12 acts to classify dialogue acts in an instant-messaging scenario, developed by Ivanovic in 2005, has been used by previous work when approaching the task of dialogue act classification for customer service BIBREF18 , BIBREF20 , BIBREF19 , BIBREF21 , BIBREF22 . The dataset used consisted of eight conversations from chat logs in the MSN Shopping Service (around 550 turns spanning around 4,500 words) BIBREF19 . The conversations were gathered by asking five volunteers to use the platform to inquire for help regarding various hypothetical situations (i.e. buying an item for someone) BIBREF19 . The process of selection of tags to develop the taxonomy, beginning with the 42 tags from the DAMSL set BIBREF0 , involved removing tags inappropriate for written text, and collapsing sets of tags into a more coarse-grained label BIBREF18 . The final taxonomy consists of the following 12 dialogue acts (sorted by frequency in the dataset): Statement (36%), Thanking (14.7%), Yes-No Question (13.9%), Response-Acknowledgement (7.2%), Request (5.9%), Open-Question (5.3%), Yes-Answer (5.1%), Conventional-Closing (2.9%), No-Answer (2.5%), Conventional-Opening (2.3%), Expressive (2.3%) and Downplayer (1.9%). For the purposes of our own research, focused on customer service on Twitter, we found that the course-grained nature of the taxonomy presented a natural shortcoming in terms of what information could be learned by performing classification at this level. We observe that while having a smaller set of dialogue acts may be helpful for achieving good agreement between annotators (Ivanovic cites kappas of 0.87 between the three expert annotators using this tag set on his data BIBREF18 ), it is unable to offer deeper semantic insight into the specific intent behind each act for many of the categories. For example, the Statement act, which comprises the largest percentage (36% of turns), is an extremely broad category that fails to provide useful information from an analytical perspective. Likewise, the Request category also does not specify any intent behind the act, and leaves much room for improvement. 
For this reason, and motivated by previous work seeking to develop dialogue act taxonomies appropriate for different domains BIBREF19 , BIBREF21 , we convert the list of dialogue acts presented by the literature into a hierarchical taxonomy, shown in Figure FIGREF6 . We first organize the taxonomy into six high-level dialogue acts: Greeting, Statement, Request, Question, Answer, and Social Act. Then, we update the taxonomy using two main steps: restructuring and adding additional fine-grained acts. We base our changes upon the taxonomy used by Ivanovic and Kim et al. in their work on instant messaging chat dialogues BIBREF19 , BIBREF21 , but also on general dialogue acts observed in the customer service domain, including complaints and suggestions. Our taxonomy does not make any specific restrictions on which party in the dialogue may perform each act, but we do observe that some acts are far more frequent (and sometimes non-existent) in usage, depending on whether the customer or agent is the speaker (for example, the Statement Complaint category never shows up in Agent turns). In order to account for gaps in available act selections for annotators, we include an Other act in the broadest categories. While our taxonomy fills in many gaps from previous work in our domain, we do not claim to have handled coverage of all possible acts in this domain. Our taxonomy allows us to more closely specify the intent and motivation behind each turn, and ultimately how to address different situations. Data Collection Given our taxonomy of fine-grained dialogue acts that expands upon previous work, we set out to gather annotations for Twitter customer service conversations. For our data collection phase, we begin with conversations from the Twitter customer service pages of four different companies, from the electronics, telecommunications, and insurance industries. We perform several forms of pre-processing to the conversations. We filter out conversations if they contain more than one customer or agent speaker, do not have alternating customer/agent speaking turns (single turn per speaker), have less than 5 or more than 10 turns, have less than 70 words in total, and if any turn in the conversation ends in an ellipses followed by a link (indicating that the turn has been cut off due to length, and spans another tweet). Additionally, we remove any references to the company names (substituting with "Agent"), any references to customer usernames (substituting with "Customer"), and replacing and links or image references with INLINEFORM0 link INLINEFORM1 and INLINEFORM2 img INLINEFORM3 tokens. Using these filters as pre-processing methods, we end up with a set of 800 conversations, spanning 5,327 turns. We conduct our annotation study on Amazon Mechanical Turk, presenting Turkers with Human Intelligence Tasks (henceforth, HITs) consisting of a single conversation between a customer and an agent. In each HIT, we present Turkers with a definition of each dialogue act, as well as a sample annotated dialogue for reference. For each turn in the conversation, we allow Turkers to select as many labels from our taxonomy as required to fully characterize the intent of the turn. Additionally, annotators are asked three questions at the end of each conversation HIT, to which they could respond that they agreed, disagreed, or could not tell: We ask 5 Turkers to annotate each conversation HIT, and pay $0.20 per HIT. 
We find the list of "majority dialogue acts" for each tweet by finding any acts that have received majority-vote labels (at least 3 out of 5 judgements). It is important to note at this point that we make an important choice as to how we will handle dialogue act tagging for each turn. We note that each turn may contain more than one dialogue act vital to carry its full meaning. Thus, we choose not to carry out a specific segmentation task on our tweets, contrary to previous work BIBREF24 , BIBREF25 , opting to characterize each tweet as a single unit composed of different, often overlapping, dialogue acts. Table TABREF16 shows examples of tweets that receive majority vote on more than one label, where the act boundaries are overlapping and not necessarily distinguishable. It is clear that the lines differentiating these acts are not very well defined, and that segmentation would not necessarily aid in clearly separating out each intent. For these reasons, and due to the overall brevity of tweets in general, we choose to avoid the overhead of requiring annotators to provide segment boundaries, and instead ask for all appropriate dialogue acts. Annotation Results Figure FIGREF17 shows the distribution of the number of times each dialogue act in our taxonomy is selected a majority act by the annotators (recall that each turn is annotated by 5 annotators). From the distribution, we see that the largest class is Statement Info which is part of the majority vote list for 2,152 of the 5,327 total turns, followed by Request Info, which appears in 1,088 of the total turns. Although Statement Informative comprises the largest set of majority labels in the data (as did Statement in Ivanovic's distribution), we do observe that other fine-grained categories of Statement occur in the most frequent labels as well, including Statement Complaint, Statement Expressive Negative, and Statement Suggestion – giving more useful information as to what form of statement is most frequently occurring. We find that 147 tweets receive no majority label (i.e. no single act received 3 or more votes out of 5). At the tail of the distribution, we see less frequent acts, such as Statement Sarcasm, Social Act Downplayer, Statement Promise, Greeting Closing, and Request Other. It is also interesting to note that both opening and closing greetings occur infrequently in the data – which is understandable given the nature of Twitter conversation, where formal greeting is not generally required. Table TABREF19 shows a more detailed summary of the distribution of our top 12 dialogue acts according to the annotation experiments, as presented by Ivanovic BIBREF18 . Since each turn has an overlapping set of labels, the column % of Turns (5,327) represents what fraction of the total 5,327 turns contain that dialogue act label (these values do not sum to 1, since there is overlap). To give a better sense of the percentage appearance of each dialogue act class in terms of the total number of annotated labels given, we also present column % of Annotations (10,343) (these values are percentages). We measure agreement in our annotations using a few different techniques. Since each item in our annotation experiments allows for multiple labels, we first design an agreement measure that accounts for how frequently each annotator selects the acts that agree with the majority-selected labels for the turns they annotated. 
To calculate this for each annotator, we find the number of majority-selected acts for each conversation they annotated (call this MAJ), and the number of subset those acts that they selected (call this SUBS), and find the ratio (SUBS/MAJ). We use this ratio to systematically fine-tune our set of annotators by running our annotation in four batches, restricting our pool of annotators to those that have above a 0.60 ratio of agreement with the majority from the previous batch, as a sort of quality assurance test. We also measure Fleiss' Kappa BIBREF26 agreement between annotators in two ways: first by normalizing our annotation results into binary-valued items indicating annotators' votes for each label contain within each turn. We find an average Fleiss- INLINEFORM0 for the full dataset, including all turn-and-label items, representing moderate agreement on the 24-label problem. We also calculate the Fleiss- INLINEFORM0 values for each label, and use the categories defined by Landis and Koch to bin our speech acts based on agreement BIBREF27 . As shown in Table TABREF18 , we find that the per-label agreement varies from "almost perfect" agreement of INLINEFORM1 for lexically defined categories such as Apology and Thanks, with only slight agreement of INLINEFORM2 for less clearly-defined categories, such as Statement (Other), Answer Response Acknowledgement and Request (Other). For the conversation-level questions, we calculate the agreement across the "Agree" label for all annotators, finding an average Fleiss- INLINEFORM3 , with question-level results of INLINEFORM4 for customer satisfaction, INLINEFORM5 for problem resolution, and INLINEFORM6 for customer frustration. These results suggest room for improvement for further development of the taxonomy, to address problem areas for annotators and remedy areas of lower agreement. Motivation for Multi-Label Classification We test our hypothesis that tweet turns are often characterized by more than one distinct dialogue act label by measuring the percentage overlap between frequent pairs of labels. Of the 5,327 turns annotated, across the 800 conversations, we find that 3,593 of those turns (67.4%) contained more than one majority-act label. Table TABREF22 shows the distribution percentage of the most frequent pairs. For example, we observe that answering with informative statements is the most frequent pair, followed by complaints coupled with negative sentiment or informative statements. We also observe that requests are usually formed as questions, but also co-occur frequently with apologies. This experiment validates our intuition that the majority of turns do contain more than a single label, and motivates our use of a multi-label classification method for characterizing each turn in the conversation modeling experiments we present in the next section. Conversation Modeling In this section, we describe the setup and results of our conversational modeling experiments on the data we collected using our fine-grained taxonomy of customer service dialogue acts. We begin with an overview of the features and classes used, followed by our experimental setup and results for each experiment performed. 
Features The following list describes the set of features used for our dialogue act classification tasks: Word/Punctuation: binary bag-of-word unigrams, binary existence of a question mark, binary existence of an exclamation mark in a turn Temporal: response time of a turn (time in seconds elapsed between the posting time of the previous turn and that of the current turn) Second-Person Reference: existence of an explicit second-person reference in the turn (you, your, you're) Emotion: count of words in each of the 8 emotion classes from the NRC emotion lexicon BIBREF28 (anger, anticipation, disgust, fear, joy, negative, positive, sadness, surprise, and trust) Dialogue: lexical indicators in the turn: opening greetings (hi, hello, greetings, etc), closing greetings (bye, goodbye), yes-no questions (turns with questions starting with do, did, can, could, etc), wh- questions (turns with questions starting with who, what, where, etc), thanking (thank*), apology (sorry, apolog*), yes-answer, and no-answer Classes Table TABREF30 shows the division of classes we use for each of our experiments. We select our classes using the distribution of annotations we observe in our data collection phase (see Table TABREF19 ), selecting the top 12 classes as candidates. While iteratively selecting the most frequently-occurring classes helps to ensure that classes with the most data are represented in our experiments, it also introduces the problem of including classes that are very well-defined lexically, and may not require learning for classification, such as Social Act Apology and Social Act Thanking in the first 10-Class set. For this reason, we call this set 10-Class (Easy), and also experiment using a 10-Class (Hard) set, where we add in the next two less-defined and more semantically rich labels, such as Statement Offer and Question Open. When using each set of classes, a turn is either classified as one of the classes in the set, or it is classified as "other" (i.e. any of the other classes). We discuss our experiments in more detail and comment on performance differences in the experiment section. Experiments Following previous work on conversation modeling BIBREF23 , we use a sequential SVM-HMM (using the INLINEFORM0 toolkit BIBREF29 ) for our conversation modeling experiments. We hypothesize that a sequential model is most suited to our dialogic data, and that we will be able to concisely capture conversational attributes such as the order in which dialogue acts often occur (i.e. some Answer act after Question a question act, or Apology acts after Complaints). We note that with default settings for a sequence of length INLINEFORM0 , an SVM-HMM model will be able to refine its answers for any turn INLINEFORM1 as information becomes available for turns INLINEFORM2 . However, we opt to design our classifier under a real-time setting, where turn-by-turn classification is required without future knowledge or adaptation of prediction at any given stage. In our setup, turns are predicted in a real-time setting to fairly model conversation available to an intelligent agent in a conversational system. At any point, a turn INLINEFORM3 is predicted using information from turns INLINEFORM4 , and where a prediction is not changed when new information is available. We test our hypothesis by comparing our real-time sequential SVM-HMM model to non-sequential baselines from the NLTK BIBREF30 and Scikit-Learn BIBREF31 toolkits. 
We design our selected feature set (described above) to be generic enough to apply to both our sequential and non-sequential models, in order to allow us to fairly compare performance. We shuffle and divide our data into 70% for training and development (560 conversations, using 10-fold cross-validation for parameter tuning), and hold out 30% of the data (240 conversations) for test. Motivated by the prevalent overlap of dialogue acts, we conduct our learning experiments using a multi-label setup. For each of the sets of classes, we conduct a binary classification task for each label: for each INLINEFORM0 -class classification task, a turn is labeled as either belonging to the current label, or not (i.e. "other"). In this setup, each turn is assigned a binary value for each label (i.e. for the 6-class experiment, each turn receives a value of 0/1 indicating whether the classifier predicts it to be relevant to each of the 6 labels). Thus, for each INLINEFORM1 -class experiment, we end up with INLINEFORM2 binary labels, for example, whether the turn is a Statement Informative or Other, Request Information or Other, etc. We aggregate the INLINEFORM3 binary predictions for each turn, then compare the resultant prediction matrix for all turns to our majority-vote ground-truth labels, where at least 3 out of 5 annotators have selected a label to be true for a given turn. The difficulty of the task increases as the number of classes INLINEFORM4 increases, as there are more classifications done for each turn (i.e., for the 6-class problem, there are 6 classification tasks per turn, while for the 8-class problem, there are 8, etc). Due to the inherent imbalance of label distribution in the data (shown in Figure FIGREF17 ), we use weighted F-macro to calculate our final scores for each feature set (which finds the average of the metrics for each label, weighted by the number of true instances for that label) BIBREF31 . Our first experiment sets out to compare the use of a non-sequential classification algorithm versus a sequential model for dialogue act classification on our dataset. We experiment with the default Naive Bayes (NB) and Linear SVC algorithms from Scikit-Learn BIBREF31 , comparing with our sequential SVM-HMM model. We test each classifier on each of our four class sets, reporting weighted F-macro for each experiment. Figure FIGREF33 shows the results of the experiments. From this experiment, we observe that our sequential SVM-HMM outperforms each non-sequential baseline, for each of the four class sets. We select the sequential SVM-HMM model as our preferred model for subsequent experiments. We observe that while performance may be expected to drop as the number of classes increases, we instead get a spike in performance for the 10-Class (Easy) setting. This increase occurs due to the addition of the lexically well-defined classes of Social Act Apology and Social Act Thanking, which are much simpler for our model to predict. Their addition results in a performance boost, comparable to that of the simpler 6-Class problem. When we remove the two well-defined classes and add in the next two broader dialogue act classes of Statement Offer and Question Open (as defined by the 10-Class (Hard) set), we observe a drop in performance, and an overall result comparable to our 8-Class problem. This result is still strong, since the number of classes has increased, but the overall performance does not drop below the 8-Class result.
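The per-label binary setup and the weighted F-measure scoring can be approximated with scikit-learn as follows. This sketch uses the non-sequential Linear SVC baseline rather than the SVM-HMM toolkit, and the feature matrices and label layout are placeholders rather than the authors' pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

def multilabel_turn_classification(X_train, Y_train, X_test, Y_test):
    """One binary "label vs. other" classifier per dialogue act, scored with weighted F1.

    X_*: (n_turns, n_features) arrays; Y_*: (n_turns, n_labels) binary matrices built from
    the majority-vote ground truth (a label is true when at least 3 of 5 annotators chose it).
    """
    predictions = np.zeros_like(Y_test)
    for j in range(Y_train.shape[1]):
        classifier = LinearSVC()
        classifier.fit(X_train, Y_train[:, j])
        predictions[:, j] = classifier.predict(X_test)
    # Weighted F-measure: per-label F1 averaged, weighted by the number of true instances per label.
    return predictions, f1_score(Y_test, predictions, average="weighted")
```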
We also observe that while NB and LinearSVC have the same performance trend for the smaller numbers of classes, Linear SVC rapidly improves in performance as the number of classes increases, following the same trend as SVM-HMM. The smallest margin of difference between SVM-HMM and Linear SVC also occurs at the 10-Class (Easy) setting, where the addition of highly-lexical classes makes for a more differentiable set of turns. Our next experiment tests the differences in performance when training and testing our real-time sequential SVM-HMM model using only a single type of speaker's turns (i.e. only Customer or only Agent turns). Figure FIGREF35 shows the relative performance of using only speaker-specific turns, versus our standard results using all turns. We observe that using Customer-only turns gives us lower prediction performance than using both speakers' turns, but that using Agent-only turns actually gives us higher performance. Since agents are put through training on how to interact with customers (often using templates), agent behavior is significantly more predictable than customer behavior, and it is easier to predict agent turns even without utilizing any customer turn information (which is more varied, and thus more difficult to predict). We again observe a boost in performance for our 10-Class (Easy) set, due to the inclusion of lexically well-defined classes. Notably, we achieve best performance for the 10-Class (Easy) set using only agent turns, where the Apology and Thanks classes are both prevalent and predictable. In our final experiment, we explore the changes in performance we get by splitting the training and test data based on company domain. We compare this performance with our standard setup for SVM-HMM from our baseline experiments (Figure FIGREF33 ), where our train-test data splitting is company-independent (i.e. all conversations are randomized, and no information is used to differentiate different companies or domains). To recap, our data consists of conversations from four companies from three different industrial domains (one from the telecommunication domain, two from the electronics domain, and one from the insurance domain). We create four different versions of our 6-class real-time sequential SVM-HMM, where we train on the data from three of the companies, and test on the remaining company. We present our findings in Table TABREF37 . From the table, we see that our real-time model achieves best prediction results when we use one of the electronics companies in the test fold, even though the number of training samples is smallest in these cases. On the other hand, when we assign the insurance company to the test fold, our model's prediction performance is comparatively low. Upon further investigation, we find that customer-agent conversations in the telecommunication and electronics domains are more similar to each other than to those in the insurance domain. Our findings show that our model is robust to different domains as our test set size increases, and that our more generic, company-independent experiment gives us better performance than any domain-specific experiments. Conversation Outcome Analysis Given our observation that Agent turns are more predictable, and that we achieve best performance in a company-independent setting, we question whether the training that agents receive is actually reliable in terms of resulting in overall "satisfied customers", regardless of company domain.
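The company-independent versus domain-specific comparison amounts to a leave-one-company-out split; a sketch using scikit-learn's LeaveOneGroupOut is below. The train_fn and eval_fn callables stand in for fitting and scoring the 6-class model and are assumptions, not part of the described pipeline.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

def leave_one_company_out(X, Y, company_ids, train_fn, eval_fn):
    """Train on three companies and test on the held-out one, for each of the four companies."""
    company_ids = np.asarray(company_ids)
    results = {}
    for train_idx, test_idx in LeaveOneGroupOut().split(X, Y, groups=company_ids):
        held_out_company = company_ids[test_idx][0]
        model = train_fn(X[train_idx], Y[train_idx])
        results[held_out_company] = eval_fn(model, X[test_idx], Y[test_idx])
    return results
```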
Ultimately, our goal is to discover whether we can use the insight we derive from our predicted dialogue acts to better inform conversational systems aimed at offering customer support. Our next set of experiments aims to show the utility of our real-time dialogue act classification as a method for summarizing semantic intent in a conversation into rules that can be used to guide automated systems. Classifying Problem Outcomes We conduct three supervised classification experiments to better understand full conversation outcome, using the default Linear SVC classifier in Scikit-Learn BIBREF31 (which gave us our best baseline for the dialogue classification task). Each classification experiment centers around one of three problem outcomes: customer satisfaction, problem resolution, and customer frustration. For each outcome, we remove any conversation that did not receive majority consensus for a label, or received a majority vote of "can't tell". Our final conversation sets consist of 216 satisfied and 500 unsatisfied customer conversations, 271 resolved and 425 unresolved problem conversations, and 534 frustrated and 229 not frustrated customer conversations. We retain the inherent imbalance in the data to match the natural distribution observed. The clear excess of consensus responses that indicate negative outcomes further motivates us to understand what sorts of dialogic patterns result in such outcomes. We run the experiment for each conversation outcome using 10-fold cross-validation, under each of our four class settings: 6-Class, 8-Class, 10-Class (Easy), and 10-Class (Hard). The first feature set we use is Best_Features (from the original dialogue act classification experiments), which we run as a baseline. Our second feature set is our Dialogue_Acts predictions for each turn – we choose the most probable dialogue act prediction for each turn using our dialogue act classification framework to avoid sparsity. In this way, for each class size INLINEFORM0 , each conversation is converted into a vector of INLINEFORM1 (up to 10) features that describe the most strongly associated dialogue act from the dialogue act classification experiments for each turn, and the corresponding turn number. For example, a conversation feature vector may look as follows: INLINEFORM2 Thus, our classifier can then learn patterns based on these features (for example, that specific acts appearing at the end of a conversation are strong indicators of customer satisfaction) that allow us to derive rules about successful/unsuccessful interactions. Figure FIGREF38 shows the results of our binary classification experiments for each outcome. For each experiment, the Best_Features set is constant over each class size, while the Dialogue_Act features are affected by class size (since the predicted act for each turn will change based on the set of acts available for that class size). Our first observation is that we achieve high performance on the binary classification task, reaching F-measures of 0.70, 0.65, and 0.83 for the satisfaction, resolution, and frustration outcomes, respectively. Also, we observe that the performance of our predicted dialogue act features is comparable to that of the much larger set of best features for each label (almost identical in the case of frustration). In more detail, we note interesting differences comparing the performance of the small set of dialogue act features that "summarize" the large, sparse set of best features for each label, as a form of data-driven feature selection.
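The conversion of a conversation into dialogue-act features for the outcome classifiers can be sketched as below. The exact feature encoding is not fully specified in the text, so indicator features of the form "turn3=Request Info" are one plausible choice, and all names here are illustrative.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def conversation_act_vectors(predicted_acts_per_conversation):
    """Encode each conversation as indicator features of (turn position, most probable act)."""
    feature_dicts = [
        {"turn%d=%s" % (i + 1, act): 1 for i, act in enumerate(acts)}
        for acts in predicted_acts_per_conversation          # each item: up to 10 predicted acts
    ]
    vectorizer = DictVectorizer()
    return vectorizer.fit_transform(feature_dicts), vectorizer

def outcome_classification(predicted_acts_per_conversation, outcome_labels):
    """Binary outcome classification (e.g. satisfied vs. unsatisfied) with 10-fold cross-validation."""
    X, _ = conversation_act_vectors(predicted_acts_per_conversation)
    return cross_val_score(LinearSVC(), X, outcome_labels, cv=10, scoring="f1_weighted").mean()
```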
For satisfaction, we see that the best feature set outperforms the dialogue acts for each class set except for 10-Class (Easy), where the dialogue acts are more effective. The existence of the very lexically well-defined Social Act Thanking and Social Act Apology classes makes the dialogue acts ideal for summarization. In the case of problem resolution, we see that the performance of the dialogue acts approaches that of the best feature set as the number of classes increases, showing that the dialogue features are able to express the full intent of the turns well, even at more difficult class settings. Finally, for the frustration experiment, we observe a negligible difference between the best features and dialogue act features, and very high classification results overall. Actionable Rules for Automated Customer Support While these experiments highlight how we can use dialogue act predictions as a means to greatly reduce feature sparsity and predict conversation outcome, our main aim is to gain good insight from the use of the dialogue acts to inform and automate customer service interactions. We conduct deeper analysis by taking a closer look at the most informative dialogue act features in each experiment. Table TABREF44 shows the most informative features and weights for each of our three conversation outcomes. To help guide our analysis, we divide the features into positions based on where they occur in the conversation: start (turns 1-3), middle (turns 4-6), and end (turns 7-10). Desirable outcomes (customers that are satisfied/not frustrated and resolved problems) are shown in the top rows of the table, and undesirable outcomes (unsatisfied/frustrated customers and unresolved problems) are shown in the bottom rows. Our analysis helps zero in on how the use of certain dialogue acts may be likely to result in different outcomes. The weights we observe vary in the amount of insight provided: for example, offering extra help at the end of a conversation, or thanking the customer, yields more satisfied customers and more resolved problems (with ratios of above 6:1). However, some outcomes are much more subtle: for example, asking yes-no questions early on in a conversation is highly associated with problem resolution (ratio 3:1), but asking them at the end of a conversation has a similarly strong association with unsatisfied customers. Giving elaborate answers that are not a simple affirmative, negative, or response acknowledgement (i.e. Answer (Other)) towards the middle of a conversation leads to satisfied customers that are not frustrated. Conversely, requesting information towards the end of a conversation (implying that more information is still necessary at the termination of the dialogue) leads to unsatisfied and unresolved customers, with ratios of at least 4:1. Using the feature weights we derive from our predicted dialogue acts in our outcome classification experiments, we can thus identify data-driven patterns that offer useful insight into good/bad practices. Our goal is to then use these rules as guidelines, serving as a basis for automated response planning in the customer service domain. For example, these rules can be used to recommend certain dialogue act responses given the position in a conversation, and based on previous turns. This information, derived from correlation with conversation outcomes, gives a valuable addition to conversational flow for intelligent agents, and is more useful than canned responses.
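One way to recover position-based rules like those above is to inspect the learned Linear SVC coefficients over the (turn position, dialogue act) features. The sketch below assumes the feature naming convention from the previous snippet and is illustrative rather than the authors' analysis code.

```python
import numpy as np

def most_informative_features(classifier, feature_names, top_k=15):
    """Rank (turn position, dialogue act) features by coefficient magnitude of a fitted LinearSVC."""
    coefficients = classifier.coef_.ravel()
    ranked = np.argsort(np.abs(coefficients))[::-1][:top_k]
    return [(feature_names[i], float(coefficients[i])) for i in ranked]

def position_bin(feature_name):
    """Map a "turnN=<act>" feature to the start/middle/end bins used in the analysis."""
    turn_number = int(feature_name.split("=")[0][len("turn"):])
    if turn_number <= 3:
        return "start"
    if turn_number <= 6:
        return "middle"
    return "end"
```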
Conclusions In this paper, we explore how we can analyze dialogic trends in customer service conversations on Twitter to offer insight into good/bad practices with respect to conversation outcomes. We design a novel taxonomy of fine-grained dialogue acts, tailored for the customer service domain, and gather annotations for 800 Twitter conversations. We show that dialogue acts are often semantically overlapping, and conduct multi-label supervised learning experiments to predict multiple appropriate dialogue act labels for each turn in real-time, under varying class sizes. We show that our sequential SVM-HMM model outperforms all non-sequential baselines, and plan to continue our exploration of other sequential models including Conditional Random Fields (CRF) BIBREF32 and Long Short-Term Memory (LSTM) BIBREF33 , as well as of dialogue modeling using different Markov Decision Process (MDP) BIBREF34 models such as the Partially-Observed MDP (POMDP) BIBREF35 . We establish that agents are more predictable than customers in terms of the dialogue acts they utilize, and set out to understand whether the conversation strategies agents employ are well-correlated with desirable conversation outcomes. We conduct binary classification experiments to analyze how our predicted dialogue acts can be used to classify conversations as ending in customer satisfaction, customer frustration, and problem resolution. We observe interesting correlations between the dialogue acts agents use and the outcomes, offering insights into good/bad practices that are more useful for creating context-aware automated customer service systems than generating canned response templates. Future directions for this work revolve around the integration of the insights derived in the design of automated customer service systems. To this end, we aim to improve the taxonomy and annotation design by consulting domain-experts and using annotator feedback and agreement information, derive more powerful features for dialogue act prediction, and automate ranking and selection of best-practice rules based on domain requirements for automated customer service system design.
By annotators on Amazon Mechanical Turk.
e26e7e9bcd7e2cea561af596c59b98e823653a4b
e26e7e9bcd7e2cea561af596c59b98e823653a4b_0
Q: Which Twitter customer service industries are investigated? Text: Introduction The need for real-time, efficient, and reliable customer service has grown in recent years. Twitter has emerged as a popular medium for customer service dialogue, allowing customers to make inquiries and receive instant live support in the public domain. In order to provide useful information to customers, agents must first understand the requirements of the conversation, and offer customers the appropriate feedback. While this may be feasible at the level of a single conversation for a human agent, automatic analysis of conversations is essential for data-driven approaches towards the design of automated customer support agents and systems. Analyzing the dialogic structure of a conversation in terms of the "dialogue acts" used, such as statements or questions, can give important meta-information about conversation flow and content, and can be used as a first step to developing automated agents. Traditional dialogue act taxonomies used to label turns in a conversation are very generic, in order to allow for broad coverage of the majority of dialogue acts possible in a conversation BIBREF0 , BIBREF1 , BIBREF2 . However, for the purpose of understanding and analyzing customer service conversations, generic taxonomies fall short. Table TABREF1 shows a sample customer service conversation between a human agent and customer on Twitter, where the customer and agent take alternating "turns" to discuss the problem. As shown from the dialogue acts used at each turn, simply knowing that a turn is a Statement or Request, as is possible with generic taxonomies, is not enough information to allow for automated handling or response to a problem. We need more fine-grained dialogue acts, such as Informative Statement, Complaint, or Request for Information to capture the speaker's intent, and act accordingly. Likewise, turns often include multiple overlapping dialogue acts, such that a multi-label approach to classification is often more informative than a single-label approach. Dialogue act prediction can be used to guide automatic response generation, and to develop diagnostic tools for the fine-tuning of automatic agents. For example, in Table TABREF1 , the customer's first turn (Turn 1) is categorized as a Complaint, Negative Expressive Statement, and Sarcasm, and the agent's response (Turn 2) is tagged as a Request for Information, Yes-No Question, and Apology. Prediction of these dialogue acts in a real-time setting can be leveraged to generate appropriate automated agent responses to similar situations. Additionally, important patterns can emerge from analysis of the fine-grained acts in a dialogue in a post-prediction setting. For example, if an agent does not follow-up with certain actions in response to a customer's question dialogue act, this could be found to be a violation of a best practice pattern. By analyzing large numbers of dialogue act sequences correlated with specific outcomes, various rules can be derived, i.e. "Continuing to request information late in a conversation often leads to customer dissatisfaction." This can then be codified into a best practice pattern rules for automated systems, such as "A request for information act should be issued early in a conversation, followed by an answer, informative statement, or apology towards the end of the conversation." 
In this work, we are motivated to predict the dialogue acts in conversations with the intent of identifying problem spots that can be addressed in real-time, and to allow for post-conversation analysis to derive rules about conversation outcomes indicating successful/unsuccessful interactions, namely, customer satisfaction, customer frustration, and problem resolution. We focus on analysis of the dialogue acts used in customer service conversations as a first step to fully automating the interaction. We address various different challenges: dialogue act annotated data is not available for customer service on Twitter, the task of dialogue act annotation is subjective, existing taxonomies do not capture the fine-grained information we believe is valuable to our task, and tweets, although concise in nature, often consist of overlapping dialogue acts to characterize their full intent. The novelty of our work comes from the development of our fine-grained dialogue act taxonomy and multi-label approach for act prediction, as well as our analysis of the customer service domain on Twitter. Our goal is to offer useful analytics to improve outcome-oriented conversational systems. We first expand upon previous work and generic dialogue act taxonomies, developing a fine-grained set of dialogue acts for customer service, and conducting a systematic user study to identify these acts in a dataset of 800 conversations from four Twitter customer service accounts (i.e. four different companies in the telecommunication, electronics, and insurance industries). We then aim to understand the conversation flow between customers and agents using our taxonomy, so we develop a real-time sequential SVM-HMM model to predict our fine-grained dialogue acts while a conversation is in progress, using a novel multi-label scheme to classify each turn. Finally, using our dialogue act predictions, we classify conversations based on the outcomes of customer satisfaction, frustration, and overall problem resolution, then provide actionable guidelines for the development of automated customer service systems and intelligent agents aimed at desired customer outcomes BIBREF3 , BIBREF4 . We begin with a discussion of related work, followed by an overview of our methodology. Next, we describe our conversation modeling framework, and explain our outcome analysis experiments, to show how we derive useful patterns for designing automated customer service agents. Finally, we present conclusions and directions for future work. Related Work Developing computational speech and dialogue act models has long been a topic of interest BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , with researchers from many different backgrounds studying human conversations and developing theories around conversational analysis and interpretation on intent. Modern intelligent conversational BIBREF3 , BIBREF4 and dialogue systems draw principles from many disciplines, including philosophy, linguistics, computer science, and sociology. In this section, we describe relevant previous work on speech and dialogue act modeling, general conversation modeling on Twitter, and speech and dialogue act modeling of customer service in other data sources. Previous work has explored speech act modeling in different domains (as a predecessor to dialogue act modeling). Zhang et al. present work on recognition of speech acts on Twitter, following up with a study on scalable speech act recognition given the difficulty of obtaining labeled training data BIBREF9 . 
They use a simple taxonomy of four main speech acts (Statement, Question, Suggestion, Comment, and a Miscellaneous category). More recently, Vosoughi et al. develop BIBREF10 a speech act classifier for Twitter, using a modification of the taxonomy defined by Searle in 1975, including six acts they observe to commonly occur on Twitter: Assertion, Recommendation, Expression, Question, Request, again plus a Miscellaneous category. They describe good features for speech act classification and the application of such a system to detect stories on social media BIBREF11 . In this work, we are interested in the dialogic characteristics of Twitter conversations, rather than speech acts in stand-alone tweets. Different dialogue act taxonomies have been developed to characterize conversational acts. Core and Allen present the Dialogue Act Marking in Several Layers (DAMSL), a standard for discourse annotation that was developed in 1997 BIBREF0 . The taxonomy contains a total of 220 tags, divided into four main categories: communicative status, information level, forward-looking function, and backward-looking function. Jurafsky, Shriberg, and Biasca develop a less fine-grained taxonomy of 42 tags based on DAMSL BIBREF1 . Stolcke et al. employ a similar set for general conversation BIBREF2 , citing that "content- and task-related distinctions will always play an important role in effective DA [Dialogue Act] labeling." Many researchers have tackled the task of developing different speech and dialogue act taxonomies and coding schemes BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . For the purposes of our own research, we require a set of dialogue acts that is more closely representative of customer service domain interactions - thus we expand upon previously defined taxonomies and develop a more fine-grained set. Modeling general conversation on Twitter has also been a topic of interest in previous work. Honeycutt and Herring study conversation and collaboration on Twitter using individual tweets containing "@" mentions BIBREF16 . Ritter et al. explore unsupervised modeling of Twitter conversations, using clustering methods on a corpus of 1.3 million Twitter conversations to define a model of transitional flow between turns in a general Twitter dialogue BIBREF17 . While these approaches are relevant to understanding the nature of interactions on Twitter, we find that the customer service domain presents its own interesting characteristics that are worth exploring further. The most related previous work has explored speech and dialogue act modeling in customer service; however, no previous work has focused on Twitter as a data source. In 2005, Ivanovic uses an abridged set of 12 coarse-grained dialogue acts (detailed in the Taxonomy section) to describe interactions between customers and agents in instant messaging chats BIBREF18 , BIBREF19 , leading to a proposal on response suggestion using the proposed dialogue acts BIBREF20 . Follow-up work using the taxonomy selected by Ivanovic comes from Kim et al., where they focus on classifying dialogue acts in both one-on-one and multi-party live instant messaging chats BIBREF21 , BIBREF22 . These works are similar to ours in the nature of the problem addressed, but we use a much more fine-grained taxonomy to define the interactions possible in the customer service domain, and focus on Twitter conversations, which are unique in their brevity and the nature of the public interactions. The most similar work to our own is that of Herzig et al.
on classifying emotions in customer support dialogues on Twitter BIBREF23 . They explore how agent responses should be tailored to the detected emotional response in customers, in order to improve the quality of service agents can provide. Rather than focusing on emotional response, we seek to model the dialogic structure and intents of the speakers using dialogue acts, with emotion included as features in our model, to characterize the emotional intent within each act. Methodology The underlying goal of this work is to show how a well-defined taxonomy of dialogue acts can be used to summarize semantic information in real-time about the flow of a conversation to derive meaningful insights into the success/failure of the interaction, and then to develop actionable rules to be used in automating customer service interactions. We focus on the customer service domain on Twitter, which has not previously been explored in the context of dialogue act classification. In this new domain, we can provide meaningful recommendations about good communicative practices, based on real data. Our methodology pipeline is shown in Figure FIGREF2 . Taxonomy Definition As described in the related work, the taxonomy of 12 acts to classify dialogue acts in an instant-messaging scenario, developed by Ivanovic in 2005, has been used by previous work when approaching the task of dialogue act classification for customer service BIBREF18 , BIBREF20 , BIBREF19 , BIBREF21 , BIBREF22 . The dataset used consisted of eight conversations from chat logs in the MSN Shopping Service (around 550 turns spanning around 4,500 words) BIBREF19 . The conversations were gathered by asking five volunteers to use the platform to ask for help regarding various hypothetical situations (e.g. buying an item for someone) BIBREF19 . The process of selection of tags to develop the taxonomy, beginning with the 42 tags from the DAMSL set BIBREF0 , involved removing tags inappropriate for written text, and collapsing sets of tags into a more coarse-grained label BIBREF18 . The final taxonomy consists of the following 12 dialogue acts (sorted by frequency in the dataset): Statement (36%), Thanking (14.7%), Yes-No Question (13.9%), Response-Acknowledgement (7.2%), Request (5.9%), Open-Question (5.3%), Yes-Answer (5.1%), Conventional-Closing (2.9%), No-Answer (2.5%), Conventional-Opening (2.3%), Expressive (2.3%) and Downplayer (1.9%). For the purposes of our own research, focused on customer service on Twitter, we found that the coarse-grained nature of the taxonomy presented a natural shortcoming in terms of what information could be learned by performing classification at this level. We observe that while having a smaller set of dialogue acts may be helpful for achieving good agreement between annotators (Ivanovic cites kappas of 0.87 between the three expert annotators using this tag set on his data BIBREF18 ), it is unable to offer deeper semantic insight into the specific intent behind each act for many of the categories. For example, the Statement act, which comprises the largest percentage (36% of turns), is an extremely broad category that fails to provide useful information from an analytical perspective. Likewise, the Request category also does not specify any intent behind the act, and leaves much room for improvement.
For this reason, and motivated by previous work seeking to develop dialogue act taxonomies appropriate for different domains BIBREF19 , BIBREF21 , we convert the list of dialogue acts presented by the literature into a hierarchical taxonomy, shown in Figure FIGREF6 . We first organize the taxonomy into six high-level dialogue acts: Greeting, Statement, Request, Question, Answer, and Social Act. Then, we update the taxonomy using two main steps: restructuring and adding additional fine-grained acts. We base our changes upon the taxonomy used by Ivanovic and Kim et al. in their work on instant messaging chat dialogues BIBREF19 , BIBREF21 , but also on general dialogue acts observed in the customer service domain, including complaints and suggestions. Our taxonomy does not make any specific restrictions on which party in the dialogue may perform each act, but we do observe that the frequency of some acts varies greatly (with some acts entirely absent) depending on whether the customer or agent is the speaker (for example, the Statement Complaint category never shows up in Agent turns). In order to account for gaps in available act selections for annotators, we include an Other act in the broadest categories. While our taxonomy fills in many gaps from previous work in our domain, we do not claim to have handled coverage of all possible acts in this domain. Our taxonomy allows us to more closely specify the intent and motivation behind each turn, and ultimately how to address different situations. Data Collection Given our taxonomy of fine-grained dialogue acts that expands upon previous work, we set out to gather annotations for Twitter customer service conversations. For our data collection phase, we begin with conversations from the Twitter customer service pages of four different companies, from the electronics, telecommunications, and insurance industries. We perform several forms of pre-processing on the conversations. We filter out conversations if they contain more than one customer or agent speaker, do not have alternating customer/agent speaking turns (single turn per speaker), have fewer than 5 or more than 10 turns, have fewer than 70 words in total, or if any turn in the conversation ends in an ellipsis followed by a link (indicating that the turn has been cut off due to length, and spans another tweet). Additionally, we remove any references to the company names (substituting with "Agent"), remove any references to customer usernames (substituting with "Customer"), and replace any links or image references with INLINEFORM0 link INLINEFORM1 and INLINEFORM2 img INLINEFORM3 tokens. Using these filters as pre-processing methods, we end up with a set of 800 conversations, spanning 5,327 turns. We conduct our annotation study on Amazon Mechanical Turk, presenting Turkers with Human Intelligence Tasks (henceforth, HITs) consisting of a single conversation between a customer and an agent. In each HIT, we present Turkers with a definition of each dialogue act, as well as a sample annotated dialogue for reference. For each turn in the conversation, we allow Turkers to select as many labels from our taxonomy as required to fully characterize the intent of the turn. Additionally, annotators are asked three conversation-level questions at the end of each conversation HIT (on customer satisfaction, problem resolution, and customer frustration), to which they could respond that they agreed, disagreed, or could not tell. We ask 5 Turkers to annotate each conversation HIT, and pay $0.20 per HIT.
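A rough sketch of the conversation filtering and anonymization steps is shown below; the regular expressions, placeholder tokens, and handle lists are assumptions for illustration, and the single-customer/single-agent check is assumed to have been applied when building the (speaker, text) pairs.

```python
import re

def keep_conversation(turns):
    """Apply the conversation-level filters; `turns` is a list of (speaker, text) pairs."""
    if not 5 <= len(turns) <= 10:
        return False
    if sum(len(text.split()) for _, text in turns) < 70:
        return False
    speakers = [speaker for speaker, _ in turns]
    if any(a == b for a, b in zip(speakers, speakers[1:])):        # speakers must alternate
        return False
    if any(re.search(r"\.\.\.\s*https?://\S+\s*$", text) for _, text in turns):
        return False                                               # turn truncated into another tweet
    return True

def anonymize(text, company_handles=(), customer_handles=()):
    """Replace company/customer mentions and link/image references with placeholder tokens."""
    for handle in company_handles:
        text = text.replace(handle, "Agent")
    for handle in customer_handles:
        text = text.replace(handle, "Customer")
    text = re.sub(r"pic\.twitter\.com/\S+", "<img>", text)
    text = re.sub(r"https?://\S+", "<link>", text)
    return text
```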
We find the list of "majority dialogue acts" for each tweet by finding any acts that have received majority-vote labels (at least 3 out of 5 judgements). It is important to note at this point that we make a deliberate choice as to how we will handle dialogue act tagging for each turn. We note that each turn may contain more than one dialogue act vital to carry its full meaning. Thus, we choose not to carry out a specific segmentation task on our tweets, contrary to previous work BIBREF24 , BIBREF25 , opting to characterize each tweet as a single unit composed of different, often overlapping, dialogue acts. Table TABREF16 shows examples of tweets that receive majority vote on more than one label, where the act boundaries are overlapping and not necessarily distinguishable. It is clear that the lines differentiating these acts are not very well defined, and that segmentation would not necessarily aid in clearly separating out each intent. For these reasons, and due to the overall brevity of tweets in general, we choose to avoid the overhead of requiring annotators to provide segment boundaries, and instead ask for all appropriate dialogue acts. Annotation Results Figure FIGREF17 shows the distribution of the number of times each dialogue act in our taxonomy is selected as a majority act by the annotators (recall that each turn is annotated by 5 annotators). From the distribution, we see that the largest class is Statement Informative, which is part of the majority vote list for 2,152 of the 5,327 total turns, followed by Request Info, which appears in 1,088 of the total turns. Although Statement Informative comprises the largest set of majority labels in the data (as did Statement in Ivanovic's distribution), we do observe that other fine-grained categories of Statement occur in the most frequent labels as well, including Statement Complaint, Statement Expressive Negative, and Statement Suggestion – giving more useful information as to what form of statement is most frequently occurring. We find that 147 tweets receive no majority label (i.e. no single act received 3 or more votes out of 5). At the tail of the distribution, we see less frequent acts, such as Statement Sarcasm, Social Act Downplayer, Statement Promise, Greeting Closing, and Request Other. It is also interesting to note that both opening and closing greetings occur infrequently in the data – which is understandable given the nature of Twitter conversation, where formal greeting is not generally required. Table TABREF19 shows a more detailed summary of the distribution of our top 12 dialogue acts according to the annotation experiments, as presented by Ivanovic BIBREF18 . Since each turn has an overlapping set of labels, the column % of Turns (5,327) represents what fraction of the total 5,327 turns contain that dialogue act label (these values do not sum to 1, since there is overlap). To give a better sense of the percentage appearance of each dialogue act class in terms of the total number of annotated labels given, we also present the column % of Annotations (10,343) (these values are percentages). We measure agreement in our annotations using a few different techniques. Since each item in our annotation experiments allows for multiple labels, we first design an agreement measure that accounts for how frequently each annotator selects the acts that agree with the majority-selected labels for the turns they annotated.
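A minimal sketch of such an agreement measure, together with Fleiss' kappa over the binarized turn-and-label items, is given below. The data structures (a turn represented as a mapping from annotator to the set of acts they selected, and a count matrix of items by categories) are assumptions for illustration, not the authors' code.

```python
from collections import Counter
import numpy as np

def majority_acts(votes, threshold=3):
    """Acts selected by at least `threshold` annotators; `votes` maps annotator id -> set of acts."""
    counts = Counter(act for acts in votes.values() for act in acts)
    return {act for act, count in counts.items() if count >= threshold}

def agreement_ratio(conversations, annotator):
    """SUBS/MAJ: fraction of majority-selected acts that this annotator also selected."""
    maj = subs = 0
    for conversation in conversations:             # each conversation: list of per-turn vote dicts
        for votes in conversation:
            if annotator not in votes:
                continue
            majority = majority_acts(votes)
            maj += len(majority)
            subs += len(majority & votes[annotator])
    return subs / maj if maj else 0.0

def fleiss_kappa(counts):
    """Fleiss' kappa for an (n_items, n_categories) matrix of rating counts (e.g. 5 raters per item)."""
    M = np.asarray(counts, dtype=float)
    raters_per_item = M.sum(axis=1)[0]             # assumed constant across items
    category_proportions = M.sum(axis=0) / M.sum()
    per_item_agreement = ((M ** 2).sum(axis=1) - raters_per_item) / (
        raters_per_item * (raters_per_item - 1))
    expected = (category_proportions ** 2).sum()
    return (per_item_agreement.mean() - expected) / (1 - expected)
```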
four different companies in the telecommunication, electronics, and insurance industries
b24767fe7e6620369063e646fd3048dc645a8348
b24767fe7e6620369063e646fd3048dc645a8348_0
Q: Which dialogue acts are more suited to the twitter domain? Text: Introduction The need for real-time, efficient, and reliable customer service has grown in recent years. Twitter has emerged as a popular medium for customer service dialogue, allowing customers to make inquiries and receive instant live support in the public domain. In order to provide useful information to customers, agents must first understand the requirements of the conversation, and offer customers the appropriate feedback. While this may be feasible at the level of a single conversation for a human agent, automatic analysis of conversations is essential for data-driven approaches towards the design of automated customer support agents and systems. Analyzing the dialogic structure of a conversation in terms of the "dialogue acts" used, such as statements or questions, can give important meta-information about conversation flow and content, and can be used as a first step to developing automated agents. Traditional dialogue act taxonomies used to label turns in a conversation are very generic, in order to allow for broad coverage of the majority of dialogue acts possible in a conversation BIBREF0 , BIBREF1 , BIBREF2 . However, for the purpose of understanding and analyzing customer service conversations, generic taxonomies fall short. Table TABREF1 shows a sample customer service conversation between a human agent and customer on Twitter, where the customer and agent take alternating "turns" to discuss the problem. As shown from the dialogue acts used at each turn, simply knowing that a turn is a Statement or Request, as is possible with generic taxonomies, is not enough information to allow for automated handling or response to a problem. We need more fine-grained dialogue acts, such as Informative Statement, Complaint, or Request for Information to capture the speaker's intent, and act accordingly. Likewise, turns often include multiple overlapping dialogue acts, such that a multi-label approach to classification is often more informative than a single-label approach. Dialogue act prediction can be used to guide automatic response generation, and to develop diagnostic tools for the fine-tuning of automatic agents. For example, in Table TABREF1 , the customer's first turn (Turn 1) is categorized as a Complaint, Negative Expressive Statement, and Sarcasm, and the agent's response (Turn 2) is tagged as a Request for Information, Yes-No Question, and Apology. Prediction of these dialogue acts in a real-time setting can be leveraged to generate appropriate automated agent responses to similar situations. Additionally, important patterns can emerge from analysis of the fine-grained acts in a dialogue in a post-prediction setting. For example, if an agent does not follow-up with certain actions in response to a customer's question dialogue act, this could be found to be a violation of a best practice pattern. By analyzing large numbers of dialogue act sequences correlated with specific outcomes, various rules can be derived, i.e. "Continuing to request information late in a conversation often leads to customer dissatisfaction." This can then be codified into a best practice pattern rules for automated systems, such as "A request for information act should be issued early in a conversation, followed by an answer, informative statement, or apology towards the end of the conversation." 
In this work, we are motivated to predict the dialogue acts in conversations with the intent of identifying problem spots that can be addressed in real-time, and to allow for post-conversation analysis to derive rules about conversation outcomes indicating successful/unsuccessful interactions, namely, customer satisfaction, customer frustration, and problem resolution. We focus on analysis of the dialogue acts used in customer service conversations as a first step to fully automating the interaction. We address various different challenges: dialogue act annotated data is not available for customer service on Twitter, the task of dialogue act annotation is subjective, existing taxonomies do not capture the fine-grained information we believe is valuable to our task, and tweets, although concise in nature, often consist of overlapping dialogue acts to characterize their full intent. The novelty of our work comes from the development of our fine-grained dialogue act taxonomy and multi-label approach for act prediction, as well as our analysis of the customer service domain on Twitter. Our goal is to offer useful analytics to improve outcome-oriented conversational systems. We first expand upon previous work and generic dialogue act taxonomies, developing a fine-grained set of dialogue acts for customer service, and conducting a systematic user study to identify these acts in a dataset of 800 conversations from four Twitter customer service accounts (i.e. four different companies in the telecommunication, electronics, and insurance industries). We then aim to understand the conversation flow between customers and agents using our taxonomy, so we develop a real-time sequential SVM-HMM model to predict our fine-grained dialogue acts while a conversation is in progress, using a novel multi-label scheme to classify each turn. Finally, using our dialogue act predictions, we classify conversations based on the outcomes of customer satisfaction, frustration, and overall problem resolution, then provide actionable guidelines for the development of automated customer service systems and intelligent agents aimed at desired customer outcomes BIBREF3 , BIBREF4 . We begin with a discussion of related work, followed by an overview of our methodology. Next, we describe our conversation modeling framework, and explain our outcome analysis experiments, to show how we derive useful patterns for designing automated customer service agents. Finally, we present conclusions and directions for future work. Related Work Developing computational speech and dialogue act models has long been a topic of interest BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , with researchers from many different backgrounds studying human conversations and developing theories around conversational analysis and interpretation on intent. Modern intelligent conversational BIBREF3 , BIBREF4 and dialogue systems draw principles from many disciplines, including philosophy, linguistics, computer science, and sociology. In this section, we describe relevant previous work on speech and dialogue act modeling, general conversation modeling on Twitter, and speech and dialogue act modeling of customer service in other data sources. Previous work has explored speech act modeling in different domains (as a predecessor to dialogue act modeling). Zhang et al. present work on recognition of speech acts on Twitter, following up with a study on scalable speech act recognition given the difficulty of obtaining labeled training data BIBREF9 . 
They use a simple taxonomy of four main speech acts (Statement, Question, Suggestion, Comment, and a Miscellaneous category). More recently, Vosoughi et al. develop BIBREF10 a speech act classifier for Twitter, using a modification of the taxonomy defined by Searle in 1975, including six acts they observe to commonly occur on Twitter: Assertion, Recommendation Expression, Question, Request, again plus a Miscellaneous category. They describe good features for speech act classification and the application of such a system to detect stories on social media BIBREF11 . In this work, we are interested in the dialogic characteristics of Twitter conversations, rather than speech acts in stand-alone tweets. Different dialogue act taxonomies have been developed to characterize conversational acts. Core and Allen present the Dialogue Act Marking in Several Layers (DAMSL), a standard for discourse annotation that was developed in 1997 BIBREF0 . The taxonomy contains a total of 220 tags, divided into four main categories: communicative status, information level, forward-looking function, and backward-looking function. Jurafsky, Shriberg, and Biasca develop a less fine-grained taxonomy of 42 tags based on DAMSL BIBREF1 . Stolcke et al. employ a similar set for general conversation BIBREF2 , citing that "content- and task-related distinctions will always play an important role in effective DA [Dialogue Act] labeling." Many researchers have tackled the task of developing different speech and dialogue act taxonomies and coding schemes BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . For the purposes of our own research, we require a set of dialogue acts that is more closely representative of customer service domain interactions - thus we expand upon previously defined taxonomies and develop a more fine-grained set. Modeling general conversation on Twitter has also been a topic of interest in previous work. Honeycutt and Herring study conversation and collaboration on Twitter using individual tweets containing "@" mentions BIBREF16 . Ritter et al. explore unsupervised modeling of Twitter conversations, using clustering methods on a corpus of 1.3 million Twitter conversations to define a model of transitional flow between in a general Twitter dialogue BIBREF17 . While these approaches are relevant to understanding the nature of interactions on Twitter, we find that the customer service domain presents its own interesting characteristics that are worth exploring further. The most related previous work has explored speech and dialogue act modeling in customer service, however, no previous work has focused on Twitter as a data source. In 2005, Ivanovic uses an abridged set of 12 course-grained dialogue acts (detailed in the Taxonomy section) to describe interactions between customers and agents in instant messaging chats BIBREF18 , BIBREF19 , leading to a proposal on response suggestion using the proposed dialogue acts BIBREF20 . Follow-up work using the taxonomy selected by Ivanovic comes from Kim et al., where they focus on classifying dialogue acts in both one-on-one and multi-party live instant messaging chats BIBREF21 , BIBREF22 . These works are similar to ours in the nature of the problem addressed, but we use a much more fine-grained taxonomy to define the interactions possible in the customer service domain, and focus on Twitter conversations, which are unique in their brevity and the nature of the public interactions. The most similar work to our own is that of Herzig et al. 
on classifying emotions in customer support dialogues on Twitter BIBREF23 . They explore how agent responses should be tailored to the detected emotional response in customers, in order to improve the quality of service agents can provide. Rather than focusing on emotional response, we seek to model the dialogic structure and intents of the speakers using dialogue acts, with emotion included as features in our model, to characterize the emotional intent within each act. Methodology The underlying goal of this work is to show how a well-defined taxonomy of dialogue acts can be used to summarize semantic information in real-time about the flow of a conversation to derive meaningful insights into the success/failure of the interaction, and then to develop actionable rules to be used in automating customer service interactions. We focus on the customer service domain on Twitter, which has not previously been explored in the context of dialogue act classification. In this new domain, we can provide meaningful recommendations about good communicative practices, based on real data. Our methodology pipeline is shown in Figure FIGREF2 . Taxonomy Definition As described in the related work, the taxonomy of 12 acts to classify dialogue acts in an instant-messaging scenario, developed by Ivanovic in 2005, has been used by previous work when approaching the task of dialogue act classification for customer service BIBREF18 , BIBREF20 , BIBREF19 , BIBREF21 , BIBREF22 . The dataset used consisted of eight conversations from chat logs in the MSN Shopping Service (around 550 turns spanning around 4,500 words) BIBREF19 . The conversations were gathered by asking five volunteers to use the platform to inquire for help regarding various hypothetical situations (i.e. buying an item for someone) BIBREF19 . The process of selection of tags to develop the taxonomy, beginning with the 42 tags from the DAMSL set BIBREF0 , involved removing tags inappropriate for written text, and collapsing sets of tags into a more coarse-grained label BIBREF18 . The final taxonomy consists of the following 12 dialogue acts (sorted by frequency in the dataset): Statement (36%), Thanking (14.7%), Yes-No Question (13.9%), Response-Acknowledgement (7.2%), Request (5.9%), Open-Question (5.3%), Yes-Answer (5.1%), Conventional-Closing (2.9%), No-Answer (2.5%), Conventional-Opening (2.3%), Expressive (2.3%) and Downplayer (1.9%). For the purposes of our own research, focused on customer service on Twitter, we found that the course-grained nature of the taxonomy presented a natural shortcoming in terms of what information could be learned by performing classification at this level. We observe that while having a smaller set of dialogue acts may be helpful for achieving good agreement between annotators (Ivanovic cites kappas of 0.87 between the three expert annotators using this tag set on his data BIBREF18 ), it is unable to offer deeper semantic insight into the specific intent behind each act for many of the categories. For example, the Statement act, which comprises the largest percentage (36% of turns), is an extremely broad category that fails to provide useful information from an analytical perspective. Likewise, the Request category also does not specify any intent behind the act, and leaves much room for improvement. 
For this reason, and motivated by previous work seeking to develop dialogue act taxonomies appropriate for different domains BIBREF19 , BIBREF21 , we convert the list of dialogue acts presented by the literature into a hierarchical taxonomy, shown in Figure FIGREF6 . We first organize the taxonomy into six high-level dialogue acts: Greeting, Statement, Request, Question, Answer, and Social Act. Then, we update the taxonomy using two main steps: restructuring and adding additional fine-grained acts. We base our changes upon the taxonomy used by Ivanovic and Kim et al. in their work on instant messaging chat dialogues BIBREF19 , BIBREF21 , but also on general dialogue acts observed in the customer service domain, including complaints and suggestions. Our taxonomy does not make any specific restrictions on which party in the dialogue may perform each act, but we do observe that some acts are far more frequent (and sometimes non-existent) in usage, depending on whether the customer or agent is the speaker (for example, the Statement Complaint category never shows up in Agent turns). In order to account for gaps in available act selections for annotators, we include an Other act in the broadest categories. While our taxonomy fills in many gaps from previous work in our domain, we do not claim to have handled coverage of all possible acts in this domain. Our taxonomy allows us to more closely specify the intent and motivation behind each turn, and ultimately how to address different situations. Data Collection Given our taxonomy of fine-grained dialogue acts that expands upon previous work, we set out to gather annotations for Twitter customer service conversations. For our data collection phase, we begin with conversations from the Twitter customer service pages of four different companies, from the electronics, telecommunications, and insurance industries. We perform several forms of pre-processing to the conversations. We filter out conversations if they contain more than one customer or agent speaker, do not have alternating customer/agent speaking turns (single turn per speaker), have less than 5 or more than 10 turns, have less than 70 words in total, and if any turn in the conversation ends in an ellipses followed by a link (indicating that the turn has been cut off due to length, and spans another tweet). Additionally, we remove any references to the company names (substituting with "Agent"), any references to customer usernames (substituting with "Customer"), and replacing and links or image references with INLINEFORM0 link INLINEFORM1 and INLINEFORM2 img INLINEFORM3 tokens. Using these filters as pre-processing methods, we end up with a set of 800 conversations, spanning 5,327 turns. We conduct our annotation study on Amazon Mechanical Turk, presenting Turkers with Human Intelligence Tasks (henceforth, HITs) consisting of a single conversation between a customer and an agent. In each HIT, we present Turkers with a definition of each dialogue act, as well as a sample annotated dialogue for reference. For each turn in the conversation, we allow Turkers to select as many labels from our taxonomy as required to fully characterize the intent of the turn. Additionally, annotators are asked three questions at the end of each conversation HIT, to which they could respond that they agreed, disagreed, or could not tell: We ask 5 Turkers to annotate each conversation HIT, and pay $0.20 per HIT. 
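The filtering and anonymization steps described above can be summarized in a short sketch like the one below; this is an illustration under assumed data structures (a conversation as a list of (speaker, text) tuples and a set of lowercased company handles), not the authors' pipeline, and the <link>/<img> strings stand in for the special tokens mentioned in the text.

```python
import re

MIN_TURNS, MAX_TURNS, MIN_WORDS = 5, 10, 70

def speakers_alternate(convo):
    # True when no speaker takes two consecutive turns.
    return all(a[0] != b[0] for a, b in zip(convo, convo[1:]))

def keep_conversation(convo):
    total_words = sum(len(text.split()) for _, text in convo)
    truncated = any(re.search(r"(\.\.\.|…)\s*https?://\S+\s*$", text)
                    for _, text in convo)   # ellipsis followed by a link
    return (MIN_TURNS <= len(convo) <= MAX_TURNS
            and total_words >= MIN_WORDS
            and speakers_alternate(convo)
            and not truncated)

def anonymize(text, company_handles):
    text = re.sub(r"pic\.twitter\.com/\S+", "<img>", text)   # stand-in image token
    text = re.sub(r"https?://\S+", "<link>", text)           # stand-in link token
    def swap(match):
        handle = match.group(0).lower()
        return "Agent" if handle in company_handles else "Customer"
    return re.sub(r"@\w+", swap, text)   # company mentions -> Agent, others -> Customer
```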
We find the list of "majority dialogue acts" for each tweet by finding any acts that have received majority-vote labels (at least 3 out of 5 judgements). It is important to note at this point that we make an important choice as to how we will handle dialogue act tagging for each turn. We note that each turn may contain more than one dialogue act vital to carry its full meaning. Thus, we choose not to carry out a specific segmentation task on our tweets, contrary to previous work BIBREF24 , BIBREF25 , opting to characterize each tweet as a single unit composed of different, often overlapping, dialogue acts. Table TABREF16 shows examples of tweets that receive majority vote on more than one label, where the act boundaries are overlapping and not necessarily distinguishable. It is clear that the lines differentiating these acts are not very well defined, and that segmentation would not necessarily aid in clearly separating out each intent. For these reasons, and due to the overall brevity of tweets in general, we choose to avoid the overhead of requiring annotators to provide segment boundaries, and instead ask for all appropriate dialogue acts. Annotation Results Figure FIGREF17 shows the distribution of the number of times each dialogue act in our taxonomy is selected a majority act by the annotators (recall that each turn is annotated by 5 annotators). From the distribution, we see that the largest class is Statement Info which is part of the majority vote list for 2,152 of the 5,327 total turns, followed by Request Info, which appears in 1,088 of the total turns. Although Statement Informative comprises the largest set of majority labels in the data (as did Statement in Ivanovic's distribution), we do observe that other fine-grained categories of Statement occur in the most frequent labels as well, including Statement Complaint, Statement Expressive Negative, and Statement Suggestion – giving more useful information as to what form of statement is most frequently occurring. We find that 147 tweets receive no majority label (i.e. no single act received 3 or more votes out of 5). At the tail of the distribution, we see less frequent acts, such as Statement Sarcasm, Social Act Downplayer, Statement Promise, Greeting Closing, and Request Other. It is also interesting to note that both opening and closing greetings occur infrequently in the data – which is understandable given the nature of Twitter conversation, where formal greeting is not generally required. Table TABREF19 shows a more detailed summary of the distribution of our top 12 dialogue acts according to the annotation experiments, as presented by Ivanovic BIBREF18 . Since each turn has an overlapping set of labels, the column % of Turns (5,327) represents what fraction of the total 5,327 turns contain that dialogue act label (these values do not sum to 1, since there is overlap). To give a better sense of the percentage appearance of each dialogue act class in terms of the total number of annotated labels given, we also present column % of Annotations (10,343) (these values are percentages). We measure agreement in our annotations using a few different techniques. Since each item in our annotation experiments allows for multiple labels, we first design an agreement measure that accounts for how frequently each annotator selects the acts that agree with the majority-selected labels for the turns they annotated. 
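A small sketch of the majority-vote aggregation described above: an act is kept for a turn when at least 3 of the 5 annotators selected it. The per-annotator label sets below are invented for illustration.

```python
from collections import Counter

def majority_acts(annotations, min_votes=3):
    """annotations: one set of act labels per annotator for a single turn."""
    votes = Counter(act for labels in annotations for act in labels)
    return {act for act, n in votes.items() if n >= min_votes}

turn_annotations = [
    {"Complaint", "StatementExpressiveNegative"},
    {"Complaint"},
    {"Complaint", "Sarcasm"},
    {"StatementInfo"},
    {"Complaint", "StatementExpressiveNegative"},
]
print(majority_acts(turn_annotations))  # {'Complaint'} -- only act with >= 3 votes
```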
To calculate this for each annotator, we find the number of majority-selected acts for each conversation they annotated (call this MAJ), and the size of the subset of those acts that they selected (call this SUBS), and find the ratio (SUBS/MAJ). We use this ratio to systematically fine-tune our set of annotators by running our annotation in four batches, restricting our pool of annotators to those that have above a 0.60 ratio of agreement with the majority from the previous batch, as a sort of quality assurance test. We also measure Fleiss' Kappa BIBREF26 agreement between annotators in two ways: first by normalizing our annotation results into binary-valued items indicating annotators' votes for each label contained within each turn. We find an average Fleiss- INLINEFORM0 for the full dataset, including all turn-and-label items, representing moderate agreement on the 24-label problem. We also calculate the Fleiss- INLINEFORM0 values for each label, and use the categories defined by Landis and Koch to bin our speech acts based on agreement BIBREF27 . As shown in Table TABREF18 , we find that the per-label agreement varies from "almost perfect" agreement of INLINEFORM1 for lexically defined categories such as Apology and Thanks, to only slight agreement of INLINEFORM2 for less clearly-defined categories, such as Statement (Other), Answer Response Acknowledgement and Request (Other). For the conversation-level questions, we calculate the agreement across the "Agree" label for all annotators, finding an average Fleiss- INLINEFORM3 , with question-level results of INLINEFORM4 for customer satisfaction, INLINEFORM5 for problem resolution, and INLINEFORM6 for customer frustration. These results suggest room for improvement in the further development of the taxonomy, to address problem areas for annotators and remedy areas of lower agreement. 
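The per-annotator agreement ratio described above (SUBS/MAJ, with a 0.60 cut-off between batches) can be expressed in a few lines; the data layout below, one set of act labels per turn, is an assumption made for illustration.

```python
def annotator_ratio(annotator_labels, majority_labels):
    """Both arguments: list (over turns) of sets of act labels."""
    maj = sum(len(m) for m in majority_labels)
    subs = sum(len(a & m) for a, m in zip(annotator_labels, majority_labels))
    return subs / maj if maj else 0.0

# Annotators below a 0.60 ratio could be excluded from the next batch.
ratio = annotator_ratio(
    [{"Complaint"}, {"Apology", "RequestInfo"}],   # this annotator's picks
    [{"Complaint", "Sarcasm"}, {"Apology"}],       # majority-selected acts
)
print(round(ratio, 2))  # 2 of the 3 majority acts selected -> 0.67
```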
Features The following list describes the set of features used for our dialogue act classification tasks: Word/Punctuation: binary bag-of-word unigrams, binary existence of a question mark, and binary existence of an exclamation mark in a turn; Temporal: response time of a turn (time in seconds elapsed between the posting time of the previous turn and that of the current turn); Second-Person Reference: existence of an explicit second-person reference in the turn (you, your, you're); Emotion: count of words in each of the 8 emotion classes from the NRC emotion lexicon BIBREF28 (anger, anticipation, disgust, fear, joy, sadness, surprise, and trust), plus its positive and negative sentiment classes; Dialogue: lexical indicators in the turn: opening greetings (hi, hello, greetings, etc), closing greetings (bye, goodbye), yes-no questions (turns with questions starting with do, did, can, could, etc), wh- questions (turns with questions starting with who, what, where, etc), thanking (thank*), apology (sorry, apolog*), yes-answer, and no-answer. Classes Table TABREF30 shows the division of classes we use for each of our experiments. We select our classes using the distribution of annotations we observe in our data collection phase (see Table TABREF19 ), selecting the top 12 classes as candidates. While iteratively selecting the most frequently-occurring classes helps to ensure that classes with the most data are represented in our experiments, it also introduces the problem of including classes that are very well-defined lexically, and may not require learning for classification, such as Social Act Apology and Social Act Thanking in the first 10-Class set. For this reason, we call this set 10-Class (Easy), and also experiment using a 10-Class (Hard) set, where we add in the next two less-defined and more semantically rich labels, such as Statement Offer and Question Open. When using each set of classes, a turn is either classified as one of the classes in the set, or it is classified as "other" (i.e. any of the other classes). We discuss our experiments in more detail and comment on performance differences in the experiment section. Experiments Following previous work on conversation modeling BIBREF23 , we use a sequential SVM-HMM (using the INLINEFORM0 toolkit BIBREF29 ) for our conversation modeling experiments. We hypothesize that a sequential model is best suited to our dialogic data, and that we will be able to concisely capture conversational attributes such as the order in which dialogue acts often occur (i.e. an Answer act following a Question act, or Apology acts after Complaints). We note that with default settings for a sequence of length INLINEFORM0 , an SVM-HMM model will be able to refine its answers for any turn INLINEFORM1 as information becomes available for turns INLINEFORM2 . However, we opt to design our classifier under a real-time setting, where turn-by-turn classification is required without future knowledge or adaptation of prediction at any given stage. In our setup, turns are predicted in a real-time setting to fairly model the conversation available to an intelligent agent in a conversational system. At any point, a turn INLINEFORM3 is predicted using information from turns INLINEFORM4 , and a prediction is not changed when new information becomes available. We test our hypothesis by comparing our real-time sequential SVM-HMM model to non-sequential baselines from the NLTK BIBREF30 and Scikit-Learn BIBREF31 toolkits. 
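A condensed, illustrative per-turn feature extractor following the list above is sketched below; the emotion-lexicon lookup (a dict from word to a set of emotion/sentiment labels) and the cue lists are simplified placeholders rather than the exact resources used in the paper.

```python
import re

SECOND_PERSON = re.compile(r"\b(you|your|you're)\b", re.I)
CUES = {
    "greeting_open": ("hi", "hello", "greetings"),
    "greeting_close": ("bye", "goodbye"),
    "thanking": ("thank",),
    "apology": ("sorry", "apolog"),
}

def turn_features(text, response_time_s, emotion_lexicon):
    toks = text.lower().split()
    feats = {f"w={t}": 1 for t in set(toks)}                  # binary unigrams
    feats["has_qmark"] = int("?" in text)
    feats["has_exclam"] = int("!" in text)
    feats["response_time"] = response_time_s                  # seconds since previous turn
    feats["second_person"] = int(bool(SECOND_PERSON.search(text)))
    for emo in ("anger", "anticipation", "disgust", "fear", "joy",
                "sadness", "surprise", "trust", "positive", "negative"):
        feats[f"emo={emo}"] = sum(emo in emotion_lexicon.get(t, ()) for t in toks)
    for name, prefixes in CUES.items():
        feats[f"cue={name}"] = int(any(t.startswith(p) for t in toks for p in prefixes))
    return feats
```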
We use our selected feature set (described above) to be generic enough to apply to both our sequential and non-sequential models, in order to allow us to fairly compare performance. We shuffle and divide our data into 70% for training and development (560 conversations, using 10-fold cross-validation for parameter tuning), and hold out 30% of the data (240 conversations) for test. Motivated by the prevalent overlap of dialogue acts, we conduct our learning experiments using a multi-label setup. For each of the sets of classes, we conduct binary classification task for each label: for each INLINEFORM0 -class classification task, a turn is labeled as either belonging to the current label, or not (i.e. "other"). In this setup, each turn is assigned a binary value for each label (i.e. for the 6-class experiment, each turn receives a value of 0/1 for each indicating whether the classifier predicts it to be relevant to the each of the 6 labels). Thus, for each INLINEFORM1 -class experiment, we end up with INLINEFORM2 binary labels, for example, whether the turn is a Statement Informative or Other, Request Information or Other, etc. We aggregate the INLINEFORM3 binary predictions for each turn, then compare the resultant prediction matrix for all turns to our majority-vote ground-truth labels, where at least 3 out of 5 annotators have selected a label to be true for a given turn. The difficulty of the task increases as the number of classes INLINEFORM4 increases, as there are more classifications done for each turn (i.e., for the 6-class problem, there are 6 classification tasks per turn, while for the 8-class problem, there are 8, etc). Due to the inherent imbalance of label-distribution in the data (shown in Figure FIGREF17 ), we use weighted F-macro to calculate our final scores for each feature set (which finds the average of the metrics for each label, weighted by the number of true instances for that label) BIBREF31 . Our first experiment sets out to compare the use of a non-sequential classification algorithm versus a sequential model for dialogue act classification on our dataset. We experiment with the default Naive Bayes (NB) and Linear SVC algorithms from Scikit-Learn BIBREF31 , comparing with our sequential SVM-HMM model. We test each classifier on each of our four class sets, reporting weighted F-macro for each experiment. Figure FIGREF33 shows the results of the experiments. From this experiment, we observe that our sequential SVM-HMM outperforms each non-sequential baseline, for each of the four class sets. We select the sequential SVM-HMM model for our preferred model for subsequent experiments. We observe that while performance may be expected to drop as the number of classes increases, we instead get a spike in performance for the 10-Class (Easy) setting. This increase occurs due to the addition of the lexically well-defined classes of Statement Apology and Statement Thanks, which are much simpler for our model to predict. Their addition results in a performance boost, comparable to that of the simpler 6-Class problem. When we remove the two well-defined classes and add in the next two broader dialogue act classes of Statement Offer and Question Open (as defined by the 10-Class (Hard) set), we observe a drop in performance, and an overall result comparable to our 8-Class problem. This result is still strong, since the number of classes has increased, but the overall performance does not drop. 
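The multi-label setup described above reduces to one binary classifier per act in the class set, scored with weighted F-macro. The sketch below mirrors that setup using the non-sequential LinearSVC baseline (SVM-HMM is not part of Scikit-Learn); the feature matrices and binary indicator label matrices are assumed inputs.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

def multilabel_binary_eval(X_train, Y_train, X_test, Y_test, act_names):
    """Y_* are NumPy binary indicator matrices of shape (n_turns, n_acts)."""
    preds = np.zeros_like(Y_test)
    for j, act in enumerate(act_names):              # one "act vs. other" task per label
        clf = LinearSVC().fit(X_train, Y_train[:, j])
        preds[:, j] = clf.predict(X_test)
    # "weighted" averages per-label F1 by the number of true instances for that label.
    return f1_score(Y_test, preds, average="weighted", zero_division=0)
```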
We also observe that while NB and LinearSVC have the same performance trend for the smaller number of classes, Linear SVC rapidly improves in performance as the number of classes increases, following the same trend as SVM-HMM. The smallest margin of difference between SVM-HMM and Linear SVC also occurs at the 10-Class (Easy) setting, where the addition of highly-lexical classes makes for a more differentiable set of turns. Our next experiment tests the differences in performance when training and testing our real-time sequential SVM-HMM model using only a single type of speaker's turns (i.e. only Customer or only Agent turns). Figure FIGREF35 shows the relative performance of using only speaker-specific turns, versus our standard results using all turns. We observe that using Customer-only turns gives us lower prediction performance than using both speakers' turns, but that Agent-only turns actually gives us higher performance. Since agents are put through training on how to interact with customers (often using templates), agent behavior is significantly more predictable than customer behavior, and it is easier to predict agent turns even without utilizing any customer turn information (which is more varied, and thus more difficult to predict). We again observe a boost in performance at out 10-Class (Easy) set, due to the inclusion of lexically well-defined classes. Notably, we achieve best performance for the 10-Class (Easy) set using only agent turns, where the use of the Apology and Thanks classes are both prevalent and predictable. In our final experiment, we explore the changes in performance we get by splitting the training and test data based on company domain. We compare this performance with our standard setup for SVM-HMM from our baseline experiments (Figure FIGREF33 ), where our train-test data splitting is company-independent (i.e. all conversations are randomized, and no information is used to differentiate different companies or domains). To recap, our data consists of conversations from four companies from three different industrial domains (one from the telecommunication domain, two from the electronics domain, and one from the insurance domain). We create four different versions of our 6-class real-time sequential SVM-HMM, where we train on the data from three of the companies, and test on the remaining company. We present our findings in Table TABREF37 . From the table, we see that our real-time model achieves best prediction results when we use one of the electronics companies in the test fold, even though the number of training samples is smallest in these cases. On the other hand, when we assign insurance company in the test fold, our model's prediction performance is comparatively low. Upon further investigation, we find that customer-agent conversations in the telecommunication and electronics domains are more similar than those in the insurance domain. Our findings show that our model is robust to different domains as our test set size increases, and that our more generic, company-independent experiment gives us better performance than any domain-specific experiments. Conversation Outcome Analysis Given our observation that Agent turns are more predictable, and that we achieve best performance in a company-independent setting, we question whether the training that agents receive is actually reliable in terms of resulting in overall "satisfied customers", regardless of company domain. 
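The company-dependent comparison above amounts to a leave-one-company-out split; a generic sketch is given below, with the conversation records and the train_and_eval callback left as assumptions.

```python
def leave_one_company_out(conversations, companies, train_and_eval):
    """conversations: list of dicts with at least a "company" key; returns per-company scores."""
    scores = {}
    for held_out in companies:
        train = [c for c in conversations if c["company"] != held_out]
        test = [c for c in conversations if c["company"] == held_out]
        scores[held_out] = train_and_eval(train, test)   # e.g. weighted F-macro
    return scores
```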
Ultimately, our goal is to discover whether we can use the insight we derive from our predicted dialogue acts to better inform conversational systems aimed at offering customer support. Our next set of experiments aims to show the utility of our real-time dialogue act classification as a method for summarizing semantic intent in a conversation into rules that can be used to guide automated systems. Classifying Problem Outcomes We conduct three supervised classification experiments to better understand full conversation outcome, using the default Linear SVC classifier in Scikit-Learn BIBREF31 (which gave us our best baseline for the dialogue classification task). Each classification experiment centers on one of three problem outcomes: customer satisfaction, problem resolution, and customer frustration. For each outcome, we remove any conversation that did not receive majority consensus for a label, or received a majority vote of "can't tell". Our final conversation sets consist of 216 satisfied and 500 unsatisfied customer conversations, 271 resolved and 425 unresolved problem conversations, and 534 frustrated and 229 not frustrated customer conversations. We retain the inherent imbalance in the data to match the natural distribution observed. The clear excess of consensus responses that indicate negative outcomes further motivates us to understand what sorts of dialogic patterns result in such outcomes. We run the experiment for each conversation outcome using 10-fold cross-validation, under each of our four class settings: 6-Class, 8-Class, 10-Class (Easy), and 10-Class (Hard). The first feature set we use is Best_Features (from the original dialogue act classification experiments), which we run as a baseline. Our second feature set is our Dialogue_Acts predictions for each turn – we choose the most probable dialogue act prediction for each turn using our dialogue act classification framework to avoid sparsity. In this way, for each class size INLINEFORM0 , each conversation is converted into a vector of INLINEFORM1 (up to 10) features that describe the most strongly associated dialogue act from the dialogue act classification experiments for each turn, and the corresponding turn number. For example, a conversation feature vector may look as follows: INLINEFORM2 Thus, our classifier can then learn patterns based on these features (for example, that specific acts appearing at the end of a conversation are strong indicators of customer satisfaction) that allow us to derive rules about successful/unsuccessful interactions. Figure FIGREF38 shows the results of our binary classification experiments for each outcome. For each experiment, the Best_Features set is constant over each class size, while the Dialogue_Act features are affected by class size (since the predicted act for each turn will change based on the set of acts available for that class size). Our first observation is that we achieve high performance on the binary classification task, reaching F-measures of 0.70, 0.65, and 0.83 for the satisfaction, resolution, and frustration outcomes, respectively. Also, we observe that the performance of our predicted dialogue act features is comparable to that of the much larger set of best features for each label (almost identical in the case of frustration). In more detail, we note interesting differences when comparing the performance of the small set of dialogue act features that "summarize" the large, sparse set of best features for each label, as a form of data-driven feature selection. 
For satisfaction, we see that the best feature set outperforms the dialogue acts for each class set except for 10-Class (Easy), where the dialogue acts are more effective. The existence of the very lexically well-defined Social Act Thanking and Social Act Apology classes makes the dialogue acts ideal for summarization. In the case of problem resolution, we see that the performance of the dialogue acts approaches that of the best feature set as the number of classes increases, showing that the dialogue features are able to express the full intent of the turns well, even at more difficult class settings. Finally, for the frustration experiment, we observe a negligible difference between the best features and dialogue act features, and very high classification results overall. Actionable Rules for Automated Customer Support While these experiments highlight how we can use dialogue act predictions as a means to greatly reduce feature sparsity and predict conversation outcome, our main aim is to gain good insight from the use of the dialogue acts to inform and automate customer service interactions. We conduct deeper analysis by taking a closer look at the most informative dialogue act features in each experiment. Table TABREF44 shows the most informative features and weights for each of our three conversation outcomes. To help guide our analysis, we divide the features into positions based on where they occur in the conversation: start (turns 1-3), middle (turns 4-6), and end (turns 7-10). Desirable outcomes (customers that are satisfied/not frustrated and resolved problems) are shown at the top rows of the table, and undesirable outcomes (unsatisfied/frustrated customers and unresolved problems) are shown at the bottom rows. Our analysis helps zero in on how the use of certain dialogue acts may be likely to result in different outcomes. The weights we observe vary in the amount of insight provided: for example, offering extra help at the end of a conversation, or thanking the customer, yields more satisfied customers and more resolved problems (with ratios of above 6:1). However, some outcomes are much more subtle: for example, asking yes-no questions early on in a conversation is highly associated with problem resolution (ratio 3:1), but asking them at the end of a conversation has a similarly strong association with unsatisfied customers. Giving elaborate answers that are not a simple affirmative, negative, or response acknowledgement (i.e. Answer (Other)) towards the middle of a conversation leads to satisfied customers that are not frustrated. Likewise, requesting information towards the end of a conversation (implying that more information is still necessary at the termination of the dialogue) leads to unsatisfied customers and unresolved problems, with ratios of at least 4:1. By using the feature weights derived from our predicted dialogue acts in our outcome classification experiments, we can thus derive data-driven patterns that offer useful insight into good/bad practices. Our goal is to then use these rules as guidelines, serving as a basis for automated response planning in the customer service domain. For example, these rules can be used to recommend certain dialogue act responses given the position in a conversation, and based on previous turns. This information, derived from correlation with conversation outcomes, gives a valuable addition to conversational flow for intelligent agents, and is more useful than canned responses. 
Conclusions In this paper, we explore how we can analyze dialogic trends in customer service conversations on Twitter to offer insight into good/bad practices with respect to conversation outcomes. We design a novel taxonomy of fine-grained dialogue acts, tailored for the customer service domain, and gather annotations for 800 Twitter conversations. We show that dialogue acts are often semantically overlapping, and conduct multi-label supervised learning experiments to predict multiple appropriate dialogue act labels for each turn in real-time, under varying class sizes. We show that our sequential SVM-HMM model outperforms all non-sequential baselines, and plan to continue our exploration of other sequential models including Conditional Random Fields (CRF) BIBREF32 and Long Short-Term Memory (LSTM) BIBREF33 , as well as of dialogue modeling using different Markov Decision Process (MDP) BIBREF34 models such as the Partially-Observed MDP (POMDP) BIBREF35 . We establish that agents are more predictable than customers in terms of the dialogue acts they utilize, and set out to understand whether the conversation strategies agents employ are well-correlated with desirable conversation outcomes. We conduct binary classification experiments to analyze how our predicted dialogue acts can be used to classify conversations as ending in customer satisfaction, customer frustration, and problem resolution. We observe interesting correlations between the dialogue acts agents use and the outcomes, offering insights into good/bad practices that are more useful for creating context-aware automated customer service systems than generating canned response templates. Future directions for this work revolve around the integration of the insights derived in the design of automated customer service systems. To this end, we aim to improve the taxonomy and annotation design by consulting domain-experts and using annotator feedback and agreement information, derive more powerful features for dialogue act prediction, and automate ranking and selection of best-practice rules based on domain requirements for automated customer service system design.
overlapping dialogue acts
0a7ac8eccbc286e0ab55bc5949f3f8d2ea2d1a60
0a7ac8eccbc286e0ab55bc5949f3f8d2ea2d1a60_0
Q: How many improvements on the French-German translation benchmark? Text: Introduction Given the data-driven nature of neural machine translation (NMT), the limited source-to-target bilingual sentence pairs have been one of the major obstacles in building competitive NMT systems. Recently, pseudo parallel data, which refer to the synthetic bilingual sentence pairs automatically generated by existing translation models, have reported promising results with regard to the data scarcity in NMT. Many studies have found that the pseudo parallel data combined with the real bilingual parallel corpus significantly enhance the quality of NMT models BIBREF0 , BIBREF1 , BIBREF2 . In addition, synthesized parallel data have played vital roles in many NMT problems such as domain adaptation BIBREF0 , zero-resource NMT BIBREF3 , and the rare word problem BIBREF4 . Inspired by their efficacy, we attempt to train NMT models using only synthetic parallel data. To the best of our knowledge, building NMT systems with only pseudo parallel data has yet to be studied. Through our research, we explore the availability of synthetic parallel data as an effective alternative to the real-world parallel corpus. The active usage of synthetic data in NMT particularly has its significance in low-resource environments where the ground truth parallel corpora are very limited or not established. Even in recent approaches such as zero-shot NMT BIBREF5 and pivot-based NMT BIBREF6 , where direct source-to-target bilingual data are not required, the direct parallel corpus brings substantial improvements in translation quality where the pseudo parallel data can also be employed. Previously suggested synthetic data, however, have several drawbacks to be a reliable alternative to the real parallel corpus. As illustrated in Figure 1 , existing pseudo parallel corpora can be classified into two groups: source-originated and target-originated. The common property between them is that ground truth examples exist only on a single side (source or target) of pseudo sentence pairs, while the other side is composed of synthetic sentences only. The bias of synthetic examples in sentence pairs, however, may lead to the imbalance of the quality of learned NMT models when the given pseudo parallel corpus is exploited in bidirectional translation tasks (e.g., French $\rightarrow $ German and German $\rightarrow $ French). In addition, the reliability of the synthetic parallel data is heavily influenced by a single translation model where the synthetic examples originate. Low-quality synthetic sentences generated by the translation model would prevent NMT models from learning solid parameters. To overcome these shortcomings, we propose a novel synthetic parallel corpus called PSEUDOmix. In contrast to previous works, PSEUDOmix includes both synthetic and real sentences on either side of sentence pairs. In practice, it can be readily built by mixing source- and target-originated pseudo parallel corpora for a given translation task. Experiments on several language pairs demonstrate that the proposed PSEUDOmix shows useful properties that make it a reliable candidate for real-world parallel data. In detail, we make the following contributions: Neural Machine Translation Given a source sentence $x = (x_1, \ldots , x_m)$ and its corresponding target sentence $y= (y_1, \ldots , y_n)$ , the NMT aims to model the conditional probability $p(y|x)$ with a single large neural network. 
To parameterize the conditional distribution, recent studies on NMT employ the encoder-decoder architecture BIBREF7 , BIBREF8 , BIBREF9 . Thereafter, the attention mechanism BIBREF10 , BIBREF11 has been introduced and successfully addressed the quality degradation of NMT when dealing with long input sentences BIBREF12 . In this study, we use the attentional NMT architecture proposed by Bahdanau et al. bahdanau2014neural. In their work, the encoder, which is a bidirectional recurrent neural network, reads the source sentence and generates a sequence of source representations $\bf {h} =(\bf {h_1}, \ldots , \bf {h_m}) $ . The decoder, which is another recurrent neural network, produces the target sentence one symbol at a time. The log conditional probability thus can be decomposed as follows: $$\log p(y|x) = \sum _{t=1}^{n} \log p(y_t|y_{<t}, x)$$ (Eq. 3) where $y_{<t}$ = ( $y_1, \ldots , y_{t-1}$ ). As described in Equation (2), the conditional distribution of $p(y_t|y_{<t}, x)$ is modeled as a function of the previously predicted output $y_{t-1}$ , the hidden state of the decoder $s_t$ , and the context vector $c_t$ . $$p(y_t|y_{<t}, x) \propto \exp \lbrace g(y_{t-1}, s_t, c_t)\rbrace $$ (Eq. 4) The context vector $c_t$ is used to determine the relevant part of the source sentence to predict $y_t$ . It is computed as the weighted sum of source representations $\bf {h_1}, \ldots , \bf {h_m}$ . Each weight $\alpha _{ti}$ for $\bf {h_i}$ implies the probability of the target symbol $y_t$ being aligned to the source symbol $x_i$ : $$c_t = \sum _{i=1}^{m} \alpha _{ti} \bf {h_i}$$ (Eq. 5) Given a sentence-aligned parallel corpus of size $N$ , the entire parameter $\theta $ of the NMT model is jointly trained to maximize the conditional probabilities of all sentence pairs ${ \lbrace (x^n, y^n)\rbrace }_{ n=1 }^{ N }$ : $$\theta ^* = \underset{\theta }{\arg \!\max } \sum _{n=1}^{N} \log p(y^{n}|x^{n})$$ (Eq. 6) where $\theta ^*$ is the optimal parameter. Related Work In statistical machine translation (SMT), synthetic bilingual data have been primarily proposed as a means to exploit monolingual corpora. By applying a self-training scheme, the pseudo parallel data were obtained by automatically translating the source-side monolingual corpora BIBREF13 , BIBREF14 . In a similar but reverse way, the target-side monolingual corpora were also employed to build the synthetic parallel data BIBREF15 , BIBREF16 . The primary goal of these works was to adapt trained SMT models to other domains using relatively abundant in-domain monolingual data. Inspired by the successful application in SMT, there have been efforts to exploit synthetic parallel data in improving NMT systems. Source-side BIBREF1 , target-side BIBREF0 and both sides BIBREF2 of the monolingual data have been used to build synthetic parallel corpora. In their work, the pseudo parallel data combined with a real training corpus significantly enhanced the translation quality of NMT. In Sennrich et al., sennrich2015improving, domain adaptation of NMT was achieved by fine-tuning trained NMT models using a synthetic parallel corpus. Firat et al. firat2016zero attempted to build NMT systems without any direct source-to-target parallel corpus. In their work, the pseudo parallel corpus was employed in fine-tuning the target-specific attention mechanism of trained multi-way multilingual NMT BIBREF17 models, which enabled zero-resource NMT between the source and target languages. 
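The attention computation in the equations above can be made concrete with a toy NumPy sketch: score each source representation against the previous decoder state, normalize into alignment weights, and take the weighted sum as the context vector. The additive scoring function follows the Bahdanau-style formulation, but the exact parameterization here is illustrative, not the paper's implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(s_prev, H, W_a, U_a, v_a):
    """s_prev: (d,) previous decoder state; H: (m, d) source representations h_1..h_m."""
    scores = np.array([v_a @ np.tanh(W_a @ s_prev + U_a @ h_i) for h_i in H])
    alpha = softmax(scores)     # alpha_t1..alpha_tm: a distribution over source symbols
    return alpha @ H, alpha     # c_t = sum_i alpha_ti * h_i

d, m = 4, 6
rng = np.random.default_rng(0)
H = rng.normal(size=(m, d))
c_t, alpha = attention_context(rng.normal(size=d), H,
                               rng.normal(size=(d, d)), rng.normal(size=(d, d)),
                               rng.normal(size=d))
print(c_t.shape, round(alpha.sum(), 6))  # (4,) 1.0 -- weights sum to one
```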
Lastly, synthetic sentence pairs have been utilized to enrich the training examples having rare or unknown translation lexicons BIBREF4 . Motivation As described in the previous section, synthetic parallel data have been widely used to boost the performance of NMT. In this work, we further extend their application by training NMT with only synthetic data. In certain language pairs or domains where the source-to-target real parallel corpora are very rare or even unprepared, the model trained with synthetic parallel data can function as an effective baseline model. Once the additional ground truth parallel corpus is established, the trained model can be improved by retraining or fine-tuning using the real parallel data. Limits of the Previous Approaches For a given translation task, we classify the existing pseudo parallel data into the following groups: Source-originated: The source sentences are from a real corpus, and the associated target sentences are synthetic. The corpus can be formed by automatically translating a source-side monolingual corpus into the target language BIBREF4 , BIBREF1 . It can also be built from source-pivot bilingual data by introducing a pivot language. In this case, a pivot-to-target translation model is employed to translate the pivot language corpus into the target language. The generated target sentences paired with the original source sentences form a pseudo parallel corpus. Target-originated: The target sentences are from a real corpus, and the associated source sentences are synthetic. The corpus can be formed by back-translating a target-side monolingual corpus into the source language BIBREF0 . Similar to the source-originated case, it can be built from a pivot-target bilingual corpus using a pivot-to-source translation model BIBREF3 . The process of building each synthetic parallel corpus is illustrated in Figure 1 . As shown in Figure 1 , the previous studies on pseudo parallel data share a common property: synthetic and ground truth sentences are biased on a single side of sentence pairs. In such a case where the synthetic parallel data are the only or major resource used to train NMT, this may severely limit the availability of the given pseudo parallel corpus. For instance, as will be demonstrated in our experiments, synthetic data showing relatively high quality in one translation task (e.g., French $\rightarrow $ German) can produce poor results in the translation task of the reverse direction (German $\rightarrow $ French). Another drawback of employing synthetic parallel data in training NMT is that the capacity of the synthetic parallel corpus is inherently influenced by the mother translation model from which the synthetic sentences originate. Depending on the quality of the mother model, ill-formed or inaccurate synthetic examples could be generated, which would negatively affect the reliability of the resultant synthetic parallel data. In the previous study, Zhang and Zong zhang2016exploiting bypassed this issue by freezing the decoder parameters while training with the minibatches of pseudo bilingual pairs made from a source language monolingual corpus. This scheme, however, cannot be applied to our scenario as the decoder network will remain untrained during the entire training process. Proposed Mixing Approach To overcome the limitations of the previously suggested pseudo parallel data, we propose a new type of synthetic parallel corpus called PSEUDOmix. 
Our approach is quite straightforward: For a given translation task, we first build both source-originated and target-originated pseudo parallel data. PSEUDOmix can then be readily built by mixing them together. The overall process of building PSEUDOmix for the French $\rightarrow $ German translation task is illustrated in Figure 1 . By mixing source- and target-originated pseudo parallel data, the resultant corpus includes both real and synthetic examples on either side of sentence pairs, which is the most evident feature of PSEUDOmix. Through the mixing approach, we attempt to lower the overall discrepancy in the quality of the source and target examples of synthetic sentence pairs, thus enhancing the reliability as a parallel resource. In the following section, we evaluate the actual benefits of the mixed composition in the synthetic parallel data. Experiments: Effects of Mixing Real and Synthetic Sentences In this section, we analyze the effects of the mixed composition in the synthetic parallel data. Mixing pseudo parallel corpora derived from different sources, however, inevitably brings diversity, which affects the capacity of the resulting corpus. We isolate this factor by building both source- and target-originated synthetic corpora from the identical source-to-target real parallel corpus. Our experiments are performed on French (Fr) $\leftrightarrow $ German (De) translation tasks. Throughout the remaining paper, we use the notation * to denote the synthetic part of the pseudo sentence pairs. Data Preparation By choosing English (En) as the pivot language, we perform pivot alignments for identical English segments on Europarl Fr-En and En-De parallel corpora BIBREF18 , constructing a multi-parallel corpus of Fr-En-De. Then each of the Fr*-De and Fr-De* pseudo parallel corpora is established from the multi-parallel data by applying the pivot language-based translation described in the previous section. For automatic translation, we utilize a pre-trained and publicly released NMT model for En $\rightarrow $ De and train another NMT model for En $\rightarrow $ Fr using the WMT'15 En-Fr parallel corpus BIBREF19 . A beam of size 5 is used to generate synthetic sentences. Lastly, to match the size of the training data, PSEUDOmix is established by randomly sampling half of each Fr*-De and Fr-De* corpus and mixing them together. We use the parallel corpora from the shared translation task of WMT'15 and WMT'16 BIBREF27 . Using the same pivot-based technique as the previous task, Cs-De* and Fr-De* corpora are built from the WMT'15 Cs-En and Fr-En parallel data respectively. For Cs*-De and Fr*-De, WMT'16 En-De parallel data are employed. We again use pre-trained NMT models for En $\rightarrow $ Cs, En $\rightarrow $ De, and En $\rightarrow $ Fr to generate synthetic sentences. A beam of size 1 is used for fast decoding. For the Real Fine-tuning scenario, we use real parallel corpora from the Europarl and News Commentary11 dataset. These direct parallel corpora are obtained from OPUS BIBREF28 . The size of each set of ground truth and synthetic parallel data is presented in Table 5 . Given that the training corpus for widely studied language pairs amounts to several million lines, the Cs-De language pair (0.6M) reasonably represents a low-resource situation. On the other hand, the Fr-De language pair (1.8M) is considered to be relatively resource-rich in our experiments. The details of the preprocessing are identical to those in the previous case. 
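Once the Fr*-De and Fr-De* corpora exist, PSEUDOmix as described above is just a sampled mixture of the two; a minimal sketch is shown below, assuming each corpus is a list of (source, target) sentence pairs.

```python
import random

def build_pseudomix(source_originated, target_originated, seed=1234):
    """Sample half of each pseudo corpus and shuffle them into one mixed corpus."""
    rng = random.Random(seed)
    half_a = rng.sample(source_originated, len(source_originated) // 2)
    half_b = rng.sample(target_originated, len(target_originated) // 2)
    mixed = half_a + half_b
    rng.shuffle(mixed)
    return mixed

# pseudomix = build_pseudomix(fr_star_de_pairs, fr_de_star_pairs)
```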
Data Preprocessing

Each training corpus is tokenized using the tokenization script in Moses BIBREF20. We represent every sentence as a sequence of subword units learned with byte-pair encoding BIBREF21. We remove empty lines and all sentences longer than 50 subword units. For a fair comparison, all cleaned synthetic parallel corpora have equal sizes. A summary of the final parallel corpora is presented in Table 1.

Training and Evaluation

All networks have 1024 hidden units and 500-dimensional embeddings. The vocabulary size is limited to 30K for each language. Each model is trained for 10 epochs using mini-batch stochastic gradient descent with the Adam optimizer BIBREF22. The minibatch size is 80, and the training set is reshuffled between every epoch. The norm of the gradient is clipped so as not to exceed 1.0 BIBREF23. The learning rate is $2 \cdot 10^{-4}$ in every case. We use the newstest 2012 set as the development set and the newstest 2011 and newstest 2013 sets as test sets. At test time, beam search is used to approximately find the most likely translation. We use a beam of size 12 and normalize probabilities by the length of the candidate sentences. The evaluation metric is case-sensitive tokenized BLEU BIBREF24 computed with the multi-bleu.perl script from Moses. For each case, we report the average BLEU of three different models trained from scratch.

For the large-scale application experiments, we use the same experimental settings as in the previous case, except for the Real Fine-tuning scenario. In the fine-tuning step, we use a learning rate of $2 \cdot 10^{-5}$, which produced better results. Embeddings are fixed throughout the fine-tuning steps. For evaluation, we use the same development and test sets as in the previous task.

Results and Analysis

Before choosing the pivot language-based method for data synthesis, we conduct a preliminary experiment comparing pivot-based synthesis and direct back-translation. The model used for direct back-translation was trained on the ground truth Europarl Fr-De data made from the multi-parallel corpus presented in Table 2. On the newstest 2012/2013 sets, the synthetic corpus generated using the pivot approach showed higher BLEU (19.11 / 20.45) than the back-translation counterpart (18.23 / 19.81) when used to train a De $\rightarrow$ Fr NMT model. Although the back-translation method has been effective in many studies BIBREF0, BIBREF25, its applicability becomes restricted in the low-resource cases that are our major concern. This is due to the poor quality of a back-translation model built from a limited source-to-target parallel corpus. Instead, one can exploit abundant pivot-to-target parallel corpora by using a resource-rich language as the pivot. This in turn improves the quality of the baseline translation models used for generating the synthetic corpora.

From Table 2, we find that the bias of the synthetic examples in pseudo parallel corpora leads to imbalanced quality in the bidirectional translation tasks. Given that the source- or target-originated classification of a specific synthetic corpus is reversed depending on the direction of translation, the overall results imply that the target-originated corpus for each translation task outperforms the source-originated data. The preference for target-originated synthetic data over the source-originated counterparts was previously investigated for SMT by Lambert et al. lambert2011investigations.
In NMT, this can be explained by the degradation in quality of the source-originated data, owing to the erroneous target-side language model formed by the synthetic target sentences. In contrast, we observe that PSEUDOmix not only produces balanced results for both the Fr $\rightarrow$ De and De $\rightarrow$ Fr translation tasks but also shows the best or competitive translation quality for each task. We note that mixing the two different synthetic corpora leads to improved BLEU, not merely an intermediate value between the two.

To investigate the cause of the improvement in PSEUDOmix, we build additional target-originated synthetic corpora for each Fr $\leftrightarrow$ De translation task with a beam of size 3. As shown in Table 3, for the De $\rightarrow$ Fr task, the new target-originated corpus (c) shows higher BLEU than the source-originated corpus (b) by itself. The improvement in BLEU, however, occurs only when mixing the source- and target-originated synthetic parallel data (b+d), and not when mixing the two target-originated synthetic corpora (c+d). The same phenomenon is observed in the Fr $\rightarrow$ De case as well. These results suggest that real and synthetic sentences mixed on either side of the sentence pairs enhance the capability of a synthetic parallel corpus. We conjecture that ground truth examples in both the encoder and decoder networks not only compensate for the erroneous language model learned from synthetic sentences but also reinforce patterns of use latent in the pseudo sentences.

We also evaluate the effects of the proposed mixing strategy in phrase-based statistical machine translation BIBREF26. We use Moses BIBREF20 and its baseline configuration for training, with a 5-gram Kneser-Ney model as the language model. Table 4 shows the translation results of the phrase-based statistical machine translation (PBSMT) systems. In all experiments, NMT shows higher BLEU (by 2.44-3.38 points) than the PBSMT setting; we speculate that the deep architecture of NMT provides robustness to noise in the synthetic examples. It is also notable that the proposed PSEUDOmix outperforms the other synthetic corpora in PBSMT. The results clearly show that the benefit of the mixed composition in synthetic sentence pairs extends beyond a specific machine translation framework.

Table 6 shows the results of the Pseudo Only scenario on the Cs $\leftrightarrow$ De and Fr $\leftrightarrow$ De tasks. For the baseline comparison, we also present the translation quality of the NMT models trained with the ground truth Europarl+NC11 parallel corpora (a). In Cs $\leftrightarrow$ De, the Pseudo Only scenario outperforms the real parallel corpus by 3.86-4.43 BLEU points on the newstest 2013 set. Even in the Fr $\leftrightarrow$ De case, where the size of the real parallel corpus is relatively large, the best BLEU of the pseudo parallel corpora is higher than that of the real parallel corpus by 1.3 (Fr $\rightarrow$ De) and 0.49 (De $\rightarrow$ Fr). We list the results on the newstest 2011 and newstest 2012 sets in the appendix. From these results, we conclude that large-scale synthetic parallel data can serve as an effective alternative to real parallel corpora, particularly for low-resource language pairs.

As shown in Table 6, the model learned from the Cs*-De corpus outperforms the model trained with the Cs-De* corpus in every case. This result differs slightly from the previous case, where the target-originated synthetic corpus for each translation task reported better results than the source-originated data.
This arises from the diversity in the sources of the pseudo parallel corpora, which vary in their suitability for the given test set. Table 6 also shows that mixing the Cs*-De corpus with the lower-quality Cs-De* corpus brings improvements: the resulting PSEUDOmix shows the highest BLEU for the bidirectional Cs $\leftrightarrow$ De translation tasks. In addition, PSEUDOmix again shows much more balanced performance in the Fr $\leftrightarrow$ De translations than the other synthetic parallel corpora.

While the mixing strategy compensates for most of the gap between the Fr-De* and Fr*-De corpora (3.01 $\rightarrow$ 0.17) in the De $\rightarrow$ Fr case, the resulting PSEUDOmix still shows lower BLEU than the target-originated Fr-De* corpus. We therefore enhance the quality of the synthetic examples in the source-originated Fr*-De data by further training its mother translation model (En $\rightarrow$ Fr). As illustrated in Figure 2, with the target-originated Fr-De* corpus fixed, the quality of the models trained with the source-originated Fr*-De data and with PSEUDOmix increases in proportion to the quality of the mother model of the Fr*-De corpus. Eventually, PSEUDOmix shows the highest BLEU, outperforming both the Fr*-De and Fr-De* data. The results indicate that the benefit of the proposed mixing approach becomes much more evident when the quality gap between the source- and target-originated synthetic data lies within a certain range.

As presented in Table 6, we observe that fine-tuning with ground truth parallel data brings substantial improvements in the translation quality of all NMT models. Among all fine-tuned models, PSEUDOmix shows the best performance in all experiments. This is particularly encouraging for the De $\rightarrow$ Fr case, where PSEUDOmix reported lower BLEU than the Fr-De* data before fine-tuning. Even in the cases where PSEUDOmix shows results comparable to the other synthetic corpora in the Pseudo Only scenario, it shows larger improvements in translation quality when fine-tuned with the real parallel data. These results clearly demonstrate the strengths of the proposed PSEUDOmix: competitive translation quality on its own and relatively larger potential gains from refinement with ground truth parallel corpora.

In Table 6 (b), we also present the performance of NMT models learned from the ground truth Europarl+NC11 data merged with the target-originated synthetic parallel corpus for each task. This is identical in spirit to the method of Sennrich et al. sennrich2015improving, which employs back-translation for data synthesis. Instead of direct back-translation, we used pivot-based back-translation, as we verified the strength of pivot-based data synthesis in low-resource environments. Although the ground truth data are used only for the refinement, the Real Fine-tuning scheme applied to PSEUDOmix shows better translation quality than the models trained with the merged corpus (b). Even the Real Fine-tuning of the model trained on the target-originated corpus yields results comparable to training with the merged corpus from scratch. The overall results support the efficacy of the proposed two-step method in practical applications: the Pseudo Only step introduces a useful prior on the NMT parameters, and the Real Fine-tuning step reorganizes the pre-trained parameters using in-domain parallel data.
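The two-step scheme summarized above can be written down compactly. The sketch below uses PyTorch purely for illustration (it is not the toolkit used in our experiments), and the toy model and random batches are placeholders; only the training schedule mirrors the settings reported earlier: Adam, gradient-norm clipping at 1.0, a learning rate of $2 \cdot 10^{-4}$ for the Pseudo Only step, and a reduced learning rate of $2 \cdot 10^{-5}$ with frozen embeddings for the Real Fine-tuning step.

import torch
import torch.nn as nn

class ToyNMT(nn.Module):
    # Toy encoder-decoder; the actual models use 1024 hidden units,
    # 500-dimensional embeddings, and 30K subword vocabularies per language.
    def __init__(self, src_vocab=1000, tgt_vocab=1000, dim=64, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.GRU(dim, hidden, batch_first=True)
        self.decoder = nn.GRU(dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt_in):
        _, state = self.encoder(self.src_emb(src))
        dec_out, _ = self.decoder(self.tgt_emb(tgt_in), state)
        return self.out(dec_out)

def train(model, batches, lr, epochs, clip=1.0):
    # Mini-batch training with Adam and gradient-norm clipping at 1.0;
    # frozen parameters are excluded from the optimizer.
    loss_fn = nn.CrossEntropyLoss()
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for src, tgt_in, tgt_out in batches:
            opt.zero_grad()
            logits = model(src, tgt_in)
            loss = loss_fn(logits.reshape(-1, logits.size(-1)), tgt_out.reshape(-1))
            loss.backward()
            torch.nn.utils.clip_grad_norm_(params, clip)
            opt.step()

def random_batches(n, vocab=1000, batch=8, length=10):
    # Placeholder batches; real training reads the (pseudo or real) parallel corpus.
    return [tuple(torch.randint(0, vocab, (batch, length)) for _ in range(3))
            for _ in range(n)]

model = ToyNMT()
# Step 1 (Pseudo Only): train from scratch on the synthetic corpus, lr = 2e-4.
train(model, random_batches(4), lr=2e-4, epochs=2)   # 10 epochs in our setup
# Step 2 (Real Fine-tuning): freeze embeddings, lower the lr to 2e-5, and
# continue training on the ground truth parallel corpus.
model.src_emb.weight.requires_grad_(False)
model.tgt_emb.weight.requires_grad_(False)
train(model, random_batches(4), lr=2e-5, epochs=2)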
Experiments: Large-scale Application

The experiments in the previous section verify the potential of PSEUDOmix as an efficient alternative to real parallel data. The conditions of the previous case, however, are somewhat artificial, as we deliberately matched the sources of all pseudo parallel corpora. In this section, we move on to more practical and large-scale applications of synthetic parallel data. Experiments are conducted on Czech (Cs) $\leftrightarrow$ German (De) and French (Fr) $\leftrightarrow$ German (De) translation tasks.

Application Scenarios

We analyze the efficacy of the proposed mixing approach in the following application scenarios:

Pseudo Only: This setting trains NMT models using only synthetic parallel data, without any ground truth parallel corpus.

Real Fine-tuning: Once the training of an NMT model is completed in the Pseudo Only manner, the model is fine-tuned using only a ground truth parallel corpus.

The suggested scenarios reflect low-resource situations in building NMT systems. In the Real Fine-tuning, we fine-tune the best model of the Pseudo Only scenario as evaluated on the development set.

Conclusion

In this work, we have constructed NMT systems using only synthetic parallel data. For this purpose, we proposed a novel pseudo parallel corpus called PSEUDOmix, in which synthetic and ground truth examples are mixed on either side of the sentence pairs. Experiments show that the proposed PSEUDOmix not only yields improved results for bidirectional translation but also achieves substantial improvements when fine-tuned with ground truth parallel data. Our work is significant in that it provides a thorough investigation of the use of synthetic parallel corpora in low-resource NMT environments. Without any adjustment, the proposed method can also be extended to other learning areas where parallel samples are employed. For future work, we plan to explore robust data sampling methods that would maximize the quality of the mixed synthetic parallel data.